
Google’s AI Overviews Suck Rocks 🪨

Plus, parenting in the AI era, special AI course offer, and top AI headlines!

Welcome to The Upgrade

Welcome to my weekly newsletter, which focuses on the intersection of AI, media, and storytelling. A special welcome to my new readers from Dallas News, Blackstone, Capital One, and many other top organizations—you’re in good company!

In today’s issue:

  • The Week’s Top AI Stories 📰

  • 🎓 20% off the MindStudio Academy: Learn AI Today! ⚡️

  • 🎧Podcast: Parenting in the AI Era with Erin McNeill, founder of Media Literacy Now

  • 🧐 The Big Think: Google’s AI Overviews Suck Rocks 🪨 

The Week’s Top AI Stories

Top AI Headlines

  • The New ChatGPT Offers a Lesson in A.I. Hype — The New York Times

  • Google Pumps Brakes on AI Overviews Search After Telling Us to Eat Glue, Rocks — CNET

  • Viral ‘All Eyes on Rafah’ image inspires wave of AI-generated Instagram posts about Israel-Hamas war — NBC News

  • Lonely Teens Are Making "Friends" With AIs — Futurism

  • Elon Musk raises $6 billion for his AI startup xAI — Fast Company

Ethics, Society, & Safety

  • AI tutors are quietly changing how kids in the US study, and the leading apps are from China — TechCrunch

  • Google's AI Overviews Will Always Be Broken. That's How AI Works — WIRED

  • What a viral, fake image of Rafah tells us about AI propaganda — The Washington Post

  • Google's call-scanning AI could dial up censorship by default, privacy experts warn — TechCrunch

Legal, Policy & Copyright

  • Sure, You’ve Got an AI Policy, But Is Everyone Following It? — Bloomberg Law

  • The Senate’s failure on AI policy leaves legislation up to the states — The Hill

  • How the White House’s Executive Order on AI may impact the law — Fast Company

  • Can we afford to let AI companies ask for forgiveness instead of permission? — Fortune

AI in the Workplace

  • Tech Workers Retool for Artificial-Intelligence Boom — The Wall Street Journal

  • AI career coaches are here. Should you trust them? — The Washington Post

  • How Generative AI Will Change The Jobs Of Artists And Designers — Forbes

  • You don’t have to be a programmer to cash in on artificial intelligence. AI skills in these non-tech professions come with massive wage increases — Fortune

AI Tools

  • Pocket-Sized AI Models Could Unlock a New Era of Computing — WIRED

  • Humane AI Pin is a disaster: Founders already want to sell the company — Ars Technica

  • Anthropic takes a look into the ‘black box’ of AI models — Fast Company

🎓Learn AI with MindStudio Academy! 💻

Ready to learn the fastest way to build no-code AI-powered apps and automations? The Upgrade is partnering with MindStudio to lead the MindStudio Academy! ⚡️

The next cohort starts on June 15th at 10 am ET / 7 am PT. Only a few spots left, so don’t wait!

SAVE 20% with code: THEUPGRADE20

💡 The Big Think: Google’s AI Overviews Suck Rocks 🪨  

Google's recent rollout of AI Overviews has been nothing short of a public relations disaster, revealing serious flaws in the company's approach to integrating generative AI into its core search functions. The feature, designed to provide concise summaries of search results, has instead produced a series of bizarre and inaccurate responses that have gone viral, sparking widespread criticism and concern. This debacle underscores the broader challenges and risks associated with the reckless deployment of AI technologies in critical information services.

Google’s AI Overviews have some flaws…

One of the most egregious examples of AI Overviews' failures was its suggestion to add non-toxic glue to pizza sauce to prevent cheese from sliding off, a recommendation sourced from an old Reddit joke. Similarly, the AI advised users to eat rocks daily, based on a satirical article from The Onion. These instances, while humorous, highlight a deeper issue: the AI's inability to discern credible information from satire or misinformation. As reported by Fast Company, these errors are not just isolated incidents but indicative of a more systemic problem with Google's AI capabilities.

The backlash has been swift and severe. According to The Guardian, Google has been forced to restrict the types of searches that generate AI-written summaries and to limit the inclusion of satirical and humorous content. This reactive approach, however, raises questions about the robustness of Google's testing and quality assurance processes. If such glaring errors could slip through, what other, less obvious inaccuracies might be lurking in the AI's responses?

TechCrunch's analysis points out that Google's AI Overviews are fundamentally flawed because they attempt to automate a task that inherently requires human judgment and context. The AI's reliance on user-generated content from platforms like Reddit further exacerbates the problem, as these sources are often unreliable and contextually inappropriate for generating factual summaries. This issue is compounded by the AI's tendency to present information with unwarranted confidence, misleading users into accepting falsehoods as facts.

The controversy has also sparked a broader debate about the role of AI in information dissemination. As WIRED notes, the integration of AI into search engines like Google fundamentally alters the relationship between information seekers and providers, disrupting the ability to establish and maintain trust in the information ecosystem. This disruption is particularly concerning, given the potential for AI to amplify misinformation and erode public trust in digital information sources.

Despite these challenges, Google remains committed to refining its AI Overviews. As Forbes reported, the company has implemented several technical improvements, including better detection of nonsensical queries and tighter limits on user-generated content in responses. However, these measures may not be enough to restore user confidence. The fundamental issue lies in the AI's design: probabilistic models are prone to generating plausible but incorrect answers.

Google's AI Overviews fiasco highlights the need for more rigorous testing and validation of AI systems, especially those deployed in critical information services. As Google continues to refine its AI capabilities, it must prioritize transparency, accountability, and user trust. Only by addressing these core issues can the company hope to mitigate the risks associated with AI and harness its potential for positive impact. In the meantime, double-check your recipes!

Don’t be shy—hit reply if you have thoughts or feedback. I’d love to connect with you!

Until next week,

Psst… Did someone forward this to you? Subscribe here!

Kris Krüg
Vancouver AI
