Google's New AI Search Feature: The Pizza Glue Incident and the Importance of User Feedback

United States of America
Google's new AI search feature, AI Overviews, generates incorrect responses for some user queries.
One such query resulted in the suggestion to add glue to pizza sauce to prevent cheese from sliding off.
That response is based on an old joke, not a reliable solution.
Google states that most summaries provide high-quality information and that errors like this are rare.
Social media users sharing incorrect responses can help identify and correct issues with AI systems.

Google's new AI search feature, known as AI Overviews, has been generating incorrect and sometimes humorous responses for some user queries. One such query resulted in the suggestion to add glue to pizza sauce to prevent cheese from sliding off. However, this response is based on a decade-old joke from a Reddit thread and is not a reliable solution.

Google's AI Overviews feature summarizes search results and draws information from multiple sources, but its accuracy varies. The company has stated that most summaries provide high-quality information and that errors like the pizza glue advice are rare.

The phenomenon of social media users memeing these failures could actually serve as useful feedback for companies developing AI. By sharing incorrect responses and highlighting their absurdity, users can help identify and correct issues with AI systems.

Google's new feature is part of a larger trend in the tech industry toward using AI to generate summaries or answers for user queries instead of simply providing links to relevant webpages. However, as this incident shows, this approach has pitfalls, and users should be aware that AI-generated responses may not always be accurate.

Google is not the only company experimenting with AI search features. For example, OpenAI's ChatGPT has also been known to generate incorrect or misleading information in some cases. It's crucial for users to approach these tools with a healthy dose of skepticism and fact-check any information they receive before acting on it.

The pizza glue incident is just one example of the challenges and opportunities presented by AI search features. As these technologies continue to evolve, it will be important for companies to prioritize accuracy, transparency, and user feedback in their development.

Confidence

85%

Doubts
  • Are there any other instances of incorrect responses from Google's AI Overviews feature?
  • Is the pizza glue advice a common occurrence or an isolated incident?

Sources

83%

  • Unique Points
    • Google’s new AI search feature provides incorrect and humorous responses, pulling information from comedy blogs and unconventional sources.
    • Social media users have started to meme these failures, which could actually serve as useful feedback for companies developing AI.
    • Google’s AI has given dangerous advice in some cases, such as incorrect information about treating a rattlesnake bite or misidentifying a poisonous mushroom as a common white button mushroom.
    • When a bad AI response goes viral, it can confuse the AI further by feeding it its own mistakes.
  • Accuracy
    • Google's new AI search feature provides incorrect and humorous responses, pulling information from comedy blogs and unconventional sources.
    • Google's AI has given dangerous advice in some cases, such as incorrect information about treating a rattlesnake bite or misidentifying a poisonous mushroom as a common white button mushroom.
  • Deception (30%)
    The article contains selective reporting and sensationalism. The author focuses on the ridiculous AI responses from Google and other companies without providing any context or explanation as to why these errors occurred. The author also implies that these errors are more common than they actually are by stating 'despite the high-profile nature of these flaws, tech companies often downplay their impact.' This is an exaggeration and an attempt to sensationalize the issue. Additionally, the author uses emotional manipulation by implying that AI content deals may be overvalued and that incorrect information from AI can be dangerous.
    • This is an incredible blunder.
    • To Google's credit, a lot of the errors that are circulating on social media come from unconventional searches designed to trip up the AI.
    • Despite the high-profile nature of these flaws, tech companies often downplay their impact.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

78%

  • Unique Points
    • Google’s new AI Overviews feature provides incorrect information for uncommon queries.
    • The pizza glue query response is based on a decade-old joke from a Reddit thread.
  • Accuracy
    • Google's new AI Overviews feature provides incorrect information for uncommon queries.
    • Google promised a better search experience – now it's telling us to put glue on our pizza.
  • Deception (30%)
    The article contains several examples of selective reporting and sensationalism. The author focuses on a few instances where Google's new AI feature provides incorrect or misleading information, implying that these errors are common and representative of the product as a whole, while downplaying Google's own acknowledgment that these instances are 'isolated examples'. By focusing solely on the mistakes and exaggerating their significance, the author creates a sensational narrative that may mislead readers into believing that Google's AI is consistently inaccurate.
    • The feature, while not triggered for every query, scans the web and drums up an AI-generated response. The answer received for the pizza glue query appears to be based on a comment from a user named 'fucksmith' in a more than decade-old Reddit thread, and they're clearly joking.
    • It also claims that former US President James Madison graduated from the University of Wisconsin not once but 21 times, that a dog has played in the NBA, NFL, and NHL, and that Batman is a cop.
  • Fallacies (80%)
    The author commits an appeal to authority fallacy when quoting Google spokesperson Meghann Farnsworth's statement about the mistakes being 'isolated examples' and not representative of most people's experiences. The author also uses inflammatory rhetoric by stating that the search experience seems 'dumber than before'.
    • “But it’s clear these tools aren’t ready to accurately provide information at scale.”
    • “What’s the point, though, if the search seems dumber than before?”
  • Bias (95%)
    The author expresses a critical and skeptical tone towards Google's new AI Overviews feature, implying potential monetary bias as the company is profiting from this technology. The author also uses language that depicts the mistakes made by the AI as extreme or unreasonable, demonstrating ideological bias.
    • But it's clear these tools aren’t ready to accurately provide information at scale.
    • Many idealists believe we are on the brink of something great and that these issues are simply the growing pains of a nascent technology. I sure hope they’re right. But one thing is certain: we’ll likely witness someone putting glue on their pizza soon, because that’s the nature of the internet.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

80%

  • Unique Points
    • Google’s AI Overviews feature suggested a user put glue on pizza to prevent cheese from sliding off.
    • Google started testing the AI Overviews feature in the US and the UK earlier this year and plans to roll it out more widely by the end of 2024.
  • Accuracy
    • Google's AI Overviews feature suggested a user put glue on pizza to prevent cheese from sliding off.
    • Google's new search feature, AI Overviews, generates summaries of search results using AI technology.
  • Deception (30%)
    The article reports on Google's AI Overviews feature suggesting that users put glue on pizza to keep cheese from sliding off. This is an example of selective reporting, as the article only reports this inaccurate response without mentioning that it is a rare occurrence and that most summaries provide high-quality information. The author also uses emotional manipulation by stating 'the pitfalls of using the AI feature to search for information' and 'highlights the pitfalls', which creates a negative tone towards the AI feature.
    • Google's new search feature, AI Overviews, seems to be going awry.
    • The pizza glue advice highlights the pitfalls of using the AI feature to search for information.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

90%

  • Unique Points
    • Google has shifted from providing a list of links to generating written answers for some user searches using AI.
    • Google’s AI Overview summarizes search results and draws information from multiple sources.
    • Section 230 of the 1996 Communications Decency Act protects companies like Google from liability over third-party content, but its application to AI-generated search answers is unclear.
  • Accuracy
    • Google's AI Overview summarizes search results and draws information from multiple sources.
    • Google has an incentive to present its AI-generated answers as authoritative but may include disclaimers or avoid generating responses on controversial topics.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (80%)
    The author makes an appeal to authority by quoting Samir Jain from the Center for Democracy and Technology regarding the application of Section 230 of the Communications Decency Act to AI-generated search answers. The author also uses inflammatory rhetoric when describing some of the potential consequences of AI-generated search answers, such as 'hallucinations' and 'bad information.'
    • 'If you have an AI overview that contains a hallucination, it’s a little difficult to see how that hallucination wouldn’t have at least in part been created or developed by Google,' Jain said.
    • Eating the source code of the internet
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

80%

  • Unique Points
    • Google AI provided pizza advice based on an 11-year-old Reddit comment with 8 upvotes
    • Google AI has previously provided incorrect information, such as false claims about James Webb Space Telescope discoveries
    • Reddit is a popular site that many people add after their search query, leading Google AI to pick up popular but not necessarily accurate answers
    • Marketing bots like ReplyGuy can influence Google AI’s responses by promoting products in Reddit comments
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (30%)
    The author makes editorializing statements and uses emotional manipulation by expressing shock and disbelief at Google AI's reliance on an 11-year-old Reddit comment for pizza advice. The author also uses selective reporting by focusing only on the negative aspects of the situation, implying that this is not the first time AI has failed to deliver accurate information.
    • Internet users were shocked by Google AI’s choice of source when it comes to pizza advice.
    • Those against AI rejoiced: it seems we are still far away from machines taking away all our jobs.
    • This is not the first time AI has failed at delivering truthful, sensible answers.
  • Fallacies (85%)
    The author commits an appeal to ignorance fallacy when she states 'This is not the first time AI has failed at delivering truthful, sensible answers.' She makes this claim without providing evidence that Google's AI has consistently failed in the past. Additionally, there is an example of a dichotomous depiction when the author describes those against AI as 'AI haters'. This oversimplifies and polarizes the issue.
    • This is not the first time AI has failed at delivering truthful, sensible answers.
    • those against AI rejoiced: it seems we are still far away from machines taking away all our jobs.
  • Bias (90%)
    The author expresses a negative opinion towards AI and its ability to provide accurate information. She uses language that depicts the AI as unhinged and unreliable. The author also quotes an 11-year-old comment from Reddit with a small number of upvotes, implying that the comment is not trustworthy or expert advice, yet the AI relied on it to formulate its answer. This demonstrates a bias against AI.
    • The main problem is that Reddit is such a popular place that many people simply add the name of the site after their search query, and this led to Google’s AI picking up whatever was popular, without checking the legitimacy of the answer.
    • This sets an interesting precedent.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication