Google's New AI-Generated Responses Spark Controversy: Inaccuracies and Technical Changes

San Francisco, California, United States of America
Google introduced AI-generated responses called 'AI Overviews' for specific search queries.
These responses are created using Google's AI model Gemini and compile information from various sources.
Controversy arose over inaccuracies and inconsistencies, such as answers advising users to put glue on pizza or stating that Barack Obama was Muslim.
Google made over a dozen technical changes, including pausing some health-related answers and limiting the use of social media postings as sources.
The introduction of AI Overviews marks a significant shift in how information is accessed and presented online.

Google's search engine, long the most important force in shaping the modern internet, recently introduced a new feature: AI-generated responses called 'AI Overviews.' These responses appear for certain search queries above the usual list of blue links, and Google's AI model Gemini compiles information from various sources to create them.

However, the new feature has sparked controversy over inaccuracies and inconsistencies. Users reported AI Overviews that advised putting glue on pizza or stated that Barack Obama was Muslim.

Google responded by making over a dozen technical changes aimed at improving the system. It paused some answers on health-related topics and restricted the kinds of queries that trigger AI Overviews where the feature had proven unhelpful. The company also limited the use of social media postings as sources for these responses.

The introduction of AI Overviews marks a significant shift in how information is accessed and presented online, raising questions about the role of artificial intelligence in shaping our digital landscape. Google is also racing to compete against OpenAI and AI search startups like Perplexity as it refines the feature.



Confidence

85%

Doubts
  • Were all reported inaccuracies verified?
  • What specific technical changes were made and how effective are they?

Sources

97%

  • Unique Points
    • AI Overviews are designed to work alongside traditional search tasks and include relevant links for further exploration.
    • Accuracy is paramount in AI Overviews, so they show information that is backed up by top web results.
    • Before launching, AI Overviews were extensively tested including red-teaming efforts and evaluations with samples of typical user queries.
  • Accuracy
    • Google is constantly making improvements to its search experience, including updates that can help broad sets of queries and new ones that haven’t been seen yet.
    • Google made over a dozen technical changes aimed at improving the system after high-profile errors.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

93%

  • Unique Points
    • Google rolled out AI search results for millions of users last week
    • Google removed some inaccurate AI results but damage was already done
    • Google is fixing AI Overviews by limiting when they appear for nonsensical queries and satire
  • Accuracy
    • AI delivered inaccurate results, including suggesting to put glue on pizza and eat rocks
    • Google blames ‘data voids’ and odd questions for the inaccurate results
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (90%)
    The author makes an appeal to authority by quoting Liz Reid, the head of search at Google. She is quoted explaining the cause of inaccurate AI results and defending their usefulness. The author also uses inflammatory rhetoric by describing some of the AI results as 'weird' and 'nonsensical'. However, no formal or dichotomous fallacies were found.
    • The damage was already done.
    • Reid argues that AI Overviews generally don’t ‘hallucinate’; they just sometimes misinterpret what’s already on the web.
    • Google is racing to compete against OpenAI and AI search startups like Perplexity, which is already worth a rumored $3 billion.
  • Bias (95%)
    The author expresses a negative opinion towards Google's AI search results and the company as a whole. She uses language that depicts the AI results as erroneous and nonsensical, implying that they are unreliable. The author also implies that Google is racing to compete against other companies in the industry but failing to maintain user trust.
    • Google's introduction of AI Overviews has been yet another PR blunder for the company.
    • Google worked quickly to remove some inaccurate AI results, but the damage – and meme-ification – was already done.
    • Part of Reid’s blog also compares AI Overviews to another longstanding Search feature called featured snippets, but she implies that the accuracy rate for AI Overviews is lower than that of featured snippets.
    • Reid argues that AI Overviews generally don’t ‘hallucinate’; they just sometimes misinterpret what’s already on the web.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

81%

  • Unique Points
    • Google is scaling back the use of AI-generated answers in some search results after errors including telling users to put glue on pizza and stating Barack Obama was Muslim.
    • Google made over a dozen technical changes aimed at improving the system after high-profile errors.
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (30%)
    The article contains selective reporting as it only mentions errors made by Google's AI-generated answers without mentioning any potential benefits or context. It also uses emotional manipulation by describing some of the errors as 'concerning'.
    • One answer, which Google has since fixed, told people to drink plenty of urine to help pass a kidney stone.
    • The tech industry is in the throes of an AI revolution, with start-ups and Big Tech giants alike trying to find new ways to put the tech into their products and make money from it.
    • But journalists, search engine experts and social media users quickly began spotting problems with the answers. Some of the responses were funny while others were concerning.
  • Fallacies (90%)
    The author makes several statements in the article that are not fallacious, but there are a few instances of inflammatory rhetoric and an appeal to authority. The author states that 'users and search engine experts began noticing that far fewer queries were triggering an AI answer compared with previous days.' This is a statement of fact based on observations made by users and experts. However, the author also uses inflammatory language when describing some of the errors made by Google's AI, such as telling users to put glue on their pizza and saying Barack Obama was Muslim. These statements are not fallacious in themselves, but they are intended to elicit an emotional response from the reader. The author also appeals to authority when quoting Liz Reid's confirmation that Google is scaling back some of the AI answers and the changes Google has made to improve the system. This is a valid use of authority as it comes directly from a source with expertise in the subject matter.
    • Google said it was scaling down the use of AI-generated answers in some search results, after the tech made high-profile errors including telling users to put glue on their pizza and saying Barack Obama was Muslim.
    • But journalists, search engine experts and social media users quickly began spotting problems with the answers. Some of the responses were funny while others were concerning.
    • Google tried to test the tool as much as it could before the broader rollout, but Reid said the full-scale launch revealed many situations the company hadn’t prepared for.
  • Bias (90%)
    The author expresses a negative opinion towards Google's AI-generated answers in search results and implies that they are error-prone and potentially harmful. This is an example of bias against a specific company.
    • Google said it was scaling down the use of AI-generated answers in some search results, after the tech made high-profile errors including telling users to put glue on their pizza and saying Barack Obama was Muslim.
    • The change is the latest example of Google launching an AI product with fanfare and then rolling it back after it goes awry.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

75%

  • Unique Points
    • Google's search engine has been the most important force in shaping the modern internet.
    • Google recently began rolling out AI-generated responses called ‘AI Overviews’ for certain search queries.
  • Accuracy
    • Google expects to open up AI Overviews to at least 1 billion global users by the end of the year.
  • Deception (30%)
    The article contains selective reporting as it focuses on the negative aspects of Google's new AI-powered search feature, while ignoring the potential benefits. It also uses emotional manipulation by painting a dire picture of the consequences of this new feature for online publishers and the internet economy.
    • The economy of the internet is at risk
    • Google doesn’t care about informing its users anymore
  • Fallacies (85%)
    The article contains a few fallacies. It includes a dichotomous depiction of Google as either an all-knowing entity that can solve all problems or as a company that no longer cares about informing its users. There are also appeals to authority from tech experts and industry insiders, without considering the broader context or counterarguments. Additionally, inflammatory rhetoric is used when discussing potential negative consequences of AI Overviews.
    • The world’s most important knowledge engine… could be populated with unreliable… falsehoods.
    • Google doesn’t care about informing its users anymore...
    • Providing a sturdy, almost necessary web-search service is no longer the priority...
    • By making it even less inviting for humans to contribute to the web’s collective pool of knowledge, Google’s summary answers could also leave its own and everyone else’s AI tools with less accurate, less timely, and less interesting information.
    • Users will stop clicking on the links that also surface...
    • No matter how much Big Tech pushes AI, human-made content will ultimately win out.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication