Google's Controversial AI-Generated Answers: Inaccuracies and Impact on Media Industry

Mountain View, California, United States of America
AI Overviews once appeared on over 84% of queries but now show up on only about 15%.
Google reduced the appearance of AI-generated answers, or AI Overviews, following performance issues and inaccurate responses.
Google's AI could reduce the need to click through to original articles, further eroding traffic and revenue for news publishers.
In one instance, Google sourced health answers from trusted websites like the Mayo Clinic and the CDC yet still produced inaccurate information.

Google's AI-generated answers to search queries, known as AI Overviews, have been a topic of controversy in recent weeks due to concerns about their accuracy and their potential impact on the media industry. According to various sources, including Search Engine Land and Wired, Google significantly reduced the appearance of AI Overviews following performance issues and a series of incorrect or misleading answers.

One recurring problem involved Google sourcing responses to health queries from trusted websites like the Mayo Clinic and the CDC but still producing inaccurate information. In other widely circulated examples, an AI Overview suggested eating rocks for nutrition or adding glue to pizza sauce.

The decline began around mid-April and continued into May, according to data from BrightEdge. At one point, AI Overviews appeared on over 84% of queries; they now show up on only about 15%. The drop was most noticeable for healthcare queries.

The US launch of AI Overviews was followed by numerous incorrect, and in some cases dangerous, answers. Even so, users cannot turn AI Overviews off through settings; they have instead resorted to workarounds such as using a different web browser or the Web tab in Google search results, as sketched below.
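For readers who want to script the Web-tab workaround rather than click through the interface each time, here is a minimal sketch in Python. It assumes that Google's undocumented udm=14 URL parameter selects the Web tab, a value widely observed at the time of writing but subject to change without notice.

```python
from urllib.parse import urlencode


def web_tab_url(query: str) -> str:
    """Build a Google search URL that lands on the Web tab,
    bypassing AI Overviews.

    Assumes udm=14 is the Web filter's parameter value; Google
    does not document it, so treat this as a best-effort hack.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})


if __name__ == "__main__":
    # Example: open the Web-tab results in the default browser.
    import webbrowser
    webbrowser.open(web_tab_url("shortest war in history"))
```

Setting such a URL as the browser's default search-engine template achieves the same effect without any code, which is likely why the Web tab is the most commonly cited workaround.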

The controversy surrounding AI Overviews raises questions about the role of technology in journalism and information dissemination. Some argue that it could lead to a loss of depth and nuance in understanding, while others see it as an opportunity for more efficient and convenient access to information.

E.B. White's 1935 story 'Irtnog' warns of a future where people demand increasingly condensed versions of knowledge, leading to a loss of depth and nuance in understanding. Meanwhile, nearly one third of US newspapers have gone out of business since 2005, leaving thousands of communities without a local news source.

Digital subscriptions and digital ads have been growing for some outlets, but Google's AI could reduce the need to click through to original articles, further eroding traffic and revenue for news publishers. It remains to be seen how this trend will impact the future of robust journalism and the media industry as a whole.



Confidence

91%

Doubts
  • Could not independently verify all instances of incorrect AI Overviews.
  • Unclear why Google offers no setting to turn off AI Overviews.

Sources

96%

  • Unique Points
    • Google significantly reduced the appearance of AI Overviews after launch due to performance issues.
    • Google sources its responses to health queries from trusted websites like the Mayo Clinic and CDC.
  • Accuracy
    • AI Overviews appeared on around 84% of BrightEdge’s tracked searches before the feature was opened up to all users, but no significant difference was seen between beta-test and non-beta-test groups.
    • Health care queries are the most common topic for AI Overviews, appearing on around 63% of such queries.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

95%

  • Unique Points
    • Google’s AI Overviews are AI-generated answers to questions asked on Google search.
    • Google uses its Gemini generative AI model to create these summaries.
    • AI Overviews cannot be turned off through settings, but users can use workarounds such as using a different web browser or the Web tab in Google search results to avoid them.
  • Accuracy
    • Google's AI Overviews are AI-generated answers to questions asked on Google search.
    • AI Overviews started appearing at the top of Google search for everyone in the US on May 14, 2024.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The article contains several informal fallacies and an example of a dichotomous depiction. The author states that Google's AI Overviews are 'not always accurate' and provides examples of inaccurate information produced by the AI, but offers no evidence or citations to support these claims, making this an appeal to anecdote. Additionally, the author states that 'Google may not give us an obvious way to turn off AI Overviews in Google search,' which is a false dichotomy, as the article itself mentions workarounds. Lastly, the author uses inflammatory rhetoric when describing some of the errors made by AI Overviews, such as suggesting putting glue on pizza or drinking urine.
    • Google's Gemini generative AI model powers these summaries, but Gemini -- like AI right now -- is not always accurate.
    • For example, if you typed, "What's the shortest war in history?" in Google search, you may see something about the Anglo-Zanzibar War of 1896, thanks to AI Overviews. Underneath the AI-generated summary, you'll see links to all the resources used, which you can click to check out the websites where the information is pulled from.
    • So, is there any other way to get rid of AI Overviews? You can't turn off AI Overviews, but you can do this...
    • The third workaround, which also only works on your computer, is to use this Hide Google AI Overviews extension for Chrome. If you're a Chrome user and don't want to use any other browser but also don't want AI Overviews, this extension removes all AI-generated summaries from your Google search results.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (0%)
    None Found At Time Of Publication

79%

  • Unique Points
    • Last week, an AI Overview search result from Google used a WIRED article in an unexpected way.
    • The AI Overview answer contained a first paragraph pulled directly from the article.
    • Google acknowledged that AI-generated summaries may use portions of writing directly from web pages but defended it as referencing back to the original sources.
    • The first paragraph of the AI Overview was not directly attributed to the author in this instance.
  • Accuracy
    • The AI Overview answer contained a first paragraph pulled directly from the article.
    • Google claimed that links included in AI Overviews get more clicks than traditional web listings, but no data was provided to support this claim.
  • Deception (30%)
    The author expresses concern over Google's AI Overview feature using portions of his article without proper attribution. The first paragraph of the AI Overview directly copies the author's writing, but it is not attributed to him. Instead, it is presented as a conceptual match and footnoted at the bottom with a link to the original source. This practice reduces incentives for users to click through to the original article and buries attribution. The author also mentions that his article was often featured as a snippet at the top of Google search results before, but now it is pushed beneath the AI Overview answer.
    • The following screenshot on the left is from an interview I conducted with one of Anthropic’s product developers about tips for using the company’s Claude chatbot. The screenshot on the right is a portion of Google’s AI Overview that answered a question about using Anthropic’s chatbot.
    • Reece Rogers via Google Without the AI Overviews enabled, my article was often the featured snippet highlighted at the top of Google search results, offering a clear link for curious users to click on when they were looking for advice about using the Claude chatbot. During my initial tests of Google’s new search experience, the featured snippet with the article still appeared for relevant queries, but it was pushed beneath the AI Overview answer that pulled from my reporting and inserted aspects of it into a 10-item bulleted list.
  • Fallacies (80%)
    The author makes an appeal to authority by quoting a Google spokesperson and discussing their perspective on the situation. The author also uses inflammatory rhetoric by comparing the AI's behavior to that of a classroom cheater.
    • A Google spokesperson acknowledged that the AI-generated summaries may use portions of writing directly from web pages, but they defended AI Overviews as conspicuously referencing back to the original sources.
    • It feels reminiscent of a classroom cheater who copied an answer from my homework and barely even bothered to switch up the phrasing.
  • Bias (90%)
    The author expresses concern that Google's AI Overview feature is using his original work without proper attribution and reducing the incentive for users to click through to the source material. The author also mentions that he disagrees with Google's characterization of the result as just a 'conceptual match' of his writing, and expresses skepticism about whether he could win a hypothetical copyright infringement lawsuit due to the fact-based nature of his writing.
    • Reece Rogers via Google I disagree with Google’s characterization that the result may be just a ‘conceptual match’ of my writing. It goes further.
    • Reece Rogers via Google I'm definitely not the first person to suggest focusing on your intended audience when writing chatbot prompts, so I agree that the fact-based aspect of my writing does complicate the overall situation. It’s hard for me, though, to imagine a world where Google arrives at that exact paragraph about Claude’s chatbot in its AI Overview results without referencing my work first.
    • Reece Rogers via Google Without the AI Overviews enabled, my article was often the featured snippet highlighted at the top of Google search results, offering a clear link for curious users to click on when they were looking for advice about using the Claude chatbot. During my initial tests of Google’s new search experience, the featured snippet with the article still appeared for relevant queries, but it was pushed beneath the AI Overview answer that pulled from my reporting and inserted aspects of it into a 10-item bulleted list.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

80%

  • Unique Points
    • E.B. White’s story ‘Irtnog’ from 1935 warns of a future where people demand increasingly condensed versions of knowledge, leading to a loss of depth and nuance in understanding.
    • Nearly one third of US newspapers have gone out of business since 2005, leaving thousands of communities without a local news source.
    • Digital subscriptions and digital ads have been growing for some outlets, but Google’s AI could reduce the need to click through to original articles, further eroding traffic and revenue for news publishers.
  • Accuracy
    • Google’s AI Overviews feature provides AI-generated summaries at the top of search results, potentially reducing the need for users to click through to original articles.
    • Newsroom employment in the US dropped by 26% between 2008 and 2020, with newspapers hit the hardest.
    • AI summarizations can sometimes provide incorrect or dangerously inaccurate information due to a lack of human intervention.
    • Google has scaled back some of its summarization results in certain areas and is working to fix issues, according to Liz Reid, Head of Google Search.
    • OpenAI has been sued for copyright infringement by The New York Times, and other news organizations have signed licensing deals with the company.
  • Deception (30%)
    The article makes several statements that are not deceptive on their own but lean towards sensationalism and selective reporting. The author uses emotional manipulation by painting a dire picture of the future of journalism and news consumption, implying that AI summaries will lead to a loss of nuanced understanding and critical thinking. However, no clear evidence is presented in the article to support this claim beyond anecdotes and speculation.
    • Another result told people who are bitten by a rattlesnake to ‘apply ice or heat to the wound,’ which would do about as much to save your life as crossing your fingers and hoping for the best.
    • For example, in response to a search query asking why cheese isn’t sticking to a pizza, Google’s AI suggested that you should add ‘1/8 cup of non-toxic glue to the sauce to give it more tackiness.’ (X users later discovered the AI was taking this suggestion from an 11-year-old Reddit post by a user called ‘fucksmith.’)
    • One of the big worries with the rise of these AI CliffsNotes products is how much they tend to get wrong. You can easily see how AI summarizations, without human intervention, can provide not just incorrect information, but sometimes dangerously incorrect results.
    • The richness of human knowledge and depth of understanding are reduced to bite-size, and sometimes dangerously inaccurate, summaries for our little brains to consume on our tiny devices.
  • Fallacies (85%)
    The author commits an informal fallacy by appealing to a hypothetical situation and extrapolating it as a certainty (‘This means overviews that violate content policies...’). The author also presents a false dichotomy, framing AI summaries as the only alternative to ‘real journalism’ without acknowledging their potential benefits or nuances. Lastly, the author uses inflammatory rhetoric (‘fuck them, they deserve to die’) and loaded language (‘most quote-unquote news sites’), which can be misleading.
    • This means overviews that violate content policies...
    • most quote-unquote news sites have already alienated readers with their obsessions with trying to create content in response to whatever Twitter is upset about that day, and so the few places that still do real journalism can keep trying to do real journalism and hope that they’ll get enough clicks to keep the lights on. For everyone else, the people who take a tweet and make an article out of it, fuck them, they deserve to die.
    • most quote-unquote news sites
  • Bias (95%)
    The author expresses concern about the impact of AI summaries on the news industry and jobs in journalism. He provides examples of inaccurate information generated by AI summaries and discusses potential consequences for public discourse and misinformation. The author also mentions that some publishers have signed licensing deals with OpenAI, raising questions about their future role in creating content for these AI models.
    • Another investor I spoke with likened the situation to a scene in Tom Stoppard’s Arcadia, in which one character remarks that if someone stirs jam into their porridge by swirling it in one direction, they can’t reconstitute the jam by then stirring the opposite way. ‘The same is going to be true for all of these summarizing products,’ the investor continues. ‘Even if you tell them you don’t want them to make your articles shorter, it’s not like you can un-stir your content out of them.’
    • If the economics of the news industry continue to deteriorate, it may be too late to prevent AI from becoming the primary gatekeeper of information, with all the risks that entails.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

95%

  • Unique Points
    • Google’s AI Overviews appear less than 15% of the time, based on new analysis.
    • AI Overviews at one time appeared on 84% of queries.
    • The drop in AI Overviews occurred around mid-April and continued into May.
    • The US launch of AI Overviews was followed by numerous incorrect and dangerous answers.
  • Accuracy
    • Google’s AI Overviews appear less than 15% of the time, based on new analysis.
    • Google significantly reduced the appearance of AI Overviews after launch due to performance issues.
  • Deception (85%)
    The article provides data from BrightEdge about the decrease in visibility of Google’s AI Overviews, and the author shares his own analysis and interpretation of that data. However, there are instances where the author makes editorializing statements that could manipulate emotions or sensationalize the information. For example, he mentions ‘numerous examples of incorrect and dangerous AI-generated answers’ without providing specific examples or context, a statement that could alarm readers unaware of the potential risks of AI in search. There is also selective reporting: the author focuses on the decrease in visibility of AI Overviews while glossing over other findings, such as their increased likelihood of appearing for certain types of queries and industries. The article also contains some emotional manipulation through phrases like ‘incorrect and dangerous AI-generated answers’ and ‘exponentially better over time.’
    • It is inevitable that the relationship between AI and search will accelerate. We must acknowledge that it is getting some things wrong at the moment but be aware that it is fine-tuning several things – search quality, the flow of traffic in its ecosystem, and monetization (ads). It will get exponentially better over time.
    • The launch of AI Overviews was followed by numerous examples of incorrect and dangerous AI-generated answers, such as suggesting people drink urine or eat rocks.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication