Google's AI Overviews: Misleading Answers and Dangerous Misinformation

St. Louis, Missouri, United States of America
  • AI Overviews may provide accurate summaries of sources that are themselves wrong.
  • AI Overviews on search results have been causing controversy due to their tendency to produce misleading answers and dangerous misinformation.
  • Fandom's Idea Wiki, once considered a reliable source by Google's AI Overviews, is actually a fan fiction website.
  • Google Search's AI has generated buzz for its ability to deliver helpful results for complex queries through multi-step reasoning.
  • Some users have reported receiving incorrect information from AI Overviews that originated in troll posts on various forums.

Google Search's AI has been generating buzz for its ability to deliver helpful results for complex queries through multi-step reasoning. However, the latest addition to this technology, AI Overviews on search results, has been causing controversy due to its tendency to produce misleading answers and dangerous misinformation.

One of the main issues with AI Overviews is their inability to distinguish between jokes and facts. For instance, some users have reported receiving incorrect advice, such as adding glue to pizza or topping up 'blinker fluid' for a noisy turn signal, that originated from troll posts on various forums.

Another problem is that AI Overviews can produce accurate summaries of sources that are themselves wrong. For example, one cited source claimed that one-third of the signers of the Declaration of Independence were personally enslavers, while contradictory sources put the figure closer to three-quarters.

Furthermore, some sources previously trusted by Google's AI Overviews have been found to be unreliable. For instance, Fandom's Idea Wiki was once treated as a reliable source of information but is actually a fan fiction website.

Google CEO Sundar Pichai has acknowledged that AI hallucination is an unresolved issue, and the company continues to work on improving the technology to ensure accurate and trustworthy results.



Confidence

80%

Doubts
  • Are there any measures in place to fact-check sources used by AI Overviews before summarizing them?
  • Is it confirmed that all instances of AI Overviews producing incorrect information are due to troll posts?

Sources

83%

  • Unique Points
    • Google’s new AI feature, AI Overviews, has been causing issues with misinformation after only a week of being available.
    • Google Search’s AI has shown the ability to deliver helpful results for more complex queries through multi-step reasoning.
    • Google is planning to use AI for trip planning and meal recommendations.
    • Google Photos’ Magic Editor is now available on older Google Pixel devices, while the revamped Google Weather app is now widely available outside of the Pixel world.
    • Chromecast with Google TV has received its fourth update of 2024, Google updates Snapseed for Android, and Android Find My Device trackers are starting to ship.
    • YouTube is once again rolling out its widely hated new web redesign, and the Google app is rolling out a ‘Notifications’ tab on Android.
    • Gemini has finally let users play YouTube Music with an extension rollout, and Google is rolling out Android 15 Beta 2.1 with a Private space fix.
    • Google Flights now shows Southwest fares, eliminating the ‘price unavailable’ issue.
    • Daylight DC1 is an Android tablet featuring a Live Paper display that functions like a much faster e-ink display.
    • At least one Android phone works with the Clicks keyboard for iPhone.
    • Humane is reportedly looking to sell the company less than a month after releasing Ai Pin.
  • Accuracy
    • Google's new AI feature, AI Overviews, has been causing issues with misinformation after only a week of being available.
    • Google is planning to use AI for trip planning and meal recommendations.
  • Deception (70%)
    The article contains several instances of editorializing and opinionated statements by the author. The author expresses their personal feelings about AI in search and its potential for misinformation. While these statements are not factually incorrect, they do represent the author's biased perspective on the topic.
    • AI can parse through information, yes, but it’s not very good at knowing what’s true and what’s not.
    • These quick answers should be a separate product -- I don’t know, maybe a chatbot?
    • I still firmly believe that AI doesn’t belong in Search.
    • But using 'traditional' Google Search for planning was a labor-intensive task requiring the user to dig through multiple webpages, perform multiple searches, and gather their findings manually outside of Search.
    • But generative AI is constantly prone to hallucination or just being flat out wrong where it will confidently spit out blatantly false information as a firm fact.
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

99%

  • Unique Points
    • Google’s AI suggested mixing glue into pizza ingredients to prevent cheese from sliding off
    • Google’s AI incorrectly identified African countries that start with the letter ‘C’
    • AI cannot distinguish between jokes and facts or provide accurate answers when insufficient information is available
    • Google removed false information generated by its AI stating that former President Barack Obama is a Muslim
  • Accuracy
    No Contradictions at Time Of Publication
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

82%

  • Unique Points
    • Google’s AI-generated summaries on search results have become subject to scrutiny and jokes on social media after users shared examples of the feature displaying misleading answers and dangerous misinformation.
    • Examples include the AI summary citing dubious sources such as Reddit posts written as jokes, failing to understand that articles on The Onion are not factual, plagiarizing text from blogs without removing mentions of authors’ children, and getting basic facts wrong like failing to acknowledge countries in Africa starting with K and suggesting pythons are mammals.
    • Some examples have gone viral, such as an AI summary spreading a right-wing conspiracy theory that President Barack Obama is Muslim, or one suggesting putting glue on pizza.
    • Google CEO Sundar Pichai acknowledged that AI hallucination is an unresolved issue.
  • Accuracy
    • Google's AI suggested mixing glue into pizza ingredients to prevent cheese from sliding off
    • Google's AI has previously suggested drinking urine to help pass a kidney stone
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (80%)
    The author provides several examples of the AI generating misleading answers, but does not commit any explicit fallacies in the text. However, some of the examples given involve plagiarism and failure to understand satire, which could be considered informal fallacies. The author also mentions that Google CEO Sundar Pichai acknowledged that AI hallucination is an unsolved problem, which could be seen as an appeal to authority if taken out of context.
    • Topline: One of the key new features unveiled at the Google I/O developer conference last week, AI-generated summaries on search results, has become the subject of scrutiny and jokes on social media after users appeared to show the feature displaying misleading answers and, in some cases, dangerous misinformation.
    • Computer scientist Melanie Mitchell shared an example of the feature displaying in an answer a right-wing conspiracy theory that President Barack Obama is Muslim.
    • In other instances, the AI summary appears to be plagiarizing text from blogs and failing to remove or alter mentions of the author’s children.
  • Bias (80%)
    The author mentions several instances of the AI summary producing misleading answers and dangerous misinformation. While the author does not express any bias themselves, they do quote others who have shared examples of the AI summaries citing dubious sources or failing to understand satire. The author also mentions that some of these examples include right-wing conspiracy theories, which could be perceived as an attempt to depict one side as extreme or unreasonable.
    • Computer scientist Melanie Mitchell shared an example of the feature displaying in an answer a right-wing conspiracy theory that President Barack Obama is Muslim
    • several Google users, including journalists, have shared what appear to be multiple examples of the AI summary citing dubious sources
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

85%

  • Unique Points
    • Google’s latest AI search feature has generated a litany of untruths and errors, including recommending glue as part of a pizza recipe and suggesting ingesting rocks for nutrients.
    • The incorrect answers in the feature, called AI Overview, have undermined trust in Google’s search engine, which over two billion people rely on for authoritative information.
    • The backlash against the erroneous responses from Google’s AI search feature has caused a furor online and put more pressure on the company to safely incorporate AI into its search engine.
    • In February 2023, Google announced Bard, a chatbot to battle ChatGPT, but it shared incorrect information about outer space, causing Google’s market value to drop by $100 billion.
    • Google’s Gemini, the successor to Bard, was released in February and was quickly criticized for refusing to generate images of white people in most instances and for drawing inaccurate depictions of historical figures.
  • Accuracy
    • Google's latest AI search feature has generated a litany of untruths and errors, including recommending glue as part of a pizza recipe and ingesting rocks for nutrients.
    • Google Search’s AI has shown the ability to deliver helpful results for more complex queries through multi-step reasoning.
    • Google is planning to use AI for trip planning and meal recommendations.
    • Snapdragon X Elite laptops have started shipping, with Microsoft leading the way in partnerships with other companies such as Samsung and Lenovo.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (85%)
    The article contains a few informal fallacies and an example of inflammatory rhetoric. It uses exaggeration in the phrase 'erroneously told users to eat glue and rocks', which is misleading because it implies that Google's AI Overview directly instructed users to consume these items, when in fact it provided incorrect information as part of a response. Additionally, the article employs inflammatory rhetoric by describing the errors as generating 'a litany of untruths and errors', which is an overstatement. Lastly, there are instances of appeals to authority and a dichotomous depiction.
    • . . . the backlash demonstrated that Google is under more pressure to safely incorporate A.I. into its search engine.
    • In February 2023, when Google announced Bard, a chatbot to battle ChatGPT, it shared incorrect information about outer space.
    • Users quickly realized that the system refused to generate images of white people in most instances and drew inaccurate depictions of historical figures.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

68%

  • Unique Points
    • Google’s AI Overviews have been providing summarized answers to some questions, but they can be incorrect or misleading.
    • Some errors come from the system not recognizing jokes as such and treating them as facts.
    • Examples include suggesting using glue on pizza or blinker fluid for a noisy turn signal. These errors originate from troll posts on various forums.
    • Google’s AI Overviews can also offer accurate summaries of sources that are actually wrong. For instance, one source stated that one-third of Declaration of Independence signers were personally enslavers, but contradictory sources exist.
    • Fandom’s Idea Wiki was once trusted by Google’s AI Overviews as a reliable source for information, but it is actually a fan fiction website.
  • Accuracy
    • Google's AI Overviews have been providing summarized answers to some questions, but they can be incorrect or misleading.
    • Google's AI Overviews can also offer accurate summaries of sources that are actually wrong. For instance, one source stated that one-third of Declaration of Independence signers were personally enslavers, but contradictory sources exist.
    • Fandom's Idea Wiki was once trusted by Google's AI Overviews as a reliable source for information, but it is actually a fan fiction website.
  • Deception (30%)
    The article provides examples of Google's AI Overview providing inaccurate information due to treating jokes as facts and bad sourcing. These instances demonstrate selective reporting and a lack of fact-checking by the author, which can mislead readers.
    • Google’s AI Overview offers an accurate summary of a non-joke source that happens to be wrong. When asking about how many Declaration of Independence signers owned slaves, for instance, Google’s AI Overview accurately summarizes a Washington University of St. Louis library page saying that one-third ‘were personally enslavers.’ But the response ignores contradictory sources like a Chicago Sun-Times article saying the real answer is closer to three-quarters.
    • An AI answer that suggested using ‘1/8 cup of non-toxic glue’ to stop cheese from sliding off pizza can be traced back to someone who was obviously trying to troll an ongoing thread.
  • Fallacies (80%)
    The author provides several examples of Google's AI Overview making inaccurate statements. In the first example, the AI Overview treats jokes as facts by suggesting using glue on pizza and blinker fluid for a turn signal that doesn't make noise. These errors come from troll posts on the internet and are integrated into the authoritative-sounding data summary presented at the top of Google search results. In another example, Google's AI Overview accurately summarizes a non-joke source but ignores contradictory sources, leading to an inaccurate answer. Lastly, the AI Overview trusts a fan fiction website as an authoritative source for information about a 2022 remake of 2001: A Space Odyssey. These errors demonstrate a lack of thorough fact-checking and proper sourcing by Google's AI Overview.
    • This bit about using 1/8 cup of non-toxic glue to stop cheese from sliding off pizza can be traced back to someone who was obviously trying to troll an ongoing thread.
    • A response recommending blinker fluid for a turn signal that doesn’t make noise can similarly be traced back to a troll on the Good Sam advice forums, which Google’s AI Overview apparently trusts as a reliable source.
    • When asking about how many Declaration of Independence signers owned slaves, for instance, Google’s AI Overview accurately summarizes a Washington University of St. Louis library page saying that one-third ‘were personally enslavers.’ But the response ignores contradictory sources like a Chicago Sun-Times article saying the real answer is closer to three-quarters.
    • A savvy web user would probably do a double-take before citing Fandom’s ‘Idea Wiki’ as a reliable source, but a careless AI Overview user might not notice where the AI got its information.
  • Bias (80%)
    The author provides examples of Google's AI Overview providing inaccurate information from sources that are not reliable. The author does not express any bias towards Google or the AI Overview, but rather points out the errors and their potential consequences. However, the author's tone can be perceived as critical towards Google and its technology.
    • An AI answer that suggested using ‘1/8 cup of non-toxic glue’ to stop cheese from sliding off pizza can be traced back to someone who was obviously trying to troll an ongoing thread.
    • A response recommending ‘blinker fluid’ for a turn signal that doesn’t make noise can similarly be traced back to a troll on the Good Sam advice forums, which Google’s AI Overview apparently trusts as a reliable source.
    • Sometimes Google’s AI Overview offers an accurate summary of a non-joke source that happens to be wrong. When asking about how many Declaration of Independence signers owned slaves, for instance, Google’s AI Overview accurately summarizes a Washington University of St. Louis library page saying that one-third ‘were personally enslavers.’ But the response ignores contradictory sources like a Chicago Sun-Times article saying the real answer is closer to three-quarters.
    • That’s the case for a response that imagined a 2022 remake of 2001: A Space Odyssey, directed by Steven Spielberg and produced by George Lucas. A savvy web user would probably do a double-take before citing Fandom’s ‘Idea Wiki’ as a reliable source, but a careless AI Overview user might not notice where the AI got its information.
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication