Google's New AI-Generated Summaries Spark Concerns Among Publishers: Potential Traffic Loss and Original Content Cannibalization

San Francisco, California, USA
  • Google's CEO Sundar Pichai announced the new feature last year
  • Google's new AI-generated summaries are causing concerns among publishers
  • Publishers could prevent Google from sharing snippets, but this would result in less attractive links
  • Publishers worry about potential traffic loss and original content cannibalization

Google's New AI Search Feature Sparks Concerns Among Publishers

Google's recent overhaul of its search engine, which includes the introduction of A.I.-generated summaries, has left publishers worried about the impact on their business models.

According to reports from The New York Times and The Washington Post, Google's new feature compiles content from news sites and blogs on a topic being searched and generates summaries for users. Publishers are concerned that these summaries will reduce traffic to their sites, as users may not need to click through to the original articles.

Frank Pine, executive editor of Media News Group and Tribune Publishing, expressed his concerns about the feature in an interview with The New York Times. He stated that it could potentially choke off original content creators and lead to further cannibalization of their publications.

Google's CEO Sundar Pichai announced the new feature last year, stating that it would provide users with more comprehensive search results. However, publishers argue that they want their sites listed in Google's search results but are hesitant to allow the company to use their content for summaries due to potential traffic loss.

Publishers could try to protect their content by forbidding Google's web crawler from sharing any snippets from their sites. However, this would result in links showing up without descriptions, making them less attractive to users.
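The trade-off described above maps onto the snippet controls Google documents for publishers. As a rough illustration (assuming the publisher controls its own page markup), the relevant robots directives look like this:

```html
<!-- Ask Google not to show any text snippet for this page -->
<meta name="robots" content="nosnippet">

<!-- Or cap the snippet at zero characters, which has the same effect -->
<meta name="robots" content="max-snippet:0">

<!-- Or mark only part of a page as off-limits for snippets -->
<p data-nosnippet>Paywalled paragraph that should not appear in search snippets.</p>
```

Any of these keeps the page in Google's index, but, as the article notes, the resulting links appear without descriptions.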

Google is not the only tech company facing challenges with A.I.-generated content. The Washington Post reported that Microsoft's Bing, a Google rival, faces similar issues with its own AI-powered search feature.

Despite these concerns, some experts argue that publishers need to adapt to the changing landscape of online media and find new ways to monetize their content beyond traffic.

Google has acknowledged the concerns raised by publishers and is reportedly taking steps to address them. However, it remains to be seen how this will impact the publishing industry in the long term.

Sources:
  • The New York Times: https://www.nytimes.com/2024/06/01/technology/google-ai-search-publishers.html
  • The Washington Post: https://www.washingtonpost.com/politics/2024/05/29/

Confidence

91%

Doubts
  • Are there any ways for publishers to benefit from Google's new feature?
  • Is the potential traffic loss significant enough to cause major financial damage for publishers?

Sources

93%

  • Unique Points
    • Google overhauled its search engine and introduced A.I.-generated summaries that compile content from news sites and blogs on the topic being searched.
    • Publishers are worried that these summaries pose a big danger to their brittle business model by sharply reducing the amount of traffic to their sites from Google.
  • Accuracy
    • Google made over a dozen technical improvements to its artificial intelligence systems after erroneous information was found in AI-generated search summaries
    • Google did extensive testing before launching AI Overviews but still encountered errors from contradictory online sources and user-generated content.
  • Deception (100%)
    None Found At Time Of Publication
  • Fallacies (95%)
    The article contains an appeal to authority fallacy when the newspaper executive is quoted as saying 'A new A.I.-generated feature in Google search results “is greatly detrimental to everyone apart from Google,”'. This statement is an opinion and does not provide any logical reasoning or evidence to support the claim.
    • A new A.I.-generated feature in Google search results “is greatly detrimental to everyone apart from Google,” a newspaper executive said.
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

76%

  • Unique Points
    • Google made over a dozen technical improvements to its artificial intelligence systems after erroneous information was found in AI-generated search summaries
    • Google acknowledged that some inaccurate or unhelpful AI Overviews appeared and made immediate fixes to prevent a repeat of certain errors
    • Some of the false or harmful answers were dangerous or harmful falsehoods
  • Accuracy
    • Google updated its systems to better detect nonsensical queries and limit the use of user-generated content that could offer misleading advice
  • Deception (30%)
    The article contains several examples of selective reporting and sensationalism. The author focuses on the erroneous information generated by Google's AI-generated summaries while downplaying the fact that many of these errors were later corrected. The author also emphasizes dangerous or harmful falsehoods, such as the incorrect statement about Barack Obama being the only Muslim president of the United States, without mentioning that Google quickly fixed this issue. Furthermore, some examples provided by the author are not even from Google's AI summaries but rather from user-generated content or satirical comments on Reddit. These misrepresentations contribute to a sensationalist narrative about Google's AI and its potential to spread misinformation.
    • Google's AI overview last week pulled from a satirical Reddit comment to suggest using glue to get cheese to stick to pizza.
    • The author emphasizes dangerous or harmful falsehoods, such as the incorrect statement about Barack Obama being the only Muslim president of the United States.
    • The Associated Press last week asked Google about which wild mushrooms to eat, and it responded with a lengthy AI-generated summary that was mostly technically correct, but 'a lot of information is missing that could have the potential to be sickening or even fatal,'
  • Fallacies (80%)
    The author makes an appeal to authority by quoting experts in the field and acknowledging their opinions. However, there are instances of incorrect information being provided by Google's AI-generated summaries which can be considered a form of misinformation. The author also mentions that some false information was shared on social media, but it is not clear if these were actual examples or fakes. Therefore, the score is 80.
    • 'The United States has had one Muslim president, Barack Hussein Obama.'
  • Bias (85%)
    Matt O'Brien demonstrates ideological bias in this article by presenting Google's AI-generated summaries as generally accurate and extensively tested, while highlighting the errors that have been found. The author focuses on the negative aspects of these errors and their potential consequences without providing a balanced view of the issue. Additionally, O'Brien quotes an expert who criticizes Google's AI feature, further emphasizing its flaws.
    • . . . Google unleashed a makeover of its search engine in mid-May that frequently provides AI-generated summaries on top of search results. Soon after, social media users began sharing screenshots of its most outlandish answers.
    • In another widely shared example, an AI researcher asked Google how many Muslims have been president of the United States, and it responded confidently with a long-debunked conspiracy theory: 'The United States has had one Muslim president, Barack Hussein Obama.'
    • . . . Liz Reid, the head of Google's search business, acknowledged in a blog post Friday that 'some odd, inaccurate or unhelpful AI Overviews certainly did show up.'
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

76%

  • Unique Points
    • Google’s AI Overviews feature uses a large language model (LLM) to generate written answers to some search queries by summarizing information found online.
    • Google did extensive testing before launching AI Overviews but still encountered errors from contradictory online sources and user-generated content.
    • You.com, an AI-centric search engine, developed tricks to keep LLMs from misbehaving when used for search and uses a custom-built web index designed to help LLMs steer clear of incorrect information.
    • Google’s generative AI upgrade to its most widely used and lucrative product is part of a tech-industry-wide reboot inspired by OpenAI’s release of the chatbot ChatGPT in November 2022.
  • Accuracy
    • Google’s AI Overviews feature had advised people to eat rocks and put glue on pizza
    • Google did extensive testing before launching AI Overviews but still encountered errors from contradictory online sources and user-generated content
    • Some experts feel that Google rushed its AI upgrade and should have anticipated that some people would intentionally try to trip up the system
  • Deception (30%)
    The article makes editorializing statements and uses sensationalism to grab the reader's attention. The author states that 'Google's AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI’s ChatGPT, to generate written answers to some search queries by summarizing information found online.' This is an editorializing statement, as it implies that there is something inherently dangerous or problematic about this technology. The author also states that 'Google’s AI Overviews feature had a rocky start last week when it advised people to eat rocks and put glue on pizza.' This is sensationalism, as it focuses on the most extreme and attention-grabbing examples of errors made by the AI rather than providing a balanced perspective. The author further states that 'Google’s head of search Liz Reid said in the company’s blog post late Thursday that it did extensive testing ahead of launching AI Overviews.' This is an example of selective reporting, as it only reports information that supports the author's position while ignoring potential counter-arguments or mitigating factors.
    • Google’s AI Overviews feature had a rocky start last week when it advised people to eat rocks and put glue on pizza.
    • The episode highlights the risks of Google’s aggressive drive to commercialize generative AI
  • Fallacies (80%)
    The author makes an appeal to authority in the form of quotes from Richard Socher and Liz Reid. He also uses inflammatory rhetoric by describing Google's AI Overviews as 'broken' and 'hazardous'. However, he does not make any explicit logical fallacies in his own assertions.
    • Google’s head of search Liz Reid [...] did extensive testing ahead of launching AI Overviews. But she added that errors like the rock eating and glue pizza examples [...] had prompted additional changes.
    • “You can get a quick snappy prototype now fairly quickly with an LLM, but to actually make it so that it doesn't tell you to eat rocks takes a lot of work,” says Richard Socher.
  • Bias (80%)
    The author expresses a critical view towards Google's AI Overviews feature and its potential for generating incorrect or misleading information. He quotes experts who share similar concerns and provides examples of errors made by the technology.
    • Google’s AI Overviews feature draws on Gemini, a large language model like the one behind OpenAI’s ChatGPT, to generate written answers to some search queries by summarizing information found online. The current AI boom is built around LLMs’ impressive fluency with text, but the software can also use that facility to put a convincing gloss on untruths or errors.
    • Google’s head of search Liz Reid said in the company’s blog post late Thursday that it did extensive testing ahead of launching AI Overviews. But she added that errors like the rock eating and glue pizza examples had prompted additional changes. They include better detection of ‘nonsensical queries’, Google says, and making the system rely less heavily on user-generated content.
    • Some experts feel that Google rushed its AI upgrade.
    • “You can get a quick snappy prototype now fairly quickly with an LLM, but to actually make it so that it doesn’t tell you to eat rocks takes a lot of work.”
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication

82%

  • Unique Points
    • Google's new AI search feature, called 'AI Overviews', has been causing issues with inaccurate and absurd answers.
    • Google is taking measures to guard against the AI's mistakes and improve its systems
    • Making sure the system only draws from reliable sources is a traditional search problem that can be partially addressed by adding fact-checking mechanisms
  • Accuracy
    • Google initially downplayed the problems but acknowledged that they were removing some problematic results manually.
    • Large language models, like those used in Google’s AI Overviews, are inherently unreliable and cannot be fully ‘fixed’
  • Deception (30%)
    The article discusses the issues with Google's new AI search feature, which generates answers instead of linking to relevant websites. The author quotes several experts who express concerns about the reliability and accuracy of these AI-generated answers. However, the author also makes editorializing statements that imply a negative stance towards Google and its technology without providing any evidence or facts to support his opinions.
    • Google's CEO, Sundar Pichai, has acknowledged the issue. But he said building them into a search engine can help 'ground' their answers in reality while directing users to the original source.
    • It's a sign that the problems with artificial intelligence answers run deeper than what a simple software update can address.
    • At best, companies using a large language model to answer questions can take measures to 'guard against its madness.' Or they can 'throw enormous amounts of cheap human labor to plaster over its most egregious lies.'
  • Fallacies (100%)
    None Found At Time Of Publication
  • Bias (100%)
    None Found At Time Of Publication
  • Site Conflicts Of Interest (100%)
    None Found At Time Of Publication
  • Author Conflicts Of Interest (100%)
    None Found At Time Of Publication