Will Oremus

Will Oremus is a technology news analysis writer based in Washington, D.C. He covers the ideas, products and power struggles shaping the digital world for The Washington Post. Before joining The Post in 2021, he spent eight years as Slate's senior technology writer and two years as a senior writer for OneZero at Medium. His work focuses on the intersection of technology and politics, with an emphasis on artificial intelligence, privacy, and free speech. He has reported on big tech companies such as Google and Microsoft, as well as emerging technologies such as facial recognition software and autonomous vehicles. Educated at Stanford University (BA in Philosophy) and Columbia Journalism School (MA in Political Journalism), Oremus is known for a clear, concise writing style that communicates complex topics to a wide audience.

92%

The Daily's Verdict

This author is known for his high journalistic standards. He strives to maintain neutrality and transparency in his reporting and avoids conflicts of interest. He has a reputation for accuracy and is rarely contradicted on major points in his reporting.

Bias

100%

Examples:

No current examples available.

Conflicts of Interest

100%

Examples:

No current examples available.

Contradictions

88%

Examples:

  • Google initially downplayed the problems but subsequently acknowledged that it was manually removing some problematic results.
  • Large language models, like those used in Google's AI Overviews, are inherently unreliable and cannot be fully 'fixed.'

Deceptions

75%

Examples:

  • At best, companies using a large language model to answer questions can take measures to 'guard against its madness.' Or they can 'throw enormous amounts of cheap human labor to plaster over its most egregious lies.'
  • Google's CEO, Sundar Pichai, has acknowledged the issue, but he said that building large language models into a search engine can help 'ground' their answers in reality while directing users to the original source.
  • It's a sign that the problems with artificial intelligence answers run deeper than what a simple software update can address.

Recent Articles

Google's New AI-Generated Summaries Spark Concerns Among Publishers: Potential Traffic Loss and Original Content Cannibalization

Broke On: Wednesday, 29 May 2024
Google's new AI-generated search summaries have publishers worried about traffic loss and potential cannibalization of their content. Despite concerns, some experts argue that publishers need to adapt to the changing landscape of online media and find new ways to monetize beyond traffic. Google is reportedly taking steps to address publisher concerns, but it remains unclear how this will impact the publishing industry in the long term.

Microsoft's 2024 Responsible AI Transparency Report: Expanding Teams and Tools, but Challenges Persist

Broke On: Wednesday, 01 May 2024
Microsoft's 2024 Responsible AI Transparency Report reveals growth in its team to over 400 people, the release of the Python Risk Identification Tool (PyRIT) for generative AI, and expansion of evaluation tools. Despite challenges with chatbots and image generators, Microsoft is committed to safely deploying AI products, measuring risks throughout development cycles, and providing tools for Azure AI customers.