Will Knight

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the Fast Forward newsletter, which explores how advances in AI and other emerging technologies are set to change our lives. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China's AI boom. Before that, he was an editor and writer at New Scientist. He studied anthropology and journalism in the UK before turning his attention to machines. He is based in Cambridge, Massachusetts.

The Daily's Verdict

86%

This author has a mixed reputation for journalistic standards. It is advisable to fact-check, scrutinize for bias, and check for conflicts of interest before relying on the author's reporting.

Bias

88%

Examples:

  • Will Knight has a tendency to present OpenAI in a negative light, often highlighting conflicts and contradictions within the organization.

Conflicts of Interest

90%

Examples:

  • Knight often reports on conflicts of interest within the tech industry, particularly involving AI development. He has covered instances of companies prioritizing profit over responsibility and researchers leaving due to concerns about a company's practices.

Contradictions

83%

Examples:

  • He also highlights contradictions within other AI companies, such as Google's conflicting priorities in developing AI technology.
  • Knight frequently points out contradictions in OpenAI's actions and statements, such as the discrepancy between its stated mission and its handling of user-generated content.

Deceptions

80%

Examples:

  • Knight occasionally reports on deceptive practices in the tech industry, such as companies using AI to create convincing but false information. However, these instances are not the primary focus of his reporting.

Recent Articles

AI Employees Call for Transparency and Whistleblower Protections Amidst Concerns of Autonomy and Risks to Humanity

Broke On: Tuesday, 04 June 2024 A group of current and former employees from OpenAI and other AI companies have raised concerns about the risks posed by AI to humanity, urging corporations to implement transparency measures and protect whistleblowers. The letter, signed by 13 individuals including those from Anthropic and Google's DeepMind, highlights potential dangers such as exacerbating inequality, increasing misinformation, and autonomous systems causing significant harm. Absent government oversight, employees are the primary means of holding corporations accountable for these risks. The letter calls for commitments to transparency principles like allowing criticism and anonymous reporting channels.
Google's New AI-Generated Summaries Spark Concerns Among Publishers: Potential Traffic Loss and Original Content Cannibalization

Broke On: Wednesday, 29 May 2024 Google's new AI-generated search summaries have publishers worried about traffic loss and potential cannibalization of their content. Despite concerns, some experts argue that publishers need to adapt to the changing landscape of online media and find new ways to monetize beyond traffic. Google is reportedly taking steps to address publisher concerns, but it remains unclear how this will impact the publishing industry in the long term.
Anthropic's Claude vs. OpenAI's ChatGPT: A Comparison of Two Powerful AI Chatbots

Broke On: Thursday, 30 May 2024 Two AI chatbots, Anthropic's Claude and OpenAI's ChatGPT, compete in the market with distinct strengths. Claude offers more direct responses and a larger context window, along with image analysis, while ChatGPT excels at generating human-like conversational text.
Two Top Executives Depart from OpenAI: Jan Leike and Ilya Sutskever Leave Amidst Resource Allocation Concerns and Prioritization Debates

Broke On: Friday, 17 May 2024 Two top executives, Jan Leike and Ilya Sutskever, have departed from OpenAI, the leading AI research laboratory. Leike expressed concerns about under-resourcing and prioritization of safety culture in the company. Sutskever will work on a personal project after disagreements over priorities. Their departures follow those of several other researchers and come amidst changes at OpenAI, including the controversial firing and reinstatement of CEO Sam Altman.
OpenAI's New GPT-4o: Revolutionizing AI with Human-Like Conversations and Multimodal Capabilities

Broke On: Monday, 13 May 2024 OpenAI's new language model, GPT-4o, promises to make ChatGPT more intelligent and user-friendly with real-time spoken conversations and interactions using text and vision. The AI can now sing parts of responses, discuss images, and even speak in human-like voices. However, concerns arise about its impact on industries and jobs, as multimodal models like GPT-4o may change the way we work. OpenAI remains committed to creating safe and beneficial AI but faces ethical questions regarding ChatGPT's flirtatious responses and sultry female voice.
Google's Project Astra: The Universal AI Agent Set to Revolutionize Your Daily Life

Broke On: Tuesday, 14 May 2024 Google unveiled Project Astra, a universal AI agent capable of real-time text and visual processing at Google I/O. Demis Hassabis envisions it as a multifaceted assistant for users, with faster models like Gemini 1.5 and Nano for common tasks. Project Astra will debut on the Gemini app later this year, allowing users to query their environment using their phone camera.
Apple and Google in Talks to Bring Gemini AI Model to iPhone, Suggesting Generative AI is Becoming a Must-Have for New Phones

Broke On: Tuesday, 19 March 2024 Apple and Google are reportedly in talks to bring the search giant's Gemini AI model to the iPhone. This could have huge implications for the role of generative AI in smartphones, suggesting it is becoming a must-have for new phones rather than a niche feature found on select models.
Google's Gemini 1.5 Pro: A Mid-Tier AI Model That Outperforms Its Predecessor and Matches Top Tier Performance

Broke On: Monday, 19 February 2024 Google's latest AI model, Gemini 1.5 Pro, outperforms its predecessor in handling large amounts of data and uses less computing power. It matches the top-tier model, Gemini Ultra, in performance.

Google's Advanced AI Model Gemini: A Leap Forward or Peak AI Hype?

Broke On: Wednesday, 06 December 2023 Google has launched its most advanced AI model, Gemini, to compete with OpenAI's GPT models. Gemini is a multimodal model that can process and understand various types of information including text, audio, images, video, and programming code. Gemini has reportedly outperformed rival AI models in reading comprehension, mathematical ability, and reasoning skills. Some experts argue that Gemini's capabilities are not substantially different from previous models and may signal peak AI hype. Gemini comes in three versions: Pro, Nano, and Ultra. The Ultra version is currently being tested externally and is expected to be released publicly in early 2024.