Campaigns used OpenAI's technology to generate social media posts, translate articles, write headlines, and debug programs
Goal was to win support for political campaigns or sway public opinion in geopolitical conflicts
None of the efforts gained significant traction, but other groups may still be using OpenAI's tools without the company's knowledge
OpenAI disrupted 5 influence campaigns that used its technology for deception
State actors and private companies from Russia, China, Iran, and Israel were involved
OpenAI, a leading company in generative artificial intelligence (AI), recently released a report detailing the discovery and disruption of five online campaigns that utilized its technology for deceptive manipulation of public opinion worldwide. The influence operations were carried out by state actors and private companies from Russia, China, Iran, and Israel. This marks the first time a major AI company has revealed how its specific tools were used for such purposes.
The campaigns primarily used OpenAI's technology to generate social media posts, translate and edit articles, write headlines, and debug computer programs. Their goal was to win support for political campaigns or sway public opinion in geopolitical conflicts. However, none of these efforts gained significant traction.
Ben Nimmo, a principal investigator on OpenAI's intelligence and investigations team, stated that the case studies provided in the report illustrate examples from some of the most widely reported and longest-running influence campaigns currently active. Despite their limited success, it is possible that other groups may still be using OpenAI's tools without the company's knowledge.
The rise of generative AI has raised concerns about its potential contribution to online disinformation, particularly during a year when major elections are taking place across the globe. By revealing these findings, OpenAI aimed to show the realities of how this technology is changing online deception.
OpenAI found that groups from Russia, China, Iran and Israel used its technology to try to influence political discourse around the world.
None of these groups managed to get much traction; the social media accounts associated with them reached few users and had just a handful of followers.
Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, said it’s possible that other groups may still be using OpenAI’s tools without the company’s knowledge.
Accuracy
State actors and private companies from Russia, China, Iran, and Israel ran these influence campaigns.
Deception (30%)
The article engages in selective reporting: it covers only the groups from Russia, China, Iran, and Israel, including an Israeli political campaign firm, that used OpenAI's technology for propaganda campaigns, and does not mention whether groups from other countries may have used the technology for similar purposes. This is a form of deception by omission.
Bad Grammar, the previously unknown group, used OpenAI tech to help make a program that could automatically post on the messaging app Telegram. Bad Grammar then used OpenAI tech to generate posts and comments in Russian and English arguing that the United States should not support Ukraine.
None of these groups managed to get much traction; the social media accounts associated with them reached few users and had just a handful of followers.
Fallacies (90%)
The article contains a few informal fallacies and one example of a dichotomous depiction. No formal logical fallacies were detected in the author's statements.
. . . propagandists who've been active for years on social media are using AI tech to boost their campaigns.
Bad Grammar, the previously unknown group, used OpenAI tech to help make a program that could automatically post on the messaging app Telegram. Bad Grammar then used OpenAI tech to generate posts and comments in Russian and English arguing that the United States should not support Ukraine.
An Iranian group known as the International Union of Virtual Media also used OpenAI's tech to create articles that it published on its site.
Bias (90%)
The author uses language that depicts the Russian, Chinese, Iranian, and Israeli groups as propagandists trying to influence political discourse around the world. This is an example of bias, as it implies that these groups have negative intent.
ChatGPT maker OpenAI said Thursday that it caught groups from Russia, China, Iran and Israel using its technology to try to influence political discourse around the world.
None of these groups managed to get much traction; the social media accounts associated with them reached few users and had just a handful of followers. They remain a concern, however, because they have been active on social media for years and are using AI tech to boost their campaigns.
OpenAI identified and disrupted five online campaigns that used its generative artificial intelligence technologies for deceptive manipulation of public opinion worldwide.
State actors and private companies from Russia, China, Iran, and Israel ran these influence campaigns.
OpenAI’s report is the first time a major AI company has revealed how its specific tools were used for online deception.
Accuracy
State actors and private companies from Russia, China, Iran, and Israel ran the influence campaigns.
The operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines, and debug computer programs.
Deception (50%)
The article is deceptive in the way it attributes actions to OpenAI, failing to clearly differentiate between third parties' use of the company's technology and OpenAI's own direct actions or decisions. It also implies that OpenAI has a responsibility to prevent these uses of its technology, which may not be entirely fair given that it is a tool provider rather than an intelligence operation.
OpenAI said on Thursday that it had identified and disrupted five online campaigns that used its generative artificial intelligence technologies to deceptively manipulate public opinion around the world and influence geopolitics.