AI Firm OpenAI Terminates Accounts Linked to Foreign Covert Influence Operations
OpenAI, the artificial intelligence company co-founded by Sam Altman, has disclosed that it identified and dismantled several online campaigns that misused its technology to sway public opinion globally.
On May 30, OpenAI announced it had “terminated accounts linked to covert influence operations.”
“In the past three months, we have disrupted five covert influence operations that exploited our models to conduct deceptive activities online,” the company wrote.
These malicious actors used AI to craft comments on articles, create personas and biographies for social media accounts, and translate and proofread content.
One notable operation, dubbed “Spamouflage,” leveraged OpenAI’s tools to research social media and produce multilingual content on platforms such as X, Medium, and Blogspot, with the aim of “manipulating public opinion or influencing political outcomes.”
The operation also employed AI for debugging code and managing databases and websites.
Additionally, an operation named “Bad Grammar” targeted regions including Ukraine, Moldova, the Baltic States, and the United States. This group used OpenAI models to run Telegram bots and generate political comments.
Another group, “Doppelganger,” employed AI models to produce comments in multiple languages, including English, French, German, Italian, and Polish, which were then posted on platforms like X and 9GAG in an effort to sway public sentiment.
OpenAI also highlighted the “International Union of Virtual Media,” which used the technology to create long-form articles, headlines, and web content for its affiliated websites.
A commercial entity, the Israel-based company STOIC, was also named. It used AI to generate articles and comments that were posted on Instagram, Facebook, X, and websites linked to its operation.
The content created by these various groups covered a broad range of topics, “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and critiques of the Chinese government by Chinese dissidents and foreign entities.”
Ben Nimmo, a principal investigator at OpenAI and the author of the report, told The New York Times, “Our case studies provide examples from some of the most widely reported and longest-running influence campaigns currently active.”
The New York Times also noted that this marks the first instance of a major AI firm revealing how its tools were specifically used for online deception.
“To date, these operations do not appear to have significantly benefited from increased audience engagement or reach due to our services,” OpenAI concluded.