AI Firm OpenAI Terminates Accounts Linked to Foreign Covert Influence Operations

OpenAI, the artificial intelligence company co-founded by Sam Altman, has disclosed that it identified and dismantled several online campaigns that misused its technology to sway public opinion around the world.

On May 30, OpenAI announced it had “terminated accounts linked to covert influence operations.”

“In the past three months, we have disrupted five covert influence operations that exploited our models to conduct deceptive activities online,” the company said.

These malicious actors utilized AI to craft comments on articles, create personas and biographies for social media accounts, and translate and proofread content.

One notable operation, dubbed “Spamouflage,” leveraged OpenAI’s tools to research social media and produce multilingual content on platforms such as X, Medium, and Blogspot, with the aim of “manipulating public opinion or influencing political outcomes.”

The operation also employed AI for debugging code and managing databases and websites.


Additionally, an operation named “Bad Grammar” targeted regions including Ukraine, Moldova, the Baltic States, and the United States. This group used OpenAI models to run Telegram bots and generate political comments.

Another group, “Doppelganger,” employed AI models to produce comments in multiple languages, including English, French, German, Italian, and Polish, which were then posted on platforms like X and 9GAG, in efforts to sway public sentiment.


OpenAI also highlighted the “International Union of Virtual Media,” which used the technology to create long-form articles, headlines, and web content for its affiliated websites.

A commercial entity, STOIC, was also named. This company used AI to generate articles and comments that were posted on Instagram, Facebook, X, and other websites linked to its operation.

The content created by these various groups covered a broad range of topics:

“Including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and critiques of the Chinese government by Chinese dissidents and foreign entities.”

Ben Nimmo, a principal investigator for OpenAI who authored the report, shared insights with The New York Times, stating, “Our case studies provide examples from some of the most widely reported and longest-running influence campaigns currently active.”

The New York Times also noted that this marks the first instance of a major AI firm revealing how its tools were specifically used for online deception.

“To date, these operations do not appear to have significantly benefited from increased audience engagement or reach due to our services,” OpenAI concluded.

