
AI Firm OpenAI Terminates Accounts Linked to Foreign Covert Influence Operations


OpenAI, the artificial intelligence company co-founded by Sam Altman, has disclosed that it identified and dismantled several online campaigns that misused its technology to sway public opinion around the world.

On May 30, OpenAI announced it had “terminated accounts linked to covert influence operations.”

“In the past three months, we have disrupted five covert influence operations that exploited our models to conduct deceptive activities online.”

These malicious actors utilized AI to craft comments on articles, create personas and biographies for social media accounts, and translate and proofread content.

One notable operation, dubbed “Spamouflage,” leveraged OpenAI’s tools to research social media and produce multilingual content on platforms such as X, Medium, and Blogspot, with the aim of “manipulating public opinion or influencing political outcomes.”

The operation also employed AI for debugging code and managing databases and websites.


Additionally, an operation named “Bad Grammar” targeted regions including Ukraine, Moldova, the Baltic States, and the United States. This group used OpenAI models to run Telegram bots and generate political comments.

Another group, “Doppelganger,” employed AI models to produce comments in multiple languages, including English, French, German, Italian, and Polish, which were then posted on platforms like X and 9GAG, in efforts to sway public sentiment.


OpenAI also highlighted the “International Union of Virtual Media,” which used the technology to create long-form articles, headlines, and web content for its affiliated websites.

A commercial entity, STOIC, was also mentioned. The company used AI to generate articles and comments on social media platforms such as Instagram, Facebook, and X, as well as on other websites linked to its operation.

The content created by these various groups covered a broad range of topics:

“Including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and critiques of the Chinese government by Chinese dissidents and foreign entities.”

Ben Nimmo, a principal investigator for OpenAI who authored the report, shared insights with The New York Times, stating, “Our case studies provide examples from some of the most widely reported and longest-running influence campaigns currently active.”

The New York Times also noted that this marks the first instance of a major AI firm revealing how its tools were specifically used for online deception.

“To date, these operations do not appear to have significantly benefited from increased audience engagement or reach due to our services,” OpenAI concluded.
