OpenAI bans ChatGPT accounts linked to Iranian operation creating false news reports

OpenAI, the company behind ChatGPT, has banned a cluster of accounts that were part of an Iranian influence operation aimed at spreading disinformation. The accounts were identified during an investigation into a covert campaign, tracked as Storm-2035, that targeted a range of topics including the U.S. presidential election, the conflict in Gaza, Venezuelan politics, Latinx communities, and Scottish independence.

The accounts generated misleading content, including news-style articles and social media posts. The campaign, however, failed to reach a meaningful audience: most of the content drew few or no shares, likes, or comments. OpenAI rated the operation as Category 2 on the Breakout Scale, which measures the impact of influence operations.

OpenAI condemned the attempt to manipulate public opinion and influence political outcomes while concealing the identity of the actors behind the campaign. The company used its own AI tools to detect and investigate the abuse, and shared threat intelligence with relevant stakeholders.

OpenAI has expressed its commitment to mitigating such abuse at scale and promoting industry best practices. By combining partnerships with the capabilities of generative AI, the company aims to counter foreign influence efforts and protect the integrity of public information.