OpenAI recently deactivated a cluster of ChatGPT accounts linked to an Iranian disinformation operation targeting the 2024 U.S. elections. The accounts, attributed to the group Storm-2035, used the chatbot to generate fake news articles and social media content on topics including the U.S. presidential race and international conflicts. Although the campaign attracted little engagement, the incident highlights growing concern that AI tools can accelerate and amplify foreign influence operations.
OpenAI identified the activity shortly after Microsoft published a report detailing related Iranian efforts, including spear-phishing attacks targeting U.S. presidential campaigns. Beyond deactivating the ChatGPT accounts, OpenAI traced the generated content to profiles on platforms like X (formerly Twitter) and Instagram. Those accounts, now inactive, were part of a broader strategy to influence public opinion through fake news websites and posts.
The revelation underscores the ongoing risk of nation-state actors using AI to disrupt democratic processes. While this particular campaign had limited immediate impact, experts warn that such operations could escalate as the 2024 elections approach. OpenAI has developed new tools to detect and prevent this kind of activity, but the evolving landscape of AI-driven disinformation remains a significant challenge for tech companies and election-security efforts alike.