Former OpenAI employee alleges plan for AGI bidding war


In a recent interview, former OpenAI safety researcher Leopold Aschenbrenner made startling claims about his ex-employer’s strategy regarding artificial general intelligence (AGI).

Aschenbrenner claimed that he had been fired for raising concerns about security, despite OpenAI’s assertion that it did not penalize employees for speaking out.

Speaking with tech podcaster Dwarkesh Patel, Aschenbrenner said he believed OpenAI had once considered initiating a global bidding war for AGI among the United States, China, and Russia.

Aschenbrenner recounted hearing “from multiple people” within the company about a plan where OpenAI leadership intended to fund and sell AGI by pitting these governments against each other. The idea was to create a competitive environment where nations would outbid each other for access to AGI technology. This plan, he noted, included the possibility of selling AGI to China and Russia, which he found “surprising” and concerning.

“There’s also something that feels eerily familiar about starting this bidding war and then playing them off each other, saying, ‘well, if you don’t do this, China will do it,’” Aschenbrenner remarked during the interview.

The conversation took a personal turn when Aschenbrenner explained why he was fired from OpenAI earlier this year. According to him, the dismissal followed his circulation of a memo warning that the Chinese Communist Party could steal “key algorithmic secrets.” Human resources deemed the memo “racist” and “unconstructive,” and questioned his loyalty to the company.

Aschenbrenner was ultimately fired for leaking information after OpenAI examined his computer and found documents he had shared with external researchers during a brainstorming session on “preparedness, safety, and security measures.” The documents included a projection that AGI could arrive by 2027 or 2028, which HR considered confidential.

OpenAI has expressed its commitment to building safe AGI but disagreed with Aschenbrenner’s characterization of the company’s actions. OpenAI CEO Sam Altman has publicly discussed similar timelines, leading Aschenbrenner to believe that the information he shared was not sensitive.

The allegations raise significant questions about the ethical considerations and geopolitical implications of AGI development, highlighting the need for transparency and responsible handling of advanced AI technologies.
