Employees say OpenAI and Google DeepMind are hiding dangers from the public

An article in Time revealed that a group of current and former employees at leading AI companies OpenAI and Google DeepMind has published a letter warning of the dangers of advanced AI, alleging that the companies prioritize financial gain while avoiding oversight. Thirteen employees signed the letter, titled “A Right to Warn about Advanced Artificial Intelligence” — eleven from OpenAI and two from Google DeepMind, with six signatories remaining anonymous.

The coalition cautions that, without proper regulation, AI systems are powerful enough to cause serious harm — entrenching existing inequalities, spreading misinformation, and potentially leading to human extinction through loss of control over autonomous AI systems. The signatories assert that AI companies hold information about these risks but are not required to disclose it to governments, keeping the true capabilities of their systems secret. This leaves current and former employees as the only people able to hold the companies accountable, yet many are constrained by confidentiality agreements.

Key points of the letter:

  • The letter demands that AI companies stop forcing employees into agreements that prevent them from criticizing their employer over risk-related concerns.
  • It calls for the creation of an anonymous process for employees to raise concerns to board members and relevant regulators.
  • It advocates for a culture of open criticism and protection against retaliation for employees who share risk-related confidential information.

Employee Protections and Public Concerns

Lawrence Lessig, the group’s pro bono lawyer, emphasized the importance of employees being able to speak freely without retribution as a line of defense for AI safety. Research by the AI Policy Institute shows that 83% of Americans believe AI could accidentally cause a catastrophic event, and 82% do not trust tech executives to self-regulate the industry.

Governments worldwide are moving to regulate AI, but progress lags behind the technology’s rapid advancement. The E.U. passed the world’s first comprehensive AI legislation earlier this year, and international cooperation has been pursued through AI Safety Summits and U.N. discussions. In October 2023, President Joe Biden signed an AI executive order requiring AI companies to disclose their development and safety-testing plans to the Department of Commerce.
