Employees say OpenAI and Google DeepMind are hiding dangers from the public


An article in Time revealed that a group of current and former employees at leading AI companies OpenAI and Google DeepMind has published a letter warning about the dangers of advanced AI, alleging that the companies prioritize financial gains while avoiding oversight. Thirteen employees, eleven from OpenAI and two from Google DeepMind, signed the letter, titled “A Right to Warn about Advanced Artificial Intelligence,” with six signatories remaining anonymous.

The coalition cautions that, without proper regulation, AI systems are powerful enough to cause serious harm, including entrenching existing inequalities, spreading misinformation, and potentially leading to human extinction through loss of control over autonomous AI systems. The signatories assert that AI companies possess information about these risks but are not required to disclose it to governments, keeping the true capabilities of their systems secret. This makes current and former employees the only ones able to hold the companies accountable, though many are constrained by confidentiality agreements.

Key points of the letter:

  • The letter demands that AI companies stop requiring employees to sign agreements that prevent them from criticizing their employer over risk-related concerns.
  • It calls for the creation of an anonymous process for employees to raise concerns to board members and relevant regulators.
  • It advocates for a culture of open criticism and protection against retaliation for employees who share risk-related confidential information.

Employee Protections and Public Concerns

Lawrence Lessig, the group’s pro bono lawyer, emphasized the importance of employees being able to speak freely without retribution as a line of safety defense. Research by the AI Policy Institute shows that 83% of Americans believe AI could accidentally cause a catastrophic event, and 82% do not trust tech executives to self-regulate the industry.

Governments worldwide are moving to regulate AI, but progress lags behind the rapid advancement of AI technology. The E.U. passed the world’s first comprehensive AI legislation earlier this year, and international cooperation has been pursued through AI Safety Summits and U.N. discussions. In October 2023, President Joe Biden signed an AI executive order requiring AI companies to disclose their development and safety testing plans to the Department of Commerce.

 
