Top AI Labs Have Minimal Defense Against Espionage, Researchers Say


Some of the nation’s top artificial intelligence labs have insufficient security measures to protect against espionage, leaving potentially dangerous AI models exposed to theft, according to U.S. government-backed researchers.

Gladstone AI, a firm advising federal agencies on AI issues, conducted a sweeping probe into the security practices of leading AI outfits, including OpenAI, Google DeepMind, and Anthropic. The firm discovered that security measures were often lacking and that cavalier attitudes about safety were prevalent among AI professionals.

Jeremie Harris, CEO of Gladstone AI, said that the security practices at these labs would alarm security professionals if they saw them firsthand. One example: AI researchers working on powerful models in public places such as a Starbucks, without proper supervision, posing a significant security risk.

The investigation, conducted with the State Department, revealed minimal security measures and a lack of awareness about the threat of foreign espionage. Edouard Harris, Gladstone AI’s tech chief, shared an anecdote where a security official dismissed concerns about Chinese tech theft, stating no similar models had emerged in China, which the researchers found perplexing.

The State Department acknowledged ongoing efforts to understand AI research and mitigate the associated risks. While Gladstone AI’s findings feed into that broader assessment, they do not necessarily represent the U.S. government’s views.

Some AI labs, like Google DeepMind, have acknowledged security concerns. Google DeepMind has reconsidered how to publish and share its work due to fears of Chinese exploitation. The company stated it takes security seriously and follows AI principles to ensure responsible development.

“Our mission is to develop AI responsibly to benefit humanity — and safety has always been a core element of our work,” a company spokesperson said in a statement late last week. “We will continue to follow our AI principles and share our research best practices with others in the industry, as we advance our frontier AI models.”

The situation is worse in smaller AI labs. Edouard Harris noted that security measures there fall well below those of major companies like Google and Microsoft. As a result, he warned, the U.S. risks losing its AI leadership to espionage, with American developments routinely stolen.


