Some of the nation's top artificial intelligence labs have insufficient security measures to protect against espionage, leaving potentially dangerous AI models exposed to theft, according to U.S. government-backed researchers.
Gladstone AI, a firm advising federal agencies on AI issues, conducted a sweeping probe into the security practices of leading AI outfits, including OpenAI, Google DeepMind, and Anthropic. The firm discovered that security measures were often lacking and that cavalier attitudes about safety were prevalent among AI professionals.
Jeremie Harris, CEO of Gladstone AI, said the security practices at these labs would alarm professionals who witnessed them. As one example, he described AI researchers working on powerful models in public places such as Starbucks, unsupervised, a significant security risk.
The investigation, conducted with the State Department, revealed minimal security measures and a lack of awareness about the threat of foreign espionage. Edouard Harris, Gladstone AI's tech chief, shared an anecdote in which a security official dismissed concerns about Chinese tech theft on the grounds that no similar models had emerged in China, reasoning the researchers found perplexing.
The State Department acknowledged ongoing efforts to understand AI research and mitigate associated risks. While Gladstone AI's findings feed into that broader assessment, they do not necessarily represent the U.S. government's views.
Some AI labs, like Google DeepMind, have acknowledged security concerns. Google DeepMind has reconsidered how to publish and share its work due to fears of Chinese exploitation. The company stated it takes security seriously and follows AI principles to ensure responsible development.
"Our mission is to develop AI responsibly to benefit humanity, and safety has always been a core element of our work," a company spokesperson said in a statement late last week. "We will continue to follow our AI principles and share our research best practices with others in the industry, as we advance our frontier AI models."
The situation is worse in smaller AI labs, however. Edouard Harris noted that security measures there fall well below those of major companies like Google and Microsoft. As a result, he argued, the U.S. is losing its AI leadership to espionage, with American developments routinely stolen.