Google has unveiled its Secure AI Framework (SAIF), a plan to help organizations protect their AI systems from cyber threats, such as attackers manipulating AI models or stealing data.
Google is concerned that cybersecurity and data privacy were often afterthoughts in the adoption of earlier technologies such as social media, and the company does not want those mistakes repeated with AI. It stresses the need to build strong security measures in from the start. Phil Venables, Chief Information Security Officer at Google Cloud, said that basic security elements can manage many AI risks and that organizations should not neglect the fundamentals.
Google's Secure AI Framework proposes six practices for organizations to adopt: extending existing security controls to cover AI systems, drawing on AI-specific threat intelligence research, automating cyber defenses, conducting regular security reviews, performing penetration tests, and assembling a team versed in AI-related risks.
Google is working with customers and governments to encourage adoption of these principles. The company also plans to expand its bug bounty program to cover vulnerabilities related to AI safety and security, and it is gathering feedback on the framework from industry partners and government agencies.
The sources for this piece include an article in Axios.