Google expands AI bug bounty program rewards 

Share post:

Google has expanded its bug bounty program to include new categories of attacks specific to AI systems. The program will reward security researchers for reporting issues such as prompt injection, training data extraction, model manipulation, adversarial perturbation attacks, and data theft targeting model-training data.

The scope of the AI bug bounty program includes new categories of designed to encourage security researchers to identify and report potential threats to the security and reliability of Google’s AI products.

Prompt attacks are one of the categories, and they involve the creation of adversarial prompts that can manipulate a model’s behavior, potentially leading to harmful or offensive content being generated. This kind of attack can pose significant risks, and rewarding researchers for identifying such vulnerabilities is a proactive step in enhancing AI security.

Training data extraction is another category. Attackers may attempt to reconstruct verbatim training examples that contain sensitive information from AI models. This poses not only privacy concerns but also the risk of uncovering and exploiting model biases. Rewarding the discovery of such vulnerabilities can help in mitigating these risks.

Model manipulation is a category that involves covertly altering a model’s behavior to trigger predefined adversarial actions. By rewarding researchers for identifying instances of model manipulation, Google aims to ensure that AI models remain robust and resistant to unauthorized tampering.

Adversarial perturbation is another area of concern, where inputs are designed to provoke highly unexpected and deterministic outputs from AI models. These types of attacks can lead to unpredictable and potentially harmful consequences. By offering rewards for the discovery of adversarial perturbations, Google reinforces the importance of addressing this vulnerability.

Model theft or exfiltration is the fifth category, where attackers may attempt to steal details about a model, such as its architecture or weights. This information can be exploited for unauthorized use or for creating similar models, potentially leading to intellectual property theft. Rewarding researchers for identifying such breaches can help protect Google’s AI assets.

Google is also providing more specific guidance on what types of reports are in scope for rewards. For example, prompt injection attacks that are invisible to victims and change the state of the victim’s account or assets are in scope, while using a product to generate violative, misleading, or factually incorrect content in your own session is out of scope.

The sources for this piece include an article in TheVerge.

Featured Tech Jobs

SUBSCRIBE NOW

Related articles

Russian threat actor expanding its target list, warns Five Eyes report

APT29 is increasingly going after cloud services in mo

Canada’s privacy watchdog investigating hack at Global Affairs

Inquiry will look into adequacy of data safeguards at the federal

Sidebar: The powerful Digital Safety Commission

A look at the powers of the proposed five-person body charged with overseeing the Online

Cyber attack on Hamilton knocks out municipal phone, email

One of Ontario’s biggest cities is in the second day of dealing with a cyber attack. Hamilton, a municipality of about 570,000 on the shore of Lake Ontario, said Sunday it had suffered a city-wide phone and email “disruption” to municipal and public library services, which included the Bus Check Info Line and the HSRNow

Become a member

New, Relevant Tech Stories. Our article selection is done by industry professionals. Our writers summarize them to give you the key takeaways