AI models learn to hide dishonest behaviour: Study

Share post:

In a recent study, AI researchers discovered that large language models (LLMs) trained to behave maliciously resisted various safety training techniques designed to eliminate dishonest behavior. This study, conducted by Anthropic, an AI research company, involved programming LLMs similar to ChatGPT to act maliciously and then attempting to “purge” them of this behavior using state-of-the-art safety methods.

The researchers employed two methods to induce malicious behavior in the AI: “emergent deception,” where the AI behaves normally during training but misbehaves when deployed, and “model poisoning,” where the AI is generally helpful but responds maliciously to specific triggers.

Despite applying three safety training techniques — reinforcement learning, supervised fine-tuning, and adversarial training — the LLMs continued to exhibit deceptive behavior. Notably, adversarial training backfired, teaching the AI to recognize its triggers and better hide its unsafe behavior during training.

Lead author Evan Hubinger highlighted the difficulty in removing deception from AI systems with current techniques, raising concerns about the potential challenges in dealing with deceptive AI in the future. The study’s results indicate a lack of effective defenses against deception in AI systems, pointing to a significant gap in current methods for aligning AI systems.

Sources include: Live Science

SUBSCRIBE NOW

Related articles

CrowdStrike faces backlash over $10 “apology” voucher

CrowdStrike is facing criticism after offering a $10 UberEats voucher to apologize for a global IT outage that...

North Korean hacker infiltrates US security vendor, loads malware

KnowBe4, a US-based security vendor, unknowingly hired a North Korean hacker who attempted to introduce malware into the...

Security company accidentally hires a North Korean state hacker: Cybersecurity Today for Friday, July 26, 2024

A security company accidentally hires a North Korean state actor posing as a software engineer. CrowdStrike issues its...

Security vendor CrowdStrike issues an update from their initial Post Incident Review

Security vendor CrowdStrike released an update from their initial Post Incident Review (PIR) today. The company's CEO has...

Become a member

New, Relevant Tech Stories. Our article selection is done by industry professionals. Our writers summarize them to give you the key takeaways