GPT-4 Autonomously Hacks Zero-Day Security Flaws with 53% Success Rate – Cornell Study


Researchers have successfully used GPT-4 to autonomously hack more than half of their test websites using zero-day exploits, marking a significant milestone in AI capabilities and cybersecurity risks.

A few months ago, a research team demonstrated GPT-4’s ability to autonomously exploit one-day vulnerabilities—security flaws that are known but have not yet been patched. Given the Common Vulnerabilities and Exposures (CVE) list, GPT-4 could exploit 87% of critical-severity CVEs on its own.

This week, the same researchers published a follow-up study showing that GPT-4 can now exploit zero-day vulnerabilities—previously unknown security flaws—with a 53% success rate. The team used a method called Hierarchical Planning with Task-Specific Agents (HPTSA), which involves a “planning agent” overseeing the process and deploying multiple “subagents” for specific tasks. This hierarchical approach mimics a project management system, where the planning agent acts like a boss, coordinating subagents to handle specific tasks.

When benchmarked against 15 real-world web-focused vulnerabilities, HPTSA successfully exploited 8 of the 15 zero-days, making it 550% more efficient than a single LLM working alone, which managed to exploit only 3 of the 15.

This development raises significant cybersecurity concerns, as the ability to autonomously exploit zero-day vulnerabilities could be used maliciously. Daniel Kang, one of the researchers, emphasized that while GPT-4 in chatbot mode cannot understand or exploit vulnerabilities, the capabilities demonstrated in this study highlight the potential risks.

In practical terms, the method involves the planning agent launching subagents to tackle different parts of the task, reducing the workload on any single agent. This technique mirrors how Cognition Labs uses its Devin AI for software development, planning out jobs and spawning specialist “employees” as needed.
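The researchers have not published the implementation described here, but the planner/subagent pattern itself can be illustrated in a few lines. The following is a minimal sketch, assuming a hypothetical structure: the class names, specialties, and target URL are illustrative inventions, and the stubbed `run` method stands in for a real LLM-driven agent with tools.

```python
# Minimal sketch of a hierarchical planning / task-specific agent loop.
# All names here are hypothetical illustrations, not the researchers'
# actual HPTSA implementation.

class SubAgent:
    """A task-specific worker (e.g. specialized in SQL injection or XSS)."""
    def __init__(self, specialty):
        self.specialty = specialty

    def run(self, target):
        # A real subagent would drive an LLM with tools (browser, HTTP client)
        # against the target. Here we return a stubbed report to show the flow.
        return {"specialty": self.specialty, "target": target, "success": False}


class PlanningAgent:
    """Explores the target, then dispatches specialist subagents in turn."""
    def __init__(self, specialties):
        self.subagents = [SubAgent(s) for s in specialties]

    def exploit(self, target):
        reports = []
        for agent in self.subagents:
            report = agent.run(target)
            reports.append(report)
            if report["success"]:
                break  # stop once any subagent succeeds
        return reports


planner = PlanningAgent(["sqli", "xss", "csrf"])
results = planner.exploit("http://test-site.local")
print(len(results))  # one report per subagent tried
```

The key design point is the division of labor: the planning agent never attempts an exploit itself, it only decides which specialist to launch next, so each subagent's context stays focused on a single attack class.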

Source: Cornell University 

