Security researcher creates advanced malware with ChatGPT

Security researcher Aaron Mulgrew has created a highly sophisticated piece of malware using ChatGPT.

ChatGPT incorporates safeguards intended to prevent harmful use, including blocking requests to write code for malicious software. The researcher, however, circumvented these protections by instructing ChatGPT to generate the malware one function at a time using simple prompts.

The result is a sophisticated data-stealing application that can run unnoticed on infected computers, the kind of zero-day tool typically seen in nation-state attacks. Mulgrew assembled it with ChatGPT in a matter of hours, a task that would normally take a team of hackers many weeks.

To escape detection, the malware reaches a computer as a screen saver application that auto-executes after a brief pause. It then scans the target system for photos, PDFs, and Word documents, splits them into smaller fragments, and hides those fragments inside images using steganography. Finally, the images carrying the data fragments are uploaded to a Google Drive folder, which likewise helps the traffic avoid detection.
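The article does not say which steganographic method was used, but the term usually refers to embedding data in image pixels. Below is a minimal, illustrative sketch of classic least-significant-bit (LSB) steganography using the Pillow library; the file names, message, and function names are placeholders for illustration and are not the researcher's code.

```python
# Illustrative LSB steganography: hide a short text message in an image's pixel data.
from PIL import Image

def embed_message(cover_path: str, out_path: str, message: str) -> None:
    """Hide a short UTF-8 message in the lowest bit of each RGB channel."""
    img = Image.open(cover_path).convert("RGB")
    data = message.encode("utf-8")
    # 32-bit length header followed by the message bits, so extraction knows where to stop.
    bits = f"{len(data):032b}" + "".join(f"{byte:08b}" for byte in data)
    channels = [c for pixel in img.getdata() for c in pixel]
    if len(bits) > len(channels):
        raise ValueError("cover image too small for this message")
    for i, bit in enumerate(bits):
        channels[i] = (channels[i] & ~1) | int(bit)  # overwrite the least significant bit
    stego = Image.new("RGB", img.size)
    stego.putdata(list(zip(channels[0::3], channels[1::3], channels[2::3])))
    stego.save(out_path, "PNG")  # PNG is lossless, so the hidden bits survive saving

def extract_message(stego_path: str) -> str:
    """Recover the hidden message by reading the low bit of every channel."""
    channels = [c for pixel in Image.open(stego_path).convert("RGB").getdata() for c in pixel]
    bits = "".join(str(c & 1) for c in channels)
    length = int(bits[:32], 2)
    payload = bits[32:32 + 8 * length]
    return bytes(int(payload[i:i + 8], 2) for i in range(0, len(payload), 8)).decode("utf-8")

if __name__ == "__main__":
    embed_message("cover.png", "stego.png", "example message")
    print(extract_message("stego.png"))  # prints "example message"
```

Because only the lowest bit of each color channel changes, the altered image is visually indistinguishable from the original, which is what makes this kind of exfiltration hard to spot.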

In a VirusTotal test, only five of 69 antivirus engines flagged the original version of the ChatGPT-built malware, and the researcher was able to eliminate those detections in a later edition. Only three engines detected the final "commercial" version, which covered the full chain from initial penetration through data exfiltration.

Mulgrew wrote none of the code himself; the entire result was produced through ChatGPT prompts alone. He estimates that a team of five to ten malware developers would need many weeks to create an equivalent attack without the support of an AI chatbot.

While Mulgrew’s malware is not expected to be released, the experiment raises concerns that cybercriminals could misuse ChatGPT to create advanced malware, causing significant damage to individuals, businesses, and even nation-states.

The sources for this piece include an article in BGR.
