Security researcher Aaron Mulgrew has designed a highly sophisticated piece of malware using ChatGPT.
ChatGPT incorporates safeguards intended to prevent harmful usage, prohibiting the tool from writing code for dangerous software. The researcher, however, was able to circumvent these precautions by instructing ChatGPT to generate the malware function by function using simple prompts.
This particular malware is a sophisticated data-stealing application that can go unnoticed on infected computers. It is the sort of zero-day attack typically deployed by nation-states in sophisticated campaigns. Using ChatGPT, the researcher accomplished this in a matter of hours, whereas it would take a team of hackers many weeks to construct comparable malware.
To escape detection, the malware enters a computer disguised as a screen-saver application and auto-executes after a brief pause. It then scans the target system for photos, PDFs, and Word documents, splits them into smaller fragments, and hides the data inside images using steganography. Finally, the images containing the data fragments are uploaded to a Google Drive folder, which likewise helps the exfiltration evade discovery.
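To illustrate the steganography concept mentioned above (and nothing more), here is a minimal, benign sketch of least-significant-bit (LSB) embedding, the most common form of image steganography. It is purely educational: it operates on a plain list of 0-255 channel values standing in for image pixel data, and the function names and payload are invented for this example, not taken from Mulgrew's malware.

```python
# Illustrative only: minimal LSB steganography on a list of 0-255
# channel values (a stand-in for real image pixel data). Changing only
# the lowest bit of each value alters the image imperceptibly.

def embed(pixels, payload):
    """Hide payload bytes in the least significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover data")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the LSB
    return out

def extract(pixels, length):
    """Recover `length` hidden bytes from the pixel values' LSBs."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (pixels[i * 8 + j] & 1) << j
        data.append(byte)
    return bytes(data)

cover = list(range(256)) * 4          # fake "image" channel data
secret = b"fragment-01"               # one hypothetical data fragment
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
```

Because each cover value changes by at most 1, the carrier image looks unchanged to the eye, which is why steganographically hidden data can slip past casual inspection.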
In a VirusTotal test, just five of 69 antivirus engines detected the original version of the ChatGPT malware. The researcher was able to eliminate all of those detections in a later edition: only three antivirus programs flagged the final “commercial” version, which covered the full attack chain from penetration through exfiltration.
These results were achieved without writing any code by hand, using only ChatGPT prompts. According to Mulgrew, a team of five to ten malware developers would need many weeks to create a comparable attack without the support of an AI chatbot.
While Mulgrew’s malware is not expected to be released, the incident raises concerns about the potential misuse of ChatGPT by cybercriminals to create advanced malware attacks. This could result in significant damage to individuals, businesses, and even nation-states.
The sources for this piece include an article in BGR.