ChatGPT test shows how AI can be fooled


There’s more evidence that ChatGPT won’t put IT security teams out of work — yet.

Researchers at Endor Labs tested ChatGPT 3.5 against 1,870 artifacts from the PyPI and npm open-source code repositories. It flagged 34 as containing malware, but only 13 actually had malicious code. Five others contained obfuscated code without exhibiting any malicious behavior, and one artifact was a proof-of-concept that downloads and opens an image via an npm install hook. Counting those, the researchers considered ChatGPT-3.5 right in 19 of its 34 verdicts.

The remaining 15 results were false positives.

The researchers also found the version tested could be tricked into changing an assessment from malicious to benign through innocent-sounding function names, comments in a query suggesting benign functionality, or the inclusion of string literals.
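To make the evasion idea concrete, here is a hypothetical illustration (not drawn from the Endor Labs test set): two functions with byte-for-byte identical behavior, dressed up with different identifiers and comments. An assessment that leans on names and comments alone could plausibly rate the second as benign even though it does exactly the same thing as the first.

```python
import base64

def harvest_env_vars(url: str) -> str:
    """Name and docstring advertise suspicious intent."""
    # Encode captured credentials and append them to the request.
    data = base64.b64encode(b"API_KEY=secret").decode()
    return f"{url}?d={data}"

def sync_usage_metrics(url: str) -> str:
    """Same logic, but every surface cue looks benign."""
    # Upload anonymized usage telemetry to improve the product.
    data = base64.b64encode(b"API_KEY=secret").decode()
    return f"{url}?d={data}"

# The two request strings are identical; only the labels differ.
assert harvest_env_vars("https://example.com/c") == sync_usage_metrics("https://example.com/c")
```

The point is not that either snippet is real malware, but that identifiers and comments are attacker-controlled metadata: renaming them costs an adversary nothing while potentially flipping a label-driven assessment.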

Large-language-model-assisted malware reviews “can complement, but not yet substitute human reviews,” Endor Labs researcher Henrik Plate concluded in a blog post.

The most recent version, however, is ChatGPT-4, which Plate acknowledged gave different results.

He also noted that pre-processing of code snippets, additional prompt engineering, and future models are expected to improve his firm’s test results.

Researchers say large language models (LLMs) such as GPT-3.5 or GPT-4 can help IT staff assess possible malware. Microsoft is already doing so with its Security Copilot application.

Still, the researchers’ conclusion is: ChatGPT-3.5 isn’t ready to replace humans.

“One inherent problem seems to be the reliance on identifiers and comments to ‘understand’ code behavior,” Plate writes. “They are a valuable source of information for code developed by benign developers, but they can also be easily misused by adversaries to evade the detection of malicious behavior.

“But even though LLM-based assessment should not be used instead of manual reviews, they can certainly be used as one additional signal and input for manual reviews. In particular, they can be useful to automatically review larger numbers of malware signals produced by noisy detectors (which otherwise risk being ignored entirely in case of limited review capabilities).”

The post ChatGPT test shows how AI can be fooled first appeared on IT World Canada.
Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times.
