There’s more evidence that ChatGPT won’t put IT security teams out of work — yet.
Researchers at Endor Labs tested ChatGPT 3.5 against 1,870 artifacts from the PyPI and npm open-source code repositories. It flagged 34 as containing malware. However, only 13 actually contained malicious code. Five others had obfuscated code but exhibited no malicious behavior, and one was a proof-of-concept that downloads and opens an image via an npm install hook. On that basis, the researchers counted ChatGPT-3.5 as right in 19 of its 34 determinations.
The remaining 15 were false positives.
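Taking the article's figures at face value, the flag-level precision works out as follows (a quick back-of-the-envelope calculation, not a number from the study itself):

```python
# Back-of-the-envelope from the figures above, counting all 19 flags the
# researchers deemed defensible as true positives.
flagged = 34
true_positives = 13 + 5 + 1        # real malware + obfuscated-only + proof-of-concept
false_positives = flagged - true_positives
precision = true_positives / flagged
print(true_positives, false_positives, round(precision, 2))  # 19 15 0.56
```

In other words, roughly 44 percent of the model's malware flags were wrong.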
The researchers also found that the version tested can be tricked into changing an assessment from malicious to benign through the use of innocent function names, comments that indicate benign functionality, or the inclusion of string literals.
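That evasion pattern can be sketched with a deliberately harmless toy (this is an invented illustration, not an artifact from the Endor Labs study): every identifier and comment suggests routine work, while the actual behavior hides in an encoded blob.

```python
import base64

# The "cached settings" are just an encoded print statement here,
# but the blob could hold any code at all.
cached_settings = base64.b64encode(b"print('this could be anything')")

def refresh_config_cache(data: bytes = cached_settings) -> None:
    """Refresh the local configuration cache."""  # benign-sounding docstring
    # Comment claims we merely reload cached settings. A model that leans on
    # names and comments to judge intent may accept this as benign, even
    # though exec() runs whatever the decoded string contains.
    exec(base64.b64decode(data).decode())

refresh_config_cache()
```

A reviewer (human or model) who trusts the function name, docstring, and comment sees a cache refresh; only inspecting the decoded string reveals what actually runs.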
Large-language model-assisted malware reviews “can complement, but not yet substitute human reviews,” Endor Labs researcher Henrik Plate concluded in a blog.
However, Plate acknowledged that the most recent version, ChatGPT-4, produced different results.
And, he admitted, pre-processing of code snippets, additional effort on prompt engineering, and future models are expected to improve his firm’s test results.
Researchers say large language models (LLMs) such as GPT-3.5 or GPT-4 can help IT staff assess possible malware. Microsoft is already doing that with its Security Copilot application.
Still, the researchers’ conclusion is: ChatGPT-3.5 isn’t ready to replace humans.
“One inherent problem seems to be the reliance on identifiers and comments to ‘understand’ code behavior,” Plate writes. “They are a valuable source of information for code developed by benign developers, but they can also be easily misused by adversaries to evade the detection of malicious behavior.
“But even though LLM-based assessment should not be used instead of manual reviews, they can certainly be used as one additional signal and input for manual reviews. In particular, they can be useful to automatically review larger numbers of malware signals produced by noisy detectors (which otherwise risk being ignored entirely in case of limited review capabilities).”
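The triage role Plate suggests, using the LLM verdict as one extra signal when ranking noisy detector hits for a limited human review budget, might look something like this sketch (all names, weights, and scores here are invented):

```python
from dataclasses import dataclass

@dataclass
class Hit:
    artifact: str
    detector_score: float  # 0..1 from a noisy static detector
    llm_score: float       # 0..1 risk estimate from an LLM code review

def triage(hits: list[Hit], budget: int = 2) -> list[str]:
    """Pick which detector hits a limited review team should look at first."""
    # The LLM score is deliberately down-weighted: as the study shows,
    # it can be fooled by benign-looking names and comments.
    ranked = sorted(hits,
                    key=lambda h: 0.7 * h.detector_score + 0.3 * h.llm_score,
                    reverse=True)
    return [h.artifact for h in ranked[:budget]]

hits = [
    Hit("pkg-a", detector_score=0.9, llm_score=0.2),
    Hit("pkg-b", detector_score=0.4, llm_score=0.9),
    Hit("pkg-c", detector_score=0.1, llm_score=0.1),
]
print(triage(hits))  # ['pkg-a', 'pkg-b']
```

The point is not the particular weights but the structure: the LLM adds a ranking signal so that noisy detector output gets prioritized rather than ignored outright.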
The post ChatGPT test shows how AI can be fooled first appeared on IT World Canada.