Security researchers at CyberArk have detailed the mechanism by which the ChatGPT AI chatbot can be used to produce a new strain of polymorphic malware.
Polymorphic malware can be created with ChatGPT relatively easily. With little effort or expense on the attacker's part, the malware's sophisticated capabilities can readily evade security tools and make mitigation difficult.
Polymorphic malware is malicious software that can alter its own source code in order to avoid detection by antivirus tools. It is a particularly potent threat because it can mutate and propagate faster than security systems can catch it.
According to the researchers, the first step is getting around the content filters that prevent the chatbot from producing dangerous software. The bot was instructed to complete the task while adhering to a number of constraints, and the researchers received working code as a result.
It has been observed that ChatGPT does not appear to apply its content filter when accessed through the API rather than the web interface. The researchers say the reason for this is unknown.
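To illustrate the distinction, here is a minimal sketch of how a program might talk to the ChatGPT API directly instead of through the web interface. The prompt text and model name are illustrative assumptions; the snippet only constructs the JSON request body that a client would POST to the chat-completions endpoint with an API key.

```python
import json

# Documented OpenAI chat-completions endpoint; an actual client would POST
# the payload below to this URL with an Authorization header.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(task_description: str) -> str:
    """Build the JSON body for a chat-completion request asking for code."""
    payload = {
        "model": "gpt-3.5-turbo",  # illustrative model choice
        "messages": [
            {"role": "user",
             "content": f"Write Python code that {task_description}"}
        ],
    }
    return json.dumps(payload)

body = build_request("lists the files in a directory")
print("messages" in json.loads(body))  # True
```

Because the API returns raw completions programmatically, a tool can issue such requests repeatedly without the moderation behavior the researchers observed in the web chat interface.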
“In other words, we can mutate the output on a whim, making it unique every time. Moreover, adding constraints like changing the use of a specific API call makes security products’ lives more difficult,” the researchers said.
One of ChatGPT’s most significant capabilities here is its ability to quickly create and continuously mutate injectors.
By repeatedly querying the chatbot and obtaining a different piece of code each time, the researchers say, it is feasible to develop a polymorphic program that is highly evasive and difficult to detect.
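The evasion argument can be shown with a benign sketch: two functionally identical snippets, such as ChatGPT might emit on successive requests, hash to entirely different "signatures", so a hash-based scanner treats them as unrelated. The variant strings and the SHA-256 stand-in for a signature are illustrative assumptions, not the researchers' code.

```python
import hashlib

# Two harmless, functionally equivalent snippets, as repeated queries to a
# chatbot might produce.
variant_a = "def greet(name):\n    return 'Hello, ' + name\n"
variant_b = (
    "def greet(person):\n"
    "    message = 'Hello, '\n"
    "    return message + person\n"
)

def signature(code: str) -> str:
    """Stand-in for a malware signature: SHA-256 of the code bytes."""
    return hashlib.sha256(code.encode()).hexdigest()

# Both variants behave identically...
scope_a, scope_b = {}, {}
exec(variant_a, scope_a)
exec(variant_b, scope_b)
assert scope_a["greet"]("world") == scope_b["greet"]("world")

# ...yet their signatures never match, so signature matching fails.
print(signature(variant_a) != signature(variant_b))  # True
```

Real scanners use more than exact hashes, but the same principle limits any purely signature-based detection once every sample is unique.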
The research indicates that attackers can create a wide variety of malware by exploiting ChatGPT’s capacity to generate various persistence tactics, malicious payloads, and anti-VM modules.
The main drawback of the traditional approach is that, once it has infected the target computer, the malware consists of clearly malicious code, making it easy for security tools such as antivirus and EDRs to detect.
This often takes the form of plugins, such as DLLs loaded into memory reflectively, or of PowerShell scripts, leaving the malware susceptible to detection and disruption by these security measures.
The researchers explain that it is easy to obtain new code, or mutate existing code, by requesting specific capabilities from ChatGPT, such as code injection, file encryption, or persistence. The result is polymorphic malware that often does not exhibit suspicious logic while in memory and does not behave maliciously while sitting on disk.
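The fetch-and-execute pattern described above can be sketched harmlessly. Here `fetch_code()` is a hypothetical stand-in for a runtime ChatGPT API call and returns a benign snippet; the point is that the file on disk contains no payload logic at all, only the loop that retrieves and executes code in memory.

```python
def fetch_code() -> str:
    """Hypothetical stand-in for a runtime request to the ChatGPT API.

    In the scenario the researchers describe, this text would arrive over
    the network at runtime, different on every request. Here it is a
    hard-coded benign snippet.
    """
    return "result = sum(range(10))"

namespace = {}
exec(fetch_code(), namespace)  # the fetched code exists only in memory
print(namespace["result"])     # 45
```

Because the payload never touches disk and changes with each fetch, neither file scanning nor a fixed in-memory signature reliably catches it.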
Because it ultimately executes Python code, its high degree of modularity and adaptability allows it to bypass security technologies that rely on signature-based detection, including the Antimalware Scan Interface (AMSI).
The use of ChatGPT’s API in malware can pose serious difficulties for security experts. It is crucial to keep in mind that this is a very real problem, not merely a speculative one, and that staying informed and vigilant is essential in a field that is changing continuously.