In the report, HYAS noted that, with ChatGPT and other generative AI tools available today, cybercriminals can leverage “neural network code synthesis” to develop highly sophisticated malware that is unpredictable and can outsmart most present-day security solutions. AI-generated malware can also execute much faster and display “highly atypical” behaviors, and thus go undetected. To demonstrate this, HYAS developed a basic proof of concept (PoC) called BlackMamba. BlackMamba exploits “a large language model to synthesize polymorphic keylogger functionality on-the-fly, dynamically modifying benign code at runtime” without the need for an attack server, the Canada-based cybersecurity firm said. “BlackMamba represents a shift in malware design and deployment,” Jeff Sims, Principal Security Engineer at HYAS and author of the report, told VPNOverview. Such malware “could include a whole host of post-exploitation capabilities which are stored in the executable as benign text prompts (most likely as encrypted strings), waiting to be passed to a large language model like GPT-3 and synthesized into malicious code,” Sims explained.
BlackMamba: A New Breed of Malware
HYAS’ BlackMamba doesn’t depend on a human-controlled command-and-control (C2) server, which is common in most malware attacks that typically start with a phishing email. Instead, it uses “intelligent automation” and can exfiltrate data to threat actors over regular communication channels, like Microsoft Teams. BlackMamba surreptitiously reaches out to OpenAI at runtime for dynamically generated malicious code, which it executes “using Python’s exec() function, with the malicious polymorphic portion remaining totally in-memory,” the report said. Each time BlackMamba runs, it can “re-synthesize” its keylogger capability, making it truly polymorphic. BlackMamba leverages “AI code generative techniques that could synthesize new malware variants, changing code such that it can evade detection algorithms.” HYAS said BlackMamba can collect passwords, usernames, financial information, and other data. This is how, in theory, cybercriminals could use AI-generated stealth malware to steal data, which can be sold on the dark web or used for other criminal schemes. BlackMamba was tested against an unnamed, “industry leading” security solution several times and was not detected even once, HYAS said.
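To see why in-memory execution is hard for file-based scanners to catch, consider what Python's built-in `exec()` does: it compiles and runs source code that exists only as a string inside the running process, so nothing new is ever written to disk. The snippet below is a deliberately benign, minimal sketch of that mechanism only. The `generated_source` string is hardcoded here; in the scenario the report describes, an attacker would instead receive such a string at runtime, which is precisely what this example does not do.

```python
# Benign illustration of in-memory code execution via exec().
# The source below exists only as a string; it is never saved to a file.
generated_source = """
def greet(name):
    return f"Hello, {name}!"
"""

# Execute the string in an isolated namespace dictionary.
# The function it defines now lives only in this process's memory.
namespace = {}
exec(generated_source, namespace)

print(namespace["greet"]("world"))  # the dynamically created function is callable
```

Because the code is born and dies inside process memory, signature-based tools that scan files on disk have nothing to match against, which is the detection gap the report highlights.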
Defense Against Future AI-Based Threats
Sims noted that, for such novel attacks, creating awareness and proactively developing security controls is key. “The best defense right now for organizations is awareness, allowing their human hunt teams to be on the lookout for this type of activity, while automated controls are iterated-on to catch and detect this type of attack,” Sims said. Cybercriminals closely follow the latest advances and trends in technology. Right now, generative AI is making a splash. As such, we are seeing scams that leverage AI tools. Earlier this month, we reported on the rise of AI-driven voice cloning scams. In a January report, Check Point Research said generative AI also allows “less-skilled threat actors [to] effortlessly launch cyberattacks.” In most cases, cybercriminals depend on getting you to click on or download an infected file to gain access to your system. For this reason, we recommend you learn about social engineering schemes and how to avoid them. Our guide to social engineering contains useful information on how you can avoid these scams.