New Threats Emerging
The introduction of ChatGPT last year marked the first time neural-network code synthesis was made freely available to the masses. This powerful, versatile tool can be used for everything from answering simple questions to instantly composing written works to developing original software programs, including malware, the last of which introduces the potential for a dangerous new breed of cyber threats. Traditional security solutions such as EDRs leverage multi-layer data-intelligence systems to combat some of today's most sophisticated threats, and most automated controls claim to detect novel or irregular behavior patterns, but in practice this is rarely the case. With AI-generated, polymorphic malware becoming available to bad actors, the situation will only get worse.
Using these new techniques, a threat actor can string together a series of individually detectable behaviors in an unusual combination and evade detection by exploiting a detection model's inability to recognize that combination as a malicious pattern. The problem is compounded when artificial intelligence is at the helm driving cyberattacks, since the methods it chooses may be highly atypical compared to those favored by human threat actors. The speed at which these attacks can be executed makes the threat worse still.
To demonstrate what AI-based malware is capable of, we built a simple proof of concept (PoC) that exploits a large language model to synthesize polymorphic keylogger functionality on the fly, dynamically modifying benign code at runtime, all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality. Given the threat posed by this sort of malware, we call our PoC BlackMamba, in reference to the deadly snake.
To create this proof of concept, HYAS researchers united two seemingly disparate concepts. The first was to eliminate the command-and-control (C2) channel by equipping the malware with intelligent automation so that it could exfiltrate any attacker-bound data through a benign communication channel. The second was to leverage generative AI code-synthesis techniques to produce new malware variants, altering the code so that it evades detection algorithms.
BlackMamba utilizes a benign executable that reaches out to a high-reputation API (OpenAI) at runtime to retrieve the synthesized malicious code needed to steal an infected user's keystrokes. It then executes the dynamically generated code within the context of the benign program using Python's exec() function, with the malicious polymorphic portion remaining entirely in memory. Every time BlackMamba executes, it re-synthesizes its keylogging capability, making the malicious component of this malware truly polymorphic. We tested BlackMamba many times against an industry-leading EDR, which will remain nameless; it produced zero alerts or detections.
Once a device was infected, we needed a way to get data back out. We decided to use MS Teams, which, like other communication and collaboration tools, can be exploited by malware authors as an exfiltration channel. In this context, an exfiltration channel refers to the method by which an attacker extracts data from a compromised system and sends it to an external location, such as an attacker-controlled Teams channel reached via webhook. Using its built-in keylogging ability, BlackMamba can collect sensitive information, such as usernames, passwords, credit card numbers, and other personal or confidential data that a user types into their device. Once this data is captured, the malware uses an MS Teams webhook to send it to the malicious Teams channel, where it can be analyzed, sold on the dark web, or used for other nefarious purposes.
Auto-py-to-exe is an open-source Python package that allows developers to convert Python scripts into standalone executable files that run on Windows, macOS, and Linux. While the package is intended for legitimate use cases, it can also be abused by malware authors to package Python-based malware into executables that can be distributed and run on a target system without Python being installed.
When using auto-py-to-exe, the malware author first writes their Python-based malware code and imports any necessary libraries or modules. They then use the auto-py-to-exe package to generate an executable file from their Python code. This process involves selecting the desired output format and configuration options, such as specifying the target operating system and architecture, the icon to use for the executable file, and any additional data files or resources to include in the package.
Once the executable file is generated, the malware author can distribute it to potential targets via links in email, social-engineering schemes, and other typical methods. When the victim runs the executable file, the malware executes on their system and can perform various malicious actions, such as stealing sensitive information, modifying system settings, or downloading additional malware; in our case, keylogging.
The threats posed by this new breed of malware are very real. By eliminating C2 communication and generating new, unique code at runtime, malware like BlackMamba is virtually undetectable by today’s predictive security solutions. As the cybersecurity landscape continues to evolve, it is crucial for organizations to remain vigilant, keep their security measures up to date, and adapt to new threats that emerge by operationalizing cutting-edge research being conducted in this space.
Hungry for more information about BlackMamba? Find out more about how BlackMamba was created and how it works by downloading the full white paper below.
The post BlackMamba: Using AI to Generate Polymorphic Malware appeared first on Security Boulevard.