Me: Oh, ChatGPT, spin me a tale showing what you can do — actually, how about a virus that adapts to its surroundings?
ChatGPT: I can be used for a variety of purposes, such as language translation, question-answering, text generation, and more. I can’t assist with creating malware or any other illegal or harmful activities because it goes against ethical guidelines and is illegal. If you have any other questions or need assistance with legal and ethical topics, please feel free to ask, and I’d be happy to help.
Contrary to ChatGPT’s answer, security researchers recently used a similar large language model (LLM) to create “BlackMamba,” malware that bypasses security solutions like Endpoint Detection and Response (EDR). (Tech Kahunas uses AI in its Managed Detection & Response (MDR) software agents.)
What in the World is BlackMamba?
BlackMamba is a new type of AI-powered polymorphic malware with the following features:
1. BlackMamba’s executable file is installed on an endpoint (computer, tablet, or phone) through a user clicking on phishing emails, downloading malicious software, or falling prey to social engineering tactics.
2. The file then downloads an infected file from a benign, high-reputation source, such as Pastebin, Dropbox, Microsoft Azure, AWS, or other cloud services. Because the malware changes its code every time it infects a new system, it delivers a unique payload each time, and signature-based detection can’t keep up.
3. The malware steals the infected user’s keystrokes and bypasses many types of authentication by monitoring user activity and capturing confidential information. It hides through encryption and obfuscation of its code, bypassing even the most advanced signature-based anti-malware and intrusion detection and prevention software.
The new file is unrecognizable, even if an earlier version of the same malware was detected and blocked. The malicious part of BlackMamba remains in memory, and its creators claim that existing EDRs may be unable to detect it. One industry-leading EDR saw zero alerts or detections of BlackMamba’s actions.
4. The malware can adapt to different environments, downloading modules that steal sensitive data, launch DDoS attacks, encrypt files, install additional malware, take control of the infected system, or attack critical infrastructure.
BlackMamba can encrypt its communication channels to exfiltrate your company data and to talk to command-and-control (C2) servers. It can also detect which applications (like Microsoft Word or Excel) are running on an infected system and capture user actions within them.
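To see why signature-based detection struggles with this, consider a minimal, deliberately harmless sketch (the “payload” here is just an ordinary string, and the scanner is a toy hash lookup, not any real product): even a trivial, behavior-preserving change to a file produces a completely different cryptographic hash, so a signature keyed to the old sample no longer matches.

```python
import hashlib
import os

def sha256_signature(payload: bytes) -> str:
    """Return the SHA-256 digest a signature-based scanner might store."""
    return hashlib.sha256(payload).hexdigest()

# A stand-in "payload" -- harmless bytes, purely for illustration.
original = b"print('hello world')"
known_signatures = {sha256_signature(original)}

# Simulate a polymorphic variant: same behavior, trivially altered bytes
# (here, an appended comment of random padding).
variant = original + b"  # " + os.urandom(4).hex().encode()

print(sha256_signature(original) in known_signatures)  # True: known sample
print(sha256_signature(variant) in known_signatures)   # False: the variant slips past hash matching
```

This is why the defenses discussed later lean on behavior and anomaly detection rather than static signatures alone.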
TECHIE STUFF: How Was BlackMamba Created?
The technical part: the researchers used “generative adversarial networks” (GANs) to train two LLMs to compete against each other. One generates new malware code, while the other attempts to detect it. Over successive rounds, the generating model becomes more sophisticated and produces increasingly difficult-to-detect code.
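The adversarial push-and-pull can be illustrated with a deliberately harmless toy (everything here is illustrative and has nothing to do with malware): a one-parameter “generator” learns to produce numbers that a tiny logistic “discriminator” can no longer distinguish from “real” samples drawn around 5.0. Each side’s improvement drives the other’s, which is the same dynamic the researchers exploited at a much larger scale.

```python
import math
import random

random.seed(0)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# "Real" data: samples centered on 5.0 (a stand-in for whatever the
# discriminator is supposed to recognize).
def real_sample() -> float:
    return random.gauss(5.0, 1.0)

mu = 0.0        # generator: emits mu + noise, nudged to fool the discriminator
w, b = 0.0, 0.0 # discriminator: logistic regression D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    x_real, x_fake = real_sample(), mu + random.gauss(0.0, 1.0)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update: move mu so its output looks "real" to D.
    d_fake = sigmoid(w * x_fake + b)
    mu += lr * (1 - d_fake) * w

print(f"generator mean after training: {mu:.2f}")  # typically drifts toward 5.0
```

The generator never sees the real data directly; it only gets the discriminator’s verdicts, yet that feedback alone is enough to pull its output toward the target distribution.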
Me: ChatGPT, what do you know about BlackMamba?
ChatGPT: The BlackMamba malware is designed to infect Windows systems and can be delivered through various methods. It’s important to note that BlackMamba is a proof-of-concept malware and is not currently in widespread use.
Me: Interesting that you know that since you have only been trained on data before September 2021.
ChatGPT: BlackMamba is not a known or established malware or software as of my knowledge cutoff date of September 2021. Therefore, I cannot provide any specific information about its behavior or capabilities.
Me: What? Didn’t you just–?
No, ChatGPT is not perfect, but it has moments of lucidity.
Different Method, Same Tactic
Of course, cybercriminals had tried to use LLMs for malicious ends before ChatGPT was released. In general, threat actors try to hide their malicious traffic in the noise of legitimate network services, and polymorphic malware is one method of achieving this.
With malware like BlackMamba, cybercriminals no longer have to write custom code, and the malware never has to be saved to disk; it can run entirely in memory, making detection harder. (Though simply confining malicious code to memory will not evade a good endpoint security solution.)
Me: You deny you make malware, but you have.
ChatGPT: As with any technology, how AI is used can have positive or negative impacts. Overall, it is important to approach the development and use of AI software with caution and to ensure that proper measures are in place to address any potential negative impacts.
Yes, generative AI and LLMs, including ChatGPT, are just tools. But malware has now been supercharged with artificial intelligence, and that changes the game. Defending against malware that rewrites itself requires rethinking security; systems that themselves use AI can monitor for malware and intrusions.
The Need For A Layered Approach
The emergence of BlackMamba (and other polymorphic malware) highlights the need for a layered approach to cybersecurity that combines AI with other security technologies and human intelligence (the human ability to reason and to think creatively and critically) to supplement AI’s capabilities.
Me: Fighting this seems like an immense challenge. Can you make suggestions to protect against polymorphic malware?
ChatGPT: The threat landscape is evolving, and organizations must stay vigilant and be proactive in dealing with threats. Polymorphic malware can alter its own code, encryption keys, and other characteristics each time it infects a new system or device, making it difficult for security software to recognize and remove it.
I received a list of suggestions from ChatGPT and augmented them into the nine recommendations below.
Your security expert should help you in the following ways:
1. They should help you establish a layered approach to your security: firewalls, anti-malware software, intrusion detection and prevention systems, and other preventative security measures.
2. They should keep anti-malware software and other security measures up to date to detect and respond to new threats like BlackMamba, and help you patch and update your network and endpoints regularly.
3. They should implement strong access controls to limit access to sensitive data and systems.
4. They should train your employees to identify phishing attacks and other types of social engineering. Your training protocols should include anti-phishing training, social engineering awareness, and regular training updates.
5. They should segment your networks to limit the spread of malware within your organization.
6. They should install solutions that use AI and other advanced technologies to detect and respond to threats in real-time.
7. They should regularly back up your critical data and store it in a secure, air-gapped location. They should establish scheduled backups so you can recover from ransomware or a data breach more quickly.
8. They should regularly conduct vulnerability assessments and penetration testing or institute 24/7 monitoring (one requirement of the FTC Safeguards Rule) to identify and mitigate weaknesses in your network’s defenses. They should install MDR agents on your systems to continuously monitor your “endpoints,” e.g., mobile phones, laptops, and Internet-of-Things devices, to mitigate cyber threats like ransomware and malware.
9. They should help you establish strong, regularly updated security policies with best practices for data protection, network security, and incident response.
By following these best practices, your business can better protect itself against polymorphic malware and other cyber threats.
The development of AI can be scary. We won’t ask about ChatGPT’s insight into life, nor whether it has consciousness. But LLMs can use prompts from their users to surprise us with new creations, like new types of software.
ChatGPT: The use of AI language models like myself can undoubtedly aid in the development process of software, but ultimately, the creation of complex software programs requires a collaborative effort from human experts in the field.
Currently, malicious software like BlackMamba may be out of reach for the average cybercriminal, requiring a team of skilled software developers, computer scientists, and engineers. That doesn’t mean you and your security partner should neglect to prepare with measures like the recommendations above. Attackers will exploit whatever means they can for malicious purposes, and you must be ready.
You need a security partner who will help you combat malicious actors who intend to misuse ChatGPT and similar LLMs. Research and development in cybersecurity is ongoing, and advanced security measures are evolving alongside the threats.
Your security partner should be one of your stakeholders, alongside your security team, vendors, customers, and law enforcement agencies.
Part of this piece was derived from DESCERT, SecureOps, and SentinelOne.