At the beginning of 2024, the National Institute of Standards and Technology (NIST) issued a warning about cybercriminals and other nefarious actors using Artificial Intelligence (AI) and Adversarial Machine Learning technologies to enhance their malicious operations. There are, of course, state-sponsored threat actors and actor groups who are also focusing on the malicious use of AI in their operations. These include Russia’s Fancy Bear a.k.a. Forrest Blizzard, North Korea’s Kimsuky a.k.a.Emerald Sleet, and Crimson Sandstorm.
DarkOwl, leading experts of the underground digital realm, witnessed threat actors of both groups (state-sponsored and government agnostic, independent actors) actively trading tips on various dark web platforms about the best AI tools to use, as well as effective tactics, techniques, and procedures (TTPs). Throughout the first part of 2024, threats to security stemming from AI have been frequently discussed, and tools were sold on the dark web and dark web adjacent chat platforms, such as Telegram.
This blog aims to take a high-level look at the types of conversations threat actors are having, as well as the tools they are selling, to carry out their mission(s) using malicious techniques and AI tools, so that we can best share the typical uses of AI in malicious operations.
AI trains on massive amounts of data, so a logical threat to begin with is data poisoning. This involves manipulating the information used to train systems, because what is put in shapes the output. Malicious actors intentionally inputting erroneous, biased, or hateful data spreads misinformation, degrades overall performance, and results in biases that can divide and harm society. Online groups have been observed attempting to poison information to produce pro-extremist, pro-violence, pro-war, racist and misogynistic related themes and output at large scale, using AI tools:
Extremist views regarding AI, and what these extremists view as countering “wokeness” are discussed across 4chan, Discord, and the aforementioned Telegram platform, as well as on underground forums.
A separate threat concerns prompt injection, which helps shape the output of AI systems by feeding a system meticulously crafted prompts or cues. When prompts are malicious in nature, this results in malicious output. Incidents involving this could include prompting a system to reveal sensitive, personal data:
Or prompting a system to output racist/sexist hate speech based on biases and maladaptive thinking:
Nightshade, mentioned in the figure below, is a specific tool discussed and sold on the dark web as well as its adjacent platforms. Nightshade arose as a vehicle to help content creators prevent their content from being automatically included into generative AI. Nightshade turns images into “poisoned” samples. If AI using images to train does so without the artists’ consent, or without respect to copyright, these “poisoned” images introduce unexpected and abnormal behavior, changing the image output and introducing errors, degrading the accuracy of the output. Nightshade is considered an offensive tool:
WormGPT emerged as one of the most public, malicious adaptations of an AI model. Unlike other AI tools, the author of WormGPT included no limitations to the tool, which means WormGPT users can use it for malware generation, among other criminal operations. Protective efforts toward another emerging threat, which is automated malware generation, also have a large presence on the dark web and its adjacent platforms. Since inception, certain language models have proved a limited proficiency in computer coding/programming. The more these initial efforts are corrected, trained, and improved, the better the models get at producing malware, and increasing the attack surface. As of now, the cost for many AI tools online is not super expensive, allowing for high sales volume and elevated use:
Protecting systems from malicious AI and enhancing overall security features is still a work in progress when it comes to AI and machine learning in general. The good news is that as quickly as the discussion and implementation of AI tools emerged, simultaneous conversations occurred surrounding the security and protection of these AI tools and systems. The traditional cybersecurity threat intelligence community, still grappling with protecting traditional cyber platforms and tracking bad actors, immediately set to work issuing warnings about the threats facing AI. However, the essential need for this was recognized, and conversations are happening at every level to properly protect AI and machine learning while taking advantage of its benefits.
Products
Services
Use Cases