Artificial intelligence has quickly become one of the most disruptive forces in cybersecurity. On the surface, AI promises efficiency, smarter defenses, and automation. But it is also being exploited by criminals in underground forums and marketplaces. The darknet has always been a hub for phishing kits, ransomware gangs, and stolen data markets. What has changed is the speed and polish of those attacks. AI has not created new crimes, but it has made the old ones sharper, more scalable, and harder to defend against.
To understand the risks, you need to look closely at how threat actors are adopting AI in three areas where the damage is already visible: phishing, ransomware, and stealer logs. Alongside that, it’s worth exploring how the darknet economy itself is shifting to a subscription-based model that feels eerily similar to legitimate tech marketplaces.
Phishing is one of the oldest tricks in the book. Traditionally, it relied on blasting out mass emails and hoping a few recipients clicked on malicious links. These campaigns were often riddled with errors: bad grammar, odd formatting, and suspicious sender addresses. They worked well enough to snare the unwary, but many were easy to spot.
AI has changed that. In 2023, tools like FraudGPT and WormGPT appeared for sale across darknet forums and Telegram channels. FraudGPT was promoted as a chatbot with “no limitations, no filters, no boundaries.” It promised to help criminals craft polished phishing emails, generate fake websites, and even produce malicious code. Sellers marketed it in the same way a SaaS startup would market legitimate tools, with clear feature lists and monthly or annual subscription options. Reports suggest prices started around $200 per month or $1,700 per year, and the tool quickly gained traction among low-skill actors.
WormGPT took a similar path. Built on GPT-J, an open-source large language model, it was pitched as a blackhat version of ChatGPT. Access was sold for about $110 per month. Its purpose was direct and simple: create convincing phishing emails at scale. No broken grammar, no obvious red flags, just messages that looked like they came from HR, finance, or a trusted business partner.
The sophistication of phishing is no longer limited to email. Voice cloning and deepfakes have introduced new angles. A call that sounds exactly like your CEO asking for an urgent wire transfer is no longer a far-fetched scenario. In fact, there have already been documented cases where voice cloning was used to defraud companies out of millions. With AI, creating those convincing imitations is faster, cheaper, and accessible to far more actors.
Phishing is no longer amateur hour. It is a professionalized service where attackers can outsource creativity to AI.
Ransomware groups are also folding AI into their playbooks. Their goal is still the same: encrypt critical systems, steal sensitive data, and demand payment. But AI is streamlining the process.
Some ransomware crews are using AI to refine malicious code and bypass defenses more effectively. Others are experimenting with automated infection chains where AI scripts help identify weak points in networks and tailor payloads to exploit them. In some cases, AI has even been proposed for ransom negotiations, where chatbots could pressure victims with manipulative tactics and personalized responses.
This isn’t happening in a vacuum. Ransomware gangs are structured like businesses. They often run affiliate programs, recruit developers, and maintain support channels for buyers. AI fits neatly into that structure. It reduces the technical barrier, speeds up development, and frees attackers to scale operations.
The real danger is not just that AI makes ransomware more efficient. It also makes entry into ransomware easier. Someone with little coding experience can join an affiliate program, buy access to AI tools, and launch a campaign without building malware from scratch. The result is more actors competing for victims, which increases the volume of attacks globally.
If phishing is the entry point and ransomware is the hammer, stealer logs are the raw material that fuels countless other crimes. A stealer log is a collection of data siphoned from an infected machine: usernames, passwords, browser cookies, autofill data, cryptocurrency wallets, system details. For years, these logs have been sold in bulk on darknet markets.
AI has made them far easier to exploit. Instead of combing through messy text files manually, criminals now use AI-driven tools to parse, filter, and prioritize data. They can search for keywords like “PayPal” or “VPN” and instantly extract the most valuable credentials. Dashboards sold with these logs make it simple for even unskilled actors to profit.
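The underlying technique is simple keyword triage, and defenders can apply the same idea in reverse: scanning exposed log dumps for their own domains and high-value terms to gauge exposure. Below is a minimal Python sketch of that kind of filtering; the directory layout, file format, and watchlist terms are illustrative assumptions, not any particular stealer's actual output.

```python
# Minimal sketch: keyword triage of credential-log text files.
# Paths, watchlist terms, and the flat *.txt layout are illustrative
# assumptions, not any specific stealer's actual output format.
import re
from pathlib import Path

WATCHLIST = ["paypal", "vpn", "example-corp.com"]  # terms to prioritize

def triage(log_dir: str) -> list[tuple[str, str]]:
    """Return (filename, line) pairs whose lines mention a watchlist term."""
    pattern = re.compile("|".join(map(re.escape, WATCHLIST)), re.IGNORECASE)
    hits = []
    for path in Path(log_dir).glob("*.txt"):
        # Logs are often messy or partially corrupted; ignore decode errors.
        for line in path.read_text(errors="ignore").splitlines():
            if pattern.search(line):
                hits.append((path.name, line.strip()))
    return hits

for fname, line in triage("./leaked_logs"):
    print(f"{fname}: {line}")
```

The point of the sketch is how little skill the filtering step requires: a regular expression over text files is enough to surface the highest-value credentials, which is exactly why dashboards built on the same idea sell so well alongside the logs themselves.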
Consider Rhadamanthys, a stealer that first appeared in late 2022. By mid-2024, version 0.7.0 introduced an unusual AI-powered capability: optical character recognition. It could scan images on infected devices and extract text, including cryptocurrency wallet seed phrases. This meant that even if users thought they were safe storing keys as screenshots, the malware could still retrieve them.
Rhadamanthys is sold openly on forums. Licenses go for about $250 per month or $550 for 90 days. Its operators actively update the malware, provide customer support via Telegram, and advertise new features. In 2024, it was deployed through phishing campaigns disguised as copyright infringement notices, targeting victims across Europe, Asia, and the Americas.
Beyond individual families, the stealer ecosystem is vast. Russian Market alone lists millions of stolen logs, and services like MoonCloud repackage them into searchable databases distributed via Telegram. These markets are increasingly structured and automated, looking more like data brokers than ad-hoc criminal sales.
One of the most striking trends is how the darknet has adopted the language and business model of the tech industry. Gone are the days of one-off toolkits passed quietly between hackers. Today, the underground thrives on subscriptions and services.
Fraud as a service. Phishing as a service. Ransomware as a service. Infostealers with monthly licensing models. AI has lowered the barrier to entry so far that the ecosystem resembles a SaaS marketplace more than a shadowy corner of the web. For a few hundred dollars a month, anyone can buy access to tools that rival those used by advanced threat groups.
This professionalization is why the threat landscape feels so much more crowded. More people can play the game. The cost of entry is low. And the tools are good enough to work.
If criminals are scaling with AI, defenders cannot rely on traditional defenses alone. Organizations need visibility into the spaces where these tools are sold and discussed. That is where DarkOwl provides value.
DarkOwl monitors darknet forums, encrypted channels, and marketplaces where AI-enabled tools and stolen data appear. It can identify when a new phishing kit is advertised, when stealer logs containing company credentials are posted, or when chatter about impersonation campaigns surfaces. More importantly, DarkOwl delivers context. A stolen password alone is one data point. Context explains whether it is tied to a broader campaign, how it was obtained, and whether similar data is being circulated elsewhere.
This intelligence is not meant to sit in a report. Organizations can act on it: build alerting workflows so security teams are notified when company credentials show up in stealer logs, update phishing playbooks with the latest lures seen in underground communities, and monitor for deepfake or impersonation campaigns that target executives and brands.
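As an illustration, the sketch below shows what a minimal alerting loop might look like: it polls a darknet-monitoring service for stealer-log hits on a company domain and raises an alert on any result. The endpoint, parameters, and response shape are hypothetical stand-ins, not DarkOwl's actual API; in practice the alert would feed a SIEM, Slack channel, or ticketing system rather than print to the console.

```python
# Minimal sketch of an alerting workflow. The endpoint URL, query
# parameters, and response shape are hypothetical stand-ins for whatever
# darknet-monitoring API an organization uses; this is not DarkOwl's API.
import time
import requests

API_URL = "https://api.example-monitoring.com/v1/search"  # hypothetical
API_KEY = "YOUR_API_KEY"
COMPANY_DOMAIN = "example-corp.com"

def check_for_exposed_credentials() -> list[dict]:
    """Query the monitoring service for new stealer-log hits on our domain."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"q": COMPANY_DOMAIN, "source": "stealer_logs"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def alert(hits: list[dict]) -> None:
    """Stand-in for paging the security team (SIEM, Slack, email, etc.)."""
    for hit in hits:
        print(f"ALERT: credential exposure detected: {hit}")

while True:
    found = check_for_exposed_credentials()
    if found:
        alert(found)
    time.sleep(3600)  # poll hourly
```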
DarkOwl does not just collect data; it helps organizations use it. That difference is what turns visibility into defense.
AI has not changed the fundamentals of cybercrime. Criminals are still phishing, encrypting, and stealing. What has changed is the scale and accessibility. FraudGPT makes phishing believable. WormGPT mass-produces scams. Rhadamanthys uses AI to scrape sensitive data from images. Marketplaces sell logs with dashboards and filters that look like professional analytics tools. The darknet is evolving, and AI is accelerating the pace.
The world cannot afford to ignore that shift. Defenders need to see what is happening in the underground as it unfolds. DarkOwl delivers that window, giving organizations the ability to anticipate threats, connect the dots, and respond before AI-driven attacks land.