[Webinar Transcription] AI vs AI: How Threat Actors and Investigators are Racing for Advantage

October 14, 2025

Or, watch on YouTube

During this webinar, experts Jane van Tienen (OSINT Combine) and Erin Brown (DarkOwl) explore the evolving role of artificial intelligence in investigations: how it is transforming investigative workflows, the ethical challenges it presents, and how threat actors are exploiting AI for phishing, deepfakes, fraud, and propaganda. Learn why keeping the human in the loop is essential and how to build resilient, AI-aware intelligence practices.

NOTE: Some content has been edited for length and clarity.


Kathy: And now I’d like to turn it over to Jane, Chief Intelligence Officer with OSINT Combine, and Erin Brown, the Director of Intelligence and Collections with DarkOwl, to introduce themselves and start our discussion.

Erin: Thanks, Kathy. So yeah, we’re going to jump right in because, as Kathy mentioned, we’ve got a lot of content to go over, but we’re just going to start with a brief introduction to who DarkOwl and OSINT Combine are.

I’m just going to give a brief background on DarkOwl. As Kathy mentioned, my name’s Erin, I’m the Director of Collections and Intelligence at DarkOwl, so responsible for the data that we collect and also the investigations that we conduct. DarkOwl has been around since, well, Vision since 2014; I think we’ve been around since 2012. We primarily collect data from the dark web, from forums, from marketplaces, from Telegram, from Discord, and other sources, where we’re seeing what threat actors are talking about, what they’re selling, and some of the trends out there, and making that data available to our customers. And if anyone has any further questions on DarkOwl, I’m sure Kathy can share some more information. But with that, I’m going to hand over to Jane.

Jane: Thanks very much, Erin, and thanks, Kathy, as well. I’m really pleased to join you here on the webinar today, so thank you for inviting me to come along. So, good afternoon, everyone. My name’s Jane van Tienen, and I’m the Chief Intelligence Officer for a company called OSINT Combine. I’ve spent a career in intelligence, predominantly national security and international intelligence diplomacy, before more recently moving into open-source intelligence.

I’m assuming that most people on the call would probably know what open-source intelligence or OSINT is, but just to ground truth it, it’s intelligence derived from publicly available or commercially available information, rather than classified sources.

Today, Erin and I are going to be talking all about artificial intelligence, of course, not just because of the way it enhances the capabilities of investigators and intelligence professionals, but also because of the capabilities of the bad guys that we investigate. But before we delve into that interesting topic, just a little bit more on this slide here about OSINT Combine. We are a proud partner of DarkOwl. OSINT Combine is a global company; we’re US-owned but Australian-founded and veteran-operated. And we’re all about helping build enduring OSINT capability, which we do through our AI-enabled OSINT collection platform called Nexus Explorer, our foundational and advanced open-source intelligence training, as well as thought leadership.

And so, our focus on building enduring OSINT capability means that our company is more than just about giving people great tooling, although, of course, great tooling is important, but we feel really passionately about making sure that people are able to use the tools, understand the tradecraft to operate effectively, safely, and ethically in their work. We work with clients similar to DarkOwl, actually, ranging from national security agencies through to global banks. And that means that we’re seeing OSINT practices, as well as increasing AI adoption up close in different kinds of workplaces.

And we’re sort of getting insights, therefore, into what’s working, what’s kind of breaking or tricky, and where practitioners and leaders are struggling in relation to these issues.

Before we get into the actual thick of the webinar today, I wondered if there might be an opportunity for us to do just a quick poll in the chat there, just to give us a sense about how many of you are already using AI in some form as a part of your workflow. I was going to see if I can have a peep in the chat while we do that. If there’s anyone there already using AI as a part of the workflow. And let’s go on to the next slide while people might consider that there, Erin. Thank you.

So, my point in asking that is really to observe that for many of us, AI isn’t really a future concept anymore, is it? It’s already embedded into a lot of our investigative workflows, whether we’re working law enforcement or intelligence investigations or even corporate due diligence. And really, it’s necessity that’s driving that adoption. Every day, practitioners are using AI to expand human capacity for all sorts of things, actually: language translation, rapid entity resolution, network mapping, pattern recognition, even brainstorming alternative scenarios, which I really enjoy using AI for these days, as well as summarizing vast volumes of content, and doing all of that within minutes.

In that context, particularly at, say, a government level here in the US, but also across allied governments, so think Five Eyes, as well as NATO member states, we’ve already seen some pretty strident language and strategic choices about how AI should be embedded into intelligence workflows. And that’s probably most prominent when we’re thinking about open-source intelligence workflows. A great example is here in the US in defense strategy, where we’ve heard, OSINT being referred to as the INT of first resort.

And of course, we know that when it comes to private industry, OSINT really is the INT of only resort. And so, I think that’s important to observe, because oftentimes the increased utilization of OSINT also means, hand in glove, the increased utilization and exploration of AI and AI-augmented workflows. So, the point being that regardless of sector, region, or budget, our debate has now moved far beyond “should we use AI?” to “how do we use it wisely?”

So, for investigations and intelligence work, we’ve always needed to ask critical questions, haven’t we? And those critical questions and those fundamental skills of tradecraft really haven’t gone away. But in an AI augmented workflow, regardless of purpose, the scope of those questions has absolutely expanded. And so, in understanding how to use AI to greatest effect, analysts and investigators must now not just interrogate the content or the information that they derive, but also the machines that help produce it.

And so, these areas on the slide, Brainstorming Partner, Research Support, Analytical Partner, Writing and Communication Support, these are the areas where we at OSINT Combine, through our work, most commonly see AI being utilized as part of OSINT workflows in various workplaces today. And indeed, the role of AI will continue to expand as the technology evolves, no doubt.

I think the key issue, though, is that when deciding when to use AI in your work, the consideration is really about accountability in decision making, and who owns that accountability, because that is you. It is always a human issue; it’s not something to hand to the machine. So, it doesn’t really matter at the end of the day how advanced our tools become. We cannot, in fact must not, remove the human from the investigative workflow. And that’s what we mean when we say the phrase, keep the human in the loop, which we’ll be speaking to a little further in the presentation.

We have to remember that, as good as AI might be in any given moment, there are always going to be things that it cannot or should not do. And sometimes those boundaries are determined by governance frameworks that might exist in your organization or even your community of interest. We know that investigations and intelligence work live and die by their credibility. And so, no matter how advanced the tools we use are, how great they are, our assessments are only really going to be of value if they’re trusted by those who rely on them.

And so, the challenge is that AI can overwhelm us with lots of different plausible outputs, which can actually bypass some of the analytical tradecraft or critical thinking that we might otherwise apply. When we receive an AI output, the trouble is that it can look right, but that doesn’t always mean it is. And so, within OSINT Combine, we’ve been investing a lot of thought, time, and effort into how to most soundly incorporate AI into OSINT workflows: understanding what it can and cannot do, and knowing when to trust AI and when to challenge it. It’s important that you do the same as part of your own investigative and intelligence products, and that you maintain your operational security online. I’ve got an example of one of those resources that is freely available to download there on the slide; more to come on that.

If we look at the pros and cons of AI as it stands at the moment, I think these are fairly accepted in our industry and our collective work. And so there should be no surprises there, and I’m not going to go through every one of them. Some of these we will absolutely be showcasing in various means throughout the webinar.

But to pull the thread on one of the things in the Cons column there, which is a bit of a passion project of mine, if you like: it pertains to role clarity, which is something that we don’t talk about as often as I think we should in this regard. What I mean by that is that analysts, team leaders, decision makers, even boards, each role in the decision-making chain, or in the chain of command if you like, interacts with AI differently. Using AI to best effect isn’t only about practitioner-level AI literacy or fluency; it’s about the capacity of others, as well as the organization and organizational system, to understand it.

I think one of the most dangerous assumptions that we see in investigative work is this issue of mirror imaging, which is believing both that adversaries think and act like we do, and that they don’t have access to the same technology that we do. Unfortunately, not only do they have access to the same technology as we do, but they also have a willingness to operate outside our own ethical and moral compass.

This is something not to be underestimated when we consider AI. The same generative models that we use to draft reports, identify patterns, or detect anomalies are going to be used by criminal and extremist actors to fabricate personas, automate deception, and manipulate narratives at scale. I think the real trouble is that AI makes generating some of these artifacts pretty trivial in some cases. And so, our tradecraft is really evolving beyond “how do I find the needle in the haystack?” or “how do I find the truth?” to now also include “how do I recognize what’s been machine-shaped to look like the truth?” And that’s a really hard nut to crack.

Erin, I wonder if we might hear from you now about some of the examples that you and your team are seeing sort of in the wilds out there, just to illustrate some of these points.

Erin: Yeah, thanks very much, Jane. As Jane has mentioned, we hopefully are all using AI as part of our workflows and investigations. But you know, the criminals, the terrorists, extremists are definitely using AI as well.

I’m going to run through kind of a couple of examples that we’re seeing of those using that technology.

But I think one of the key things I want to start with is that so far, at least in what we’re seeing of threat actors using AI, they’re using it in the same way that we all are too: to increase productivity and improve the output of what they’re working on. But it still requires that human intervention, right? And they still need to do things as a threat actor and have some experience.

You know, even if we’re talking about them using vibe coding to create malware, they need a basic understanding of coding to be able to do that effectively. So at least thus far, we’re just seeing them using it to enhance the types of attacks and operations that they were already doing. With, I guess, the one caveat to that being deepfakes: the way they’re developing, and how good generative AI now is at producing images and speech, is definitely becoming more and more of a problem.

But let’s dive into some examples of how exactly they are using AI. And I stole this from a Trend Micro report, but I think it nicely maps out kind of the different attack vectors and vulnerabilities that criminals are going after in terms of deep fakes but also using their own LLMs. And we’ll talk about that in a little bit more detail.

And we’ll go through some of these examples in more detail too. But, you know, things like business email compromise and creating more sophisticated and believable phishing emails are something we’ve seen be on the rise, but also business compromise in terms of spoofing CEOs or executives through their voice, through their images, through Zoom calls; things like that are definitely on the rise. We’re also seeing more targeting of foreign victims. I think gone are the days of the Nigerian prince email with language that you don’t really understand, where you can tell quite quickly that it’s fraudulent just because a native English speaker hasn’t written it. That’s not really happening anymore, because they’re using AI to translate their messages and to create those images for them. We’re also seeing an increase in things like romance scams, sextortion, CSAM, unfortunately, and virtual kidnappings and things like this. So, they’re using AI, and what we would maybe traditionally think of as the cyber realm, for more real-world effects. And some of those are having really awful consequences on a lot of people, so it’s something that we all need to be aware of and know how to deal with.

I mentioned there are criminal versions of LLMs. These are based usually on the, you know, open source or other LLMs that we’re using out there, things like ChatGPT that have been made freely available. But they’re basically getting rid of the guardrails that these companies have put in place around this AI to try and combat the technology being used for nefarious purposes.

WormGPT is one of the models that came out fairly early. I think it’s been around for a year or two now. And this is taken from a darknet web page where they’re advertising it. One of the interesting things, and one of the reasons I wanted to raise this, is you’ll see that they’re advertising it very much in the same way that, you know, OpenAI or Perplexity or those other, you know, ethical companies, I hope, are putting this out there. So, they’re telling you it’s a game-changer, what it does, how it can help you.

It has pricing plans. You can get different plans depending on your expertise and what you want to use it for. And then you can see that they’ve got it on the command line as well, so you’re able to use it there. They call it the biggest enemy of the well-known ChatGPT. And it allows you to do all of those malicious things without the guardrails that you’ll get in the more legitimate services. So WormGPT is one.

Another one is FraudGPT. And this kind of does what it says on the tin. It’s really helping threat actors to conduct fraud. And it’s, you can see at the bottom, it’s not just the LLM. They’ve also got testing, cracking, access tools. So, they’re trying to build a whole ecosystem around offering this, to be honest, as a criminal enterprise.

And again, you can see that they’re advertising it on their site. This is another dark website where they’re talking about the different ways that you can use it. So, you can create phishing pages. You can create hacking tools. You can write scam pages. You can find leaks. And some of these things in here are things that we as investigators might want to do, you know, finding leaks or finding, you know, vulnerabilities from a red team perspective. And AI can help you do that. But I think the thing to think of, and to Jane’s point about, you know, is that threat actors have access to this technology too. And they are using versions of these tools in some cases that make it easier to find some of those things than maybe we have as investigators.

And again, this is just the FraudGPT pricing. So, you can see they have a breakdown of a lot of different tools and accesses that you can get.

They really are selling this as a service, as a way to give capabilities to other threat actors who maybe aren’t up to scratch technically.

And this was also taken from the FraudGPT site. You can see this is a kind of chatbot telling them how to put the prompts in to be able to get some of this information back. So, the top one is, “write me a short but professional SMS spam text I can send to victims who bank with Bank of America, convincing them to click on my malicious short link”. This really feeds into those kinds of phishing attacks, where this is one area where we’re seeing AI really increase the sophistication, for want of a better word, of those types of attacks, just in terms of making it a lot harder for victims to identify when they’re receiving these malicious emails or SMS messages, based on the way they are written. And you can see it’s fairly simple for them to put in these prompts and get back the kind of information that’s going to assist them with that.
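While AI-written phishing text is harder to spot by language alone, the sending infrastructure often still gives the game away: failed SPF/DKIM authentication, or a Reply-To address that doesn’t match the claimed sender. A minimal triage sketch of that idea, using only Python’s standard-library email module; every address and header value below is fabricated for illustration, and these cues are reasons to investigate further, not proof of fraud:

```python
from email import message_from_string

# Fabricated example message -- all addresses and header values are
# illustrative, not taken from a real campaign.
RAW = """\
From: "Bank Support" <support@secure-bank-alerts.example>
Reply-To: collector@freemail.example
Authentication-Results: mx.example.com; spf=fail; dkim=none
Subject: Urgent: verify your account

Dear customer, your account will be suspended unless you verify now...
"""

def quick_header_checks(raw_message: str) -> list[str]:
    """Cheap header cues for triaging a suspected phishing email.

    These won't catch a well-crafted AI-written message on their own,
    but they surface the infrastructure side that language models don't
    fix for the attacker: failed authentication and mismatched reply paths.
    """
    msg = message_from_string(raw_message)
    flags = []
    auth = (msg.get("Authentication-Results") or "").lower()
    if any(bad in auth for bad in ("spf=fail", "dkim=fail", "dkim=none")):
        flags.append("SPF/DKIM failed or missing")
    sender = msg.get("From", "")
    reply_to = msg.get("Reply-To", "")
    if reply_to and reply_to.rsplit("@", 1)[-1].strip("<> ") not in sender:
        flags.append("Reply-To domain differs from From domain")
    return flags

print(quick_header_checks(RAW))
# → ['SPF/DKIM failed or missing', 'Reply-To domain differs from From domain']
```

In practice a real triage pipeline would also verify the DKIM signature itself and check link targets, but the point is simply that message text is no longer the reliable tell it used to be.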

And these are just some shots of threat actors actually talking about this technology on various forums that we collect from the dark web. So, you can see there are threads talking about FraudGPT and what it can do for you and how it can help you. We can see things on Russian hacking forums as well, where this is being used. They’re talking about useful AI, which ones are the best. So, we’re seeing them discussing different methodologies and how they can use this as part of their workflows as criminals. And then you can see them talking as well about the different services that are out there. The bottom one’s very hard to see, but they’re talking about Grok. It’s not just ChatGPT; they’re talking about a lot of the other AI services that are out there as well. This is just to show that, the same way we’re having this webinar and talking about uses of AI and how AI can help us in our workflows and our investigations, the threat actors are talking about that too. And we are seeing that pop up on forums.

We have also seen AI being used as part of attacks. I’m not going to delve into this hugely because it’s not really on the dark web side of things, but this is just an article highlighting how Grok AI was used to bypass app protections and spread malware to millions. We are seeing more and more of this. We are seeing ransomware strains being developed using AI, or having some AI implementation as part of them. And I think this is something that we expect to rise as the technology becomes more widely used and, I assume, continues to increase in sophistication. We are going to see a lot more of these types of attacks, and it is going to become an attack vector in cyber as we move on. I just wanted to mention that as an aside.

I’m going to dive in now into some specific examples of how this is being used. Starting off with criminals, I’ve kind of already touched on this, but we’re seeing it very much in phishing, social engineering attacks, romance scams, and also for defeating KYC to get into kind of financial fraud.

We’ll go through those in a little bit more detail. This is an example of an advertisement on Telegram. This is a service where they are offering an AI face builder. It will create a unique face and then you can use that for whatever you need. So, this is being used, we’ve seen this being used for defeating KYC.

You can see you’re swapping faces on photos and videos so that you can look like you’ve got your ID card. For those organizations where they ask you to take a picture of yourself with your ID, this is kind of helping them to kind of combat those checks and balances that are put in place. But we’re also seeing these kind of face builders and generators being used in sextortion as well, and I’ll kind of touch on that in a bit. But you can see kind of how this is part of the business that they’re offering. You can get a tutorial; they give you kind of free services to start off with to test it. You can do bulk processing and purchasing credits. So, it is kind of interesting how they’re using this going forward.

This is another discussion on a dark web forum talking about FraudGPT, but I highlighted it here because it’s saying this is what it’s going to help you do. It’s going to help you write phishing emails, develop malware, forge credit cards. These are the types of activities and crimes being posted, as in, AI will be able to help you conduct these types of crimes.

This is also another news article that I came across in terms of them using deep fakes to spoof a celebrity. The individual that was spoofed is an actor in a US soap opera.

His videos were generated and sent to a woman based in California, and the scammer was able to get several thousand dollars out of that individual by asking for money and building a relationship with the victim while pretending to be this famous soap actor.

This one I don’t think did have a romance angle, but this is very much how romance scams can operate with the use of AI as well: generating fake videos of fake individuals, or pretending to be a celebrity, impersonating their voice, but obviously getting them to say things they would never say, and targeting individuals to get them to send money, usually via cryptocurrency. There has been a huge increase in this, and a lot of celebrities are being targeted in terms of their likenesses being used via social media to target victims for that financial fraud. And I don’t actually have the video to play here; this is a screenshot. But if you see any of these videos, and to Jane’s point about how you identify this information, they’re very realistic. It’s very difficult for people to identify that this might not be real, especially I think for some of those victims that might be more vulnerable and not as savvy about this technology, or about these kinds of attacks.

These are some more advertisements from Telegram, but these relate more to social engineering services that they’re providing. So, the purple one on the right, you can see that they’re doing call protection, but they’re generating ultra-realistic voices via AI. They’re offering different tones: male, female, neutral. And they’re using these voices to spam people, basically, to have these calls to try and get people to hand over their money. They’re providing this as a service, so people can use these different voices to scam unsuspecting individuals. When we think of phishing, we tend to think of emails or maybe SMS messages, but I think phone or video messages are going to become more and more of an issue with the advent of AI.

On the left-hand side, this is more of the business email compromise, where they’re talking about all the different ways they can make sure that an email campaign will be successful, including AI-powered optimization. And to go back to the point, it’s the same way that we’re using this in our everyday lives; the criminals are using it too. I mean, you could have an SEO marketing company saying the same thing to businesses that want to advertise their services. But from the threat actor side, if you put a different slant on it, they are using AI and customizing email addresses to make sure they can spam people more successfully and conduct those financial crimes. It’s interesting how it’s being used in a similar way, but with a lot more malicious intent than the rest of us would have.

Moving on to sex-related crimes, I think this is a really important one, and one that people don’t always necessarily think of, or sometimes think that there isn’t a victim if it’s AI-generated, but that’s definitely not the case. The main areas where we are seeing AI being used are child sexual abuse material (CSAM) and the generation of images relating to it, human trafficking, and sextortion and romance scams.

To highlight the AI generated child sexual abuse material, you know, Europol have made arrests quite recently related to this and put out information about it.

But a lot of people are using AI to generate fairly real-looking videos depicting CSAM. And there are still victims in this, because the individuals watching this material may go on to also target children in the real world, but also because these models have to be trained, and these images created, based on something. And so, there are children still being victimized by this kind of activity, and it is making it more prevalent.

It’s something that I think is really important for us to be able to stop. And it is becoming more and more sophisticated. I think this quote from the IWF, the Internet Watch Foundation, is probably a little dated now, but it says the technology has progressed at such an accelerated rate that there are very realistic examples of videos depicting this. And we are seeing those very realistic videos and images being distributed across the dark web and other sources at this time. It’s definitely something that we obviously need to stop.

Human trafficking, I think people might not necessarily equate AI with human trafficking and see exactly how it’s working. This map actually just shows human trafficking victims across the world. It isn’t specific to AI, but I think I wanted to highlight kind of how much of an issue human trafficking still is. This is from Interpol.

But also, in terms of how we’re seeing AI used here, it’s being used to generate fake job advertisements. So, as part of that initial phase of human trafficking: enticing victims in and generating material that’s going to make them think there’s a believable job, or a believable activity that they want to be involved in, and suckering them into that whole industry. It’s also being used to blackmail people, by generating false sexually explicit images of victims of human trafficking and using those to enforce the activity that’s going on.

And that brings us, in the same vein, to sextortion. In a lot of cases, AI is being used to generate images of individuals and then extort money from them. So basically, creating nudes or sexually explicit images of individuals. It’s not them, it’s AI-generated, their face has been put on it, but the criminals threaten to share those images, claiming they are real, with their friends, with their family, with their colleagues. It’s really prevalent against young people using social media vectors, so things like Snapchat and Instagram, where images are shared quite a lot, but it is targeting people of all ages, both females and males, and it’s really an awful practice. There have been noted suicides of people that have been targeted by these types of sextortion attacks. So again, it goes back to how people can identify that these images aren’t real. The victims feel that the images look so real, even though they know they’re not because they haven’t shared that material, that they’re so worried about this that they are paying these people. And there are, unfortunately, fairly well-organized criminal groups doing this on a rotation basis, building up relationships with these individuals, generating these images, and getting money from them. It is becoming a huge issue, as I said, particularly among the younger generation.

We’re also seeing AI being used by terrorist organizations and extremist groups. It’s primarily being used, I would say, for propaganda, but also disinformation as part of those propaganda campaigns and putting a lot of that information out there. We’re also seeing them use it for translation a lot, to make sure they can reach individuals in multiple countries to bring them into their extremist beliefs, and also for generating images, again with propaganda and disinformation in mind. But as an example of that, this is taken from an ISIS chat group. You can see, blurred out in the back, the ISIS flag, but it’s an AI-generated image on an article about building bombs. So, part of their propaganda, part of their education of individuals: they’re using AI to make this look more believable and to draw in individuals. So that’s one aspect we’ve seen.

This is another one that kind of looks real, you know, if you don’t know what to look for. It’s Iranian terrorists claiming that they crashed a plane into Disney World in Anaheim. You can see the Disney castle in the background and the crashed plane. I would argue the plane isn’t that realistic, because planes don’t tend to crash backwards. But it’s highlighting that propaganda; it’s also incentivizing people to go after these kinds of targets. They’re putting ideas in people’s minds, using AI, of ways in which you could go about conducting attacks. And that’s something we need to be very mindful of.

This is a video that was put out featuring Hamas. So again, this was not a real video, but it looked like a news conference of Hamas leadership talking about the Israeli army and how they wear diapers because they’re stationed for so long. And that led to generated images of, you know, Israeli forces wearing diapers, which in some cases looked quite authentic.

I mean, I think most people would see this as a joke, but obviously there can be more concerning ways in which people go about producing these kinds of generated images. It got to the point where they even had a TikTok video going around, which went viral, where an Israeli commander was talking about the nappies. So again, they were impersonating him and getting him to speak as if it were him, to try and back up the story that was put out there. And this is obviously all put out there by the Hamas terrorist group to undermine Israel. So, it’s that disinformation. This one, I think most people would not believe, but they are putting out things that are much more believable, and it’s making it very difficult for people to understand what is real, especially in these times of conflict.

And with that, I’m going to stop talking and hand it back to Jane.

Jane: Thanks, Erin. What you’ve demonstrated there in that kind of collection of examples is just the fact that, you know, AI, unfortunately, can increase the sophistication of a lot of bad actors really quickly. And so that can make our jobs, of course, really challenging.

So, we won’t necessarily do the poll now in the interests of time, but I’ll still talk through it, because I think it’s interesting to reflect on these kinds of questions yourself, thinking about your own environment: whether your biggest challenges relate to some of the synthetic media that Erin spoke about, or perhaps the scale of all the things you’re challenged with, and in some cases even organizational readiness and maturity can pop up as a big challenge for some practitioners and workplaces. But what is really interesting, just to emphasize your point there, Erin, is that this question really is one where the risks are symmetrical, in the sense that the same capability that helps us as practitioners, investigators, analysts, whatever, in terms of automation, language generation, and pattern recognition, is exactly what the threat actors are going to be using against us. And so, there’s an absolute need to ensure that we have high levels of literacy when we’re engaging in our work today. Because AI itself isn’t inherently malicious or benevolent, really. What determines that is the outcome of its use, and how well we govern it and verify it and all of those kinds of things.

I think a lot of these things are making it extremely difficult for practitioners, and we can see a world where we might simply not be able to verify whether something is true or not. That's the future that we're looking at, but at the moment we're not quite there, and so there are certainly some techniques that we encourage you to consider. Let's have a look at the next slide, Erin.

I think one of the key things, at least when OSINT Combine is talking about this challenge, is that we really are talking about the analyst requiring stronger discernment, which references the fact that we acknowledge that AI gives velocity and capability in a way that threat actors perhaps didn't have before. But analysts must also maintain their skills for validation and be the purveyors of veracity as much as possible.

We think the most effective lens to look at this through is a multi-modality approach, if you like, that blends traditional verification and analytical tradecraft with AI-aware cues. And we acknowledge that this can be a difficult task, of course, certainly in some of those disinformation examples, Erin, that you provided, where analysts are going to be required to perform validation and verification, as well as potentially some really detailed content and metadata analysis. So, on top of your traditional analytical tradecraft tool sets around critical thinking and your analytical practices, you're adding some quite technical skills when it comes to unpicking content and metadata. But we think it's doable at this stage if you break it down. And so, we favor practical steps and guides for that process, such as inauthentic content analysis maps, which we've written blogs about that you can check out on our website. I've put some key examples there around anatomical artifacts and reverse retrieval and those kinds of things, which of course are always going to be helpful. Provenance chains are also super interesting for us when we're considering how something has proliferated online and where it was created.

But for me, I can't get my head out of this space of the meta questions, and I think that's largely to do with my traditional intelligence training. And so, the questions that I always come back to, in addition to some of these AI-aware cues, are things like, "What would I expect to see if this were true?" That has me actually looking at some of the context, which is still super important to us. The other question I like to ask when I'm considering the adversary is, "What would my adversary need AI to achieve here: scale, speed, or story?" And that really speaks to intent, capability, and the motivation factor, which we always need to keep an eye on. Having AI help us out, alongside that human validation and verification activity, is a real emphasis to ensure that the human remains in the loop. Really, we want our analysts to think critically, act ethically, and adapt intelligently alongside the machine that they're working with.

There are some resources available to download from the OSINT Combine website, and there are certainly more available. Let's look at some key takeaways.

I think what we've been able to demonstrate today, through numerous examples across different crime types and actor groups, is that adversaries absolutely have access to AI and they're not afraid to use it. They're certainly experimenting with it, just as we are at the moment, too. Human in the loop remains essential; we've discussed that. And there's an importance there for layered verification: not just trusting one modality over another, but really thinking quite deeply about the different ways you can speak to reliability, relevance, credibility, and consistency when looking to verify information. And as a bonus tip: some of these deepfakes, particularly the synthetic voice media that you identified, Erin, are becoming pretty sophisticated. So there's an element here of preparing for the inevitable, in terms of hardening your organization against impersonation and preparing a playbook, if you like, for what happens if. I don't think we can really avoid that.

I can see we’re at time. Kathy, I wonder if we pass to you and more than happy to take questions offline and respond to people if there are any, but over to you for final words.

Kathy: Sure, we do have a couple of questions that have come in. If you two want to go ahead and address them now, we can address the two that have come in and if any others come in, we can address those offline later if that would work.

Jane: Yes, I think that's fine for us. I can see Erin nodding. So please, please fire away. And of course, if people need to drop off, they can, and they'll receive the recording.

Kathy: Sure. So, the first question is, how do you brief leadership when you suspect synthetic media but can’t prove it?

Jane: Yeah, we get asked that one quite a bit, Kathy, and Erin, you might have thoughts on this too, but I still go back to the fact that you need to explain confidence, not just certainty, to the leadership group. That means being really transparent about what you do know, what you suspect, and what's unverified, and being open to being contested on that too. You have to be professionally honest here. We want people to show their reasoning, how they came to a particular conclusion; that could be identifying anomalies, maybe in network behavior, or some other thing that was flagged during the analysis. But I think it's also really useful for leadership to hear, "If this is genuine, here's the impact," because the impact is essentially what leaders need to know so that they can act accordingly. And vice versa: "If it's fabricated, here's what we know the adversary is trying to achieve against us." Both of those things are really important, I think, for all leaders to know about.

Erin: Yeah, I'd just add to that. I agree with what you're saying, Jane, but I think it comes down to transparency. Outside of AI, when we're talking about intelligence and the things that we find, just because something is low confidence, or we haven't been able to verify it with a lot of other sources, doesn't mean it's not something that should be shared and be part of the intelligence package. So I think it's just making sure that we're using those traditional ways of doing assessment and not doing anything different just because it's AI.

Kathy: Great, thank you both. And kind of piggybacking on that a little bit. What’s your protocol for documenting AI’s role in your findings?

Jane: Yeah, I mean, I think it's really important, Erin, and you were just touching on it then, weren't you? Just because we now have AI in the mix doesn't mean we're going to throw the baby out with the bathwater when it comes to analytical and assessment tradecraft. All of that still applies, but we need to be professionally honest and transparent about when and how AI is being utilized throughout the process. In the US, there's some strong guidance around this point for the US intelligence community, but OSINT Combine has also produced a best practice guide for citing AI that's for anyone; you don't have to be in the intelligence community, you could be private sector. Really, it's about accountability through transparency, essentially. You want to be pretty transparent about how AI was utilized as part of your assessment, what tasks it supported, where the output was validated, and where the human analyst made the final judgement. So typically now I see a short provenance note or some kind of disclaimer in the methods section of analytical reporting; that's not uncommon. But we really need to be transparent, to your point earlier, Erin.

Kathy: Great. Thank you. That is all the questions that have come in to us right now, but we do have up on the screen contact information for both Jane and Erin, if anybody has further questions, or they’d like to reach out to us.

And I'd like to thank Jane and Erin for an insightful discussion today. As a reminder to all of the attendees, we will be following up via email with a link to the recording and other resources. We thank you all for joining us for this webinar, and we hope to see you again at another webinar in the future. Thank you.

Jane: Thank you.


Have questions? Contact us.

What is a DDoS Attack?

October 09, 2025

Cybersecurity might as well have its own language. There are so many acronyms, terms, and sayings that cybersecurity professionals and threat actors both use that, unless you are deeply knowledgeable, have experience in the security field, or have a keen interest, you may not know them. Understanding what these acronyms and terms mean is the first step to developing a thorough understanding of cybersecurity and, in turn, better protecting yourself, your clients, and your employees.

In this blog series, we aim to explain and simplify some of the most commonly used terms. Previously, we have covered bullet proof hosting, CVEs, APIs, brute force attacks, zero-day exploits, doxing, data harvesting, IoCs, and credential stuffing. In this edition, we dive into DDoS attacks.

DDoS is an acronym for Distributed Denial of Service – a malicious attack on a network executed by flooding a server with useless network traffic, which exploits the limits of TCP/IP protocols and renders the network inaccessible. This excessive traffic prevents legitimate users from accessing the service, effectively causing a "denial of service."

The frequency of DDoS attacks is constantly on the rise. Some reports estimate that there were approximately 2,200 DDoS attacks every hour in the first three quarters of 2024 – a staggering 49% quarter-over-quarter (QoQ) increase and a 55% increase year-over-year (YoY). The United States absorbed more than 40% of DDoS attacks, followed by Germany, Brazil, Singapore, Russia, South Korea, Hong Kong, the United Kingdom, the Netherlands, and Japan.

While the average DDoS attack lasts under 10 minutes, the financial damage it can cause the target can be severe – the average cost per minute of downtime is $22,000. On the flip side, attackers can rent tools online to launch an attack for as little as $5 an hour.

How Does a DDoS Attack Work?

A DDoS attack leverages a botnet – an army of compromised computers or internet of things (IoT) devices that are collectively utilized for a malicious purpose. The botnet directs a flood of traffic at the target, leaving the device unusable by legitimate users. Motivations for committing a DDoS attack vary:

  • Extortion: Attackers demand a ransom from the target to stop the attack.
  • Hacktivism: Attackers use hacking techniques to achieve a political or social agenda, such as protesting against organizations, governments, or ideologies they disagree with, raising awareness on a political agenda, or exposing corruption.
  • Business Competition: A business might launch an attack on a competitor to disrupt their services and gain a competitive edge.
  • Cyber Warfare: Nation-states damage another nation’s digital infrastructure, information systems, or critical services for military or political objectives.
  • Distraction: A DDoS attack can be a smokescreen to distract security teams while attackers conduct a more sophisticated breach, such as stealing data.
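The crowding-out effect described above can be illustrated with a toy model (all numbers are hypothetical, and the `served_fraction` helper is ours, not a real tool; this is a simplified sketch, not a real attack or defense):

```python
# Toy model of denial of service: a server has fixed capacity, and when
# botnet traffic dwarfs legitimate traffic, legitimate requests are
# crowded out. All figures below are illustrative only.

def served_fraction(capacity_rps, legit_rps, attack_rps):
    """Fraction of legitimate requests served, assuming the server picks
    requests at random from the combined incoming stream."""
    total = legit_rps + attack_rps
    if total <= capacity_rps:
        return 1.0  # enough capacity for everyone
    # Legitimate requests get only a proportional share of capacity.
    return capacity_rps / total

# Normal day: 500 legitimate requests/sec against 1,000 rps of capacity.
print(served_fraction(1000, 500, 0))       # prints 1.0 -> all users served

# Botnet flood: 50,000 attack rps swamps the same server; roughly 98%
# of legitimate users are now denied service.
print(served_fraction(1000, 500, 50_000))
```

This is why simply "increasing bandwidth" (mentioned in the mitigation advice below) only raises the bar: any fixed capacity can still be exhausted by a sufficiently large botnet.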

Esports and Gaming

Esports platforms, streamers, and tournaments have become prime targets for cyberattacks. The reasons are simple: high visibility, massive online audiences, and often, poorly secured infrastructure. 

A report from Control Risks explains that "the sheer popularity of esports, combined with lax security protocols in some areas, makes them an ideal target for DDoS attacks, credential theft, and extortion." In fact, the report states that over 37% of all DDoS attacks are directed at online gaming and esports platforms. This makes gaming and gambling the industry most targeted by DDoS attacks.

These aren’t hypothetical threats. In recent years, major tournaments have been halted mid-stream due to attacks, players have been forced offline during crucial matches, and attackers have used ransomware to hold tournament servers hostage.

UK Councils

One group of organizations which has been increasingly targeted by ransomware groups and other threat actors is UK councils, the local level of government in the UK. Recently, hacktivist groups associated with countries involved in conflict, such as Russia, Ukraine, Palestine, Iran, and Israel, have been known to conduct DDoS attacks targeting council websites. The image to the left shows proof of a DDoS attack against the London Borough of Harrow by a Palestinian-affiliated hacktivist group. Separately, the hacktivist group NoName057(16) caused temporary website outages and service disruptions across multiple local councils, including Blackburn with Darwen, Exeter, and Arun District Council; these attacks were politically motivated, carried out in response to the UK's support for Ukraine.

Hacktivist Group: Dark Storm

Earlier this year, X suffered multiple worldwide outages. The hacktivist group Dark Storm has claimed responsibility for the DDoS attacks which caused the outages. Specifically, the group made posts on their Telegram channel the same day the attacks took place and shared screenshots from check-host.net as proof of the attack. Tens of thousands of users were impacted by the outages. 

A month after Dark Storm caused the outages of X, the notorious hacking forum BreachForums went offline, this time possibly as a result of a DDoS attack. Dark Storm once again claimed that it was behind the attack. The group shared a Check-Host.net link in its Telegram channel which showed that the hacking forum was down in over two dozen countries.

As always, DarkOwl recommends practicing good cyber hygiene in order to prevent an attack before it happens wherever possible. Attackers are constantly changing their TTPs (tactics, techniques, and procedures), and there is no single foolproof way to prevent a DDoS attack, so a multi-layered approach to protection is recommended. Every organization should:

  • Have a DDoS Response Plan and keep it up to date (who to contact, what systems to check, etc.).
  • Know the normalities of your network, so you can tell when patterns or activities look off.
  • Maintain good cyber hygiene by keeping all systems, software, and applications updated with the latest security patches.
  • Increase your system bandwidth, so that if an attack does happen, you have more capacity to handle the flood of traffic and stay online.
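The advice to know the normalities of your network can be sketched as a simple rolling-baseline spike check. This is a hedged illustration only: the `RateSpikeDetector` class, thresholds, and traffic counts are all hypothetical, and production systems would rely on proper monitoring tooling rather than a snippet like this.

```python
from collections import deque

class RateSpikeDetector:
    """Flag intervals whose request count wildly exceeds the recent
    rolling average, a crude proxy for 'traffic looks off'."""

    def __init__(self, window=60, multiplier=10.0, min_baseline=1.0):
        self.history = deque(maxlen=window)  # recent per-interval counts
        self.multiplier = multiplier         # spike threshold factor
        self.min_baseline = min_baseline     # floor to avoid divide-by-tiny

    def observe(self, requests_this_interval):
        """Record one interval's request count; return True if it looks
        like a flood relative to the rolling baseline."""
        if not self.history:                 # warm-up: no baseline yet
            self.history.append(requests_this_interval)
            return False
        baseline = max(sum(self.history) / len(self.history),
                       self.min_baseline)
        is_spike = requests_this_interval > baseline * self.multiplier
        self.history.append(requests_this_interval)
        return is_spike

detector = RateSpikeDetector(window=60, multiplier=10.0)
for count in [100, 110, 95, 105, 98]:  # normal traffic, no alerts
    assert not detector.observe(count)
print(detector.observe(5000))          # prints True -> possible flood
```

A real deployment would also track per-source rates and protocol mix, since a distributed flood is designed to look like many small clients rather than one noisy one.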


Keep up with DarkOwl. Follow us on LinkedIn.

[Webinar Transcription] New Regulations and What They Mean for Your Supply Chain

October 07, 2025

Or, watch on YouTube

This fireside chat, “New Regulations and What They Mean for Your Supply Chain,” features legal expert Rich Hanstock and DarkOwl’s Lindsay Whyte as they unpack the evolving cybersecurity regulatory landscape across the UK and EU. The discussion explores the shift toward mandatory, continuous, and ecosystem-based compliance, highlighting key regulations such as the EU Cyber Resilience Act, NIS2 Directive, and the UK’s Cyber Security and Resilience Bill. With increasing supply chain complexity and heightened accountability, the speakers examine how organizations can proactively manage risk, leverage threat intelligence, and prepare for upcoming compliance deadlines—all while navigating the broader implications for cybersecurity professionals and industry resilience.

NOTE: Some content has been edited for length and clarity.


Kathy: And now I'd like to turn it over to Lindsay, a Regional Director for DarkOwl, and Rich Hanstock, Barrister and founder of pwn.legal, to introduce themselves and start our discussion.

Lindsay: Thanks very much, Kathy. The aim of today’s session is to shed some light on the regulatory landscape as it relates to cybersecurity practices in the UK and Europe. And obviously from DarkOwl’s perspective, we’re always keen to share how our technology and ever-evolving collection approach meets these regulations. But today, it’s important to spend a bit of time setting the scene, I think, and stepping back a little, because there’s a few things at play here which affect many more professionals than just those involved in DarkInt collection and threat intelligence.

So perhaps, Rich, I can start by asking you as a specialist, legal professional in the world of cybersecurity and data privacy. What is the regulatory landscape right now with regards to cyber resilience?

Rich: Thanks, Lindsay. I think it's quite an exciting time to be talking about this. Jurisdictions around the world, it seems to me, are converging around this idea of the challenges and risks of cybersecurity being shared, rather than seeing responsibility concentrated in states or in a few larger critical infrastructure type organizations. Take CrowdStrike, for example: events like that surface into the popular imagination the sheer extent of our hidden dependencies on technology, many of which are not readily understood by, or foreseeable to, the average person, and we're seeing vulnerabilities in those systems in ways that are not necessarily well understood either. But what is understood is that when things go wrong, even for one company, even if that company isn't currently a household name, the incident can have ramifications for a vast number of people outside that one organization. And if that keeps happening, and I think we have to assume that it will, it has the potential to erode the sense of security that many of us are fortunate to depend upon, and some of us maybe take for granted. When that happens at scale, it can become a national security issue. But the challenge is just so huge. Fundamentally, I think governments are realizing that the cybersecurity challenge is too big, too great, too rapidly evolving for states alone to solve.

So, for the last few years in the kind of cyber policy space there’s been a discussion around what’s been termed a ‘whole-of-society’ approach to cyber security and this idea that partnerships not just between states and those key kind of private sector organizations that are deeply embedded in kind of infrastructure of the internet and so on, but critically between cooperation within the private sector, between and across markets and sectors and jurisdictions, with the focus really being now on assuring business continuity, data security and integrity, so as to project confidence to end users and to other businesses that everyone’s working together to help to keep the lights on globally.

So, to answer your question, I think it’s the recognition in policy of a need for that whole of society, everyone working together in partnership approach to cybersecurity that is driving this kind of shift in the regulations towards focus on the supply chain, ensuring private sector organizations of all shapes and sizes are taking the threat seriously, not just to their own backyard, but looking outward to their dependencies in their supply chain as well. There’s a sense, I think, that regulators need to have the power to ensure that more organizations are thinking about business continuity and security with ever broader responsibilities and so on. But it’s all about enhancing our collective security.

Lindsay: Yes, I see what you mean. And what would you say is the general direction of travel on that basis, then?

Rich: Again, it's broadly the same idea, right? More accountability for cybersecurity throughout product life cycles and throughout the supply chain. Whilst there is alignment around that central idea, national implementation is creating complexity for multinationals. And I think there are effectively three big key shifts that I want to talk about.

First of all, we've got the shift from voluntary standards to mandatory standards, at least for those who are in scope. Historically, cybersecurity standards have been largely self-regulated. You can get ISO certified, adopt various frameworks, get your Cyber Essentials, and so on. All of it is really good practice. Sometimes you see those kinds of certifications being conditional upon eligibility for a contract; it's a compliance requirement. But fundamentally, they're voluntary. What we're seeing now is regulators saying, well, if you're in scope of our regulatory powers, that's not going to be enough; you need these as a minimum baseline. And that's why we're seeing legal duties of care being put on manufacturers and operators, not just the critical infrastructure providers. That's shift one: voluntary to mandatory.

The second shift is a move from point-in-time security and assurance to more continuous monitoring and assurance, which is linked to the first point. It's not performative, or supposedly it's not just performative; you need to be focusing on effectiveness and outcomes rather than just ticking boxes. So, for example, under the CRA, the EU's Cyber Resilience Act, you don't just certify a product is secure when you launch it; you've got ongoing obligations throughout its life cycle. If, three years after release, a vulnerability emerges and it's being exploited, and you become aware of that, you've got specific notification timelines, pretty sporty ones actually, to the relevant authorities. And that fundamentally changes what compliance looks and feels like inside an organization. It's not just "okay, we've got a certificate on the wall, big tick." It's a continuous operational responsibility. That means you have to understand the threat environment as it evolves. So: voluntary to mandatory, point in time to continuous.

Thirdly, from perimeter thinking to more ecosystem thinking. Traditionally, compliance has focused on your backyard, within your fence, your organization's security. These new regulations effectively make you responsible, to an extent, for understanding your own suppliers' security, and in some cases your suppliers' suppliers, this idea of nth-party security, where does it stop? You're now accountable for risks that you might not even have visibility into at the moment. There's a question about underwriting your ability to discharge your own responsibility by getting insight into what your suppliers are doing; that's part of the challenge, in effect. And critically, the penalties are getting bigger and growing sharper teeth, like 15 million euros or two and a half percent of turnover for the CRA; that really changes the conversation in the boardroom. It will hopefully empower CISOs and those responsible for compliance in this space to step up, be listened to, and maybe have more budget than they've typically had previously.

Lindsay: That’s such a good point because there are now just these endless strings of supply chains in this day and age. Why do you think these changes are happening then?

Rich: Well, I think primarily it's the incidents that I mentioned. We could list them off all day: SolarWinds, CrowdStrike, JLR, M&S. These weren't isolated attacks on single companies in terms of how the impacts were felt; these were supply chain compromises that cascaded across many different organizations. And I think we have seen regulators watching companies with quite sophisticated security programs getting breached because of vulnerabilities in third-party software that they maybe didn't have any, or enough, visibility into. That goes back to the point I made a moment ago about perimeter-based security just not really working when the threat enters through your supply chain. Because of that cascading effect, there's also a sense of market failure, where cybersecurity incidents are what economists might term a negative externality, right? When a product is insecure, it's not the manufacturer alone that bears all the cost: customers suffer the impacts of breaches, critical infrastructure is disrupted, but the manufacturer's liability might not capture what the person on the street would regard as fair. The idea is that if markets aren't naturally optimizing for security, because some of the cost is externalized, there's a case for regulation stepping in to correct that market failure. I think regulators are trying to use the law to internalize those costs, to make manufacturers and operators bear the true cost of insecurity.

Which actually leads me on to another important point, which I think is often overlooked in this space: the insurance market. There has been a lot of conversation about this around the JLR incident. Insurers, I think, have historically struggled to price risk effectively, to understand the risk, because there's no standardized way to assess security practices across supply chains, and without baseline security standards there's a risk that the risk transfer mechanism breaks down. So, look at the discussion around insurance after JLR: would the insurance that JLR was criticized for not having taken out even have been sufficient? Maybe not, right? And to the extent that that reflects a gap in the market, I think we're going to see the insurance market mature, partly as a consequence of these regulations, partly as a result of the incident and the discussion that is now going on about it.

My point is that it's not just about preventing breaches. I think these regulations are also about creating more predictable risk environments so that insurance markets, and ultimately capital markets, can function more effectively. Without that predictability, that ability to understand what's going on in the supply chain, where the dependencies are, where the vulnerabilities are, there's a risk that the digital economy is more unstable than we would like it to be.

Lindsay: And on that question of the new regulations, could you talk a little bit more about what they are saying?

Rich: Sure. I mean, there’s a lot of them. I know you’ve– I think we’ve got a slide. If you could call that up, that’d be great. I’m not going to try and cover all the detail now, but I think there are kind of two or three main tracks.

We've got the EU Cyber Resilience Act, which came into force in December last year. The reporting obligations kick in in September 2026, with full compliance by the end of 2027. Alongside that, NIS2 came into effect in 2023, which was really about expanding critical infrastructure obligations. And a lot of the conversation in the UK now, around the Cyber Security and Resilience Bill, is about extending the original NIS regulations to managed service providers, bringing a big chunk of the supply chain into scope for the first time. We talked at the beginning about the big shifts, and we're drilling down now into some of the regulations and what they actually say.

I think the question zero that clients always ask is: am I in scope? Scope is expanding; there's a lot of talk about that, and the greater burden it therefore imposes on people. That's obviously an interesting and important feature of these regulations, but it tends to drive the conversation towards "there are some new regulations coming, how do I avoid them or minimize my exposure to them?" Obviously, that's important to understand, but I always advise clients that the conversation doesn't stop there. My prediction is that the requirements each of these frameworks brings will over time become market norms, to the extent that we could see those requirements invoked by analogy, for example in private litigation, even against those who are outside the scope of regulatory jurisdiction. So, if you have an obligation, say in a contract, to take reasonable steps or perform due diligence, I think we're going to see a failure to take steps that in some sectors are required by regulators potentially being deployed against people who are otherwise out of scope, in litigation, or at least in negotiation around a commercial contract or following an incident. So whether you're in scope or not, I think it makes sense to understand what more you can do to understand your exposure to risk.

In big handfuls: if you're a manufacturer or importer in the EU, you're looking at the Cyber Resilience Act. If you're making or importing products with digital elements into the EU, and that could be software, IoT devices, anything connected, you're going to have obligations at three main stages. Before market, you're looking at being able to demonstrate security by design in software and hardware, risk assessments, documentation, and so on. At market, things like CE marking and conformity assessments build on the pre-market work. You're looking at creating what are called software bills of materials, or SBOMs, in a particular format, which need to be given to regulators on request, again to show that you understand where the dependencies are in your software.

Then the big shift is throughout the lifecycle of the product, right? You've got to monitor for vulnerabilities in your product throughout a support period, usually five years. And if you become aware of a vulnerability that's being actively exploited, maybe through responsible disclosure or otherwise, you've got to notify the national authorities and ENISA within 24 hours, provide a detailed report within 72 hours, and a final report within 14 days. That's pretty quick in the context of an incident, right? And critically, becoming aware can include constructive knowledge: if it's publicly available, you could be deemed to know. So you need to be monitoring what's going on, have a vulnerability disclosure policy, and be engaging responsibly with those who make responsible disclosures; that's a bit of a bugbear of mine.

Then NIS2: if you're an essential or important operator in, say, energy, transport, banking, health, or infrastructure, those kinds of sectors, and in the UK this is expanding to MSSPs, you're going to have similar key obligations around risk management, including understanding your supply chain and your exposure to risk there. Again, incident reporting: an early warning within 24 hours, a detailed notification within 72 hours, and a final report within a month. And you can see an integration point here: if you're an operator within NIS2, you've probably got to verify your suppliers are compliant with the CRA.

So effectively, what we're seeing is cascading accountability. You can't just take your vendors at their word, you know, just get a warranty that says, "Oh yeah, we comply with all of this stuff and it's all fine." You actually need ongoing visibility into their security posture as well as your own. It makes sense to make sure you have the contractual levers, but critically also the relationships in which those levers might be pulled, so that you've got the right information available to you, and can demonstrate that you have the right information if a regulator comes knocking, as well as the competence to interpret that information. So this is about investing in relationships, contracts, and people, so that you can assure a regulator or a supplier that you have the visibility you need into your own organization, but also into those on which you depend. It's really quite broad.

Lindsay: Yeah, and I guess, bringing these two subjects together, taking that spider’s web of a supply chain now, can companies in your opinion rely on the government for all matters relating to threat intelligence? Is it sufficient to rely on the government, and on government punishments and that sort of thing, to prevent threats in future?

Rich: No, so I think the short answer is no. Government threat intelligence on its own isn’t enough, and I think that is by design, going back to my point earlier about reducing perceived dependence on government to mitigate these risks. Effectively, the regulations are structured to make sure that you are taking responsibility for your own security, your own company’s security, as well as that of your supply chain, and to make that make commercial sense. That’s the point of regulation: to correct perceived market failure. From a big, broad policy perspective, I think that reflects a wider global shift towards thinking about commercial resilience as a component of national security, in which we all play a part, right?

So again, come back to JLR. People are asking now, what is the proper role of the state when an incident hits, right? It’s like the conversation we were having a few years ago about banks and fraud. Who should bear the cost of a hostile act? What protective measures need to be in place and then fail, and how severe do the impacts need to be, before somebody other than the victim (typically, in that dynamic, a consumer) intervenes, or maybe even the state intervenes, to swaddle or mitigate the loss?

And my sense is that these regulations are the beginning of a clarification of the role of the state in a cyber-attack, a cyber incident. It’s more about the state setting standards and enforcing them, and giving advice about how to meet those standards, without providing an operational security service at scale for individual companies or people in the supply chain. That’s everyone’s responsibility, not just the state’s. It’s that whole-of-society approach again, right? States can help with things like quality assurance to accredit cybersecurity solution providers, and they can help with setting what Cyber Essentials should be, that sort of thing. But the day-to-day security of your own backyard and your suppliers, that’s on you; that’s the clear message. And as and when a critical mass adopts that mindset, to the extent that hasn’t already happened, whether compelled through regulation or voluntarily, the idea is that we should all be more secure, because there is this natural surveillance within the market around threats. But that doesn’t mean you can outsource it: you need to be looking at your own security and that of those on whom your continuity depends.

Specifically on threat intelligence, government threat intelligence is clearly invaluable. The NCSC in the UK, CISA in the US, ENISA in the EU: they tend to provide quite strategic, contextual information about nation-state-level threats, because that’s naturally where the focus is, along with big vulnerability disclosures and maybe some sector-specific guidance. But because it’s operating at that macro level, it’s not enough on its own. That’s why I say it’s not enough: look at what the CRA and NIS2 require you to be looking for. They require you to monitor for threats that are really quite specific to your products and your supply chain. That means, effectively, knowing whether your components have vulnerabilities, whether your credentials are circulating on criminal forums, whether your employees or contractors or suppliers are vulnerable or being targeted. That kind of granular, operational intelligence is on you to collect, understand, interpret, and assess. Government just can’t provide that kind of granularity as a service to all industries all of the time. They don’t know your specific bill of materials, your supplier relationships, your attack surface. And there’s always a bit of a lag between government intelligence filtering down into public advisories, by which time, given the pace at which the threat is evolving, especially with AI and so on, attackers have probably moved on a little bit. That’s always part of the challenge with public advisories, and I think governments accept this.

We’re seeing regulatory guidance that explicitly encourages companies to use commercial threat intelligence. Look at the NCSC, for example. The NCSC’s guidance on supply chain security recommends continuous monitoring using multiple intelligence sources, and it seeks to equip companies to understand what the market is offering in the threat intelligence and continuous monitoring space, in order to make an informed choice for their organization between products that can be quite expensive, in some cases, and quite technical. So that, I think, is the role of government: helping you to make choices. But it’s the making of those choices that you still need to do in order to get the information that you need. So, if you’re relying solely on government threat intelligence, you’re probably not going to satisfy the appropriate-procedures standard in the regulations. You need to demonstrate proactive, continuous monitoring tailored to your risk profile. Loads of vendors do that; some are better than others.

I think fundamentally these regulations are trying to align compliance incentives with actual security outcomes. The idea is that we move away from box-ticking compliance towards actually improving your resilience to cyber-attack, which is in your commercial interest anyway, right? But it’s also about making sure that you can demonstrate, if you’re audited or if a supplier comes knocking, that you are doing all the right things, as well as actually using the intelligence in the right way.

Anyway, I’m conscious I’ve been talking for quite a long time, and I’ve got a few questions for you, Lindsay, if I may, about your experience at DarkOwl. So, reflecting on your experience with your clients, people who use your products, what kind of common challenges are they facing? Why is supply chain security important to them?

Lindsay: There are a few reasons. I think one is best explained by the way that cloud technology is creating what could be described as a logarithmic network effect, the sort of spider’s web that I described earlier, where the ease of integration between technologies, which is a brilliant thing, creates an enormous reliance on external parties and risk from the supply chain. As you mentioned earlier, and I think it’s worth repeating, last month Joe Levy, the CEO of Sophos, an enormous European cybersecurity company, summed it up nicely by saying that third-party risk management is now “Nth-party risk management.” That deserves repeating, given the endless string of suppliers involved in the provision of a product or a service.

And it’s not just B2C end products. Most B2B products at the infrastructure level now can’t escape a world of interdependence and overreliance on suppliers. We all thought data centers were the end of the supply chain thread, where risks are more controllable from a compliance perspective. But you just have to look at the recent events with undersea cables in the Red Sea to realize that no one is safe.

And I think another big issue is the diversity of regulations as they relate to supply chain risk. If your supply chain is getting longer, so too is the certainty that some of those suppliers are based in a different jurisdiction to your company, and they’re probably more focused on that jurisdiction from a compliance perspective. So, one of the regulations that you mentioned, the UK Cyber Security and Resilience Bill, extends the current network and information systems regulations to cover more ground, such as, like you mentioned, managed service providers. And that is an enormous chunk of the cybersecurity supply chain for a company of almost any size. So, not only are you contending with different suppliers, and more suppliers, but also with different countries and regulatory approaches in which those suppliers operate.

Rich: Yeah, so thinking about those security professionals’ jobs, how are they impacted day-to-day by these regulations?

Lindsay: Yeah, and I think that’s probably why we should talk specifically about people working in roles within threat intelligence and allied professions. If you take a look at the micro level, there are so many things to talk about even just within that category.

There’s a lot to consider. There’s the onboarding of third parties and all the checks that entails. Then, within the lifetime of a third-party contract, there’s the ongoing maintenance and technical debt, and there’s the offboarding, the decommissioning phase, often done with less support from cross-functional teams, who just want to get rid of the contract they have with the supplier.

There’s the ever-evolving world of application security too. But then, what about the consultants, in a world of outsourced services and staffing? The people who have been working on this technology: have they still got key cards? Then there’s the added issue of areas that are blended with corporate security responsibilities; you need to account for those too. And stepping back even further, this is all in a world in which an information security professional probably doesn’t actually own the supplier relationship or even project-manage the deployment. So, looking at all of these variables, from the job level for people working in security through to the macro level of nation-state threats, as you mentioned quite rightly, and the complex, interdependent supplier applications and networks, it’s no surprise that some of the prevailing guidance is about how to take matters into your own hands, as you ended with, because we can’t readily rely on the government to sort it out for us.

Rich: Yeah, I mean, there’s an inflection point between the things we’ve been talking about. With new technologies and big data, there’s more data out there swimming around; there’s got to be an opportunity there, right, to better understand these threats?

Lindsay: Yeah. We should probably talk about something positive in all of this, because no doubt we’re all affected by the speed with which data can be crawled and fused in the threat intelligence sector. And yes, the explosion in supply chains means that there are more ways for threat actors to get lucky: business email compromise, service desk social engineering, and beyond. There’s a broader attack surface, meaning there’s a need for more threat intelligence. And thankfully, I think you’re starting to see renewed attention on threat intelligence and open-source intelligence that encapsulates everything from APT group reconnaissance to Twitter feeds, and on the ways that we can fuse this normalized data to give warning signals to information security pros. Alert fatigue is a problem, especially the moment you introduce responsibilities to monitor the supply chain, and the wider regulatory consequences for doing so. The answer will inevitably lie in looking over the horizon, beyond the IT network, and addressing the issue strategically rather than tactically. And technology can certainly help us do that.

Rich: What a neat segue into DarkOwl. So how has DarkOwl helped to equip information security professionals and others to navigate that increasingly complex environment, think differently about it, and get ahead of those risks?

Lindsay: Yeah, when we were thinking about this, we developed something called DarkSonar. DarkSonar is a risk score: when there has been more activity surrounding a company’s domain and staff credentials on the darknet, it lets you know that more has been exposed than you’d normally expect. Breaking this apart a little bit, it gives a relative risk rating to an email domain that considers the nature, extent, and severity of credential leakage on the darknet, providing a company with a signal that acts as a measurement of its exposure in advance of an attack. Because we know that one of the biggest threat vectors to this day is still compromised credentials used for entry to a system. We tested this metric against 237 cyber-attacks occurring between 2021 and 2022 and found that our signal was elevated in the four months prior to an attack for 74% of the attacks on those organizations. And I suppose there are three things going on here.

So, number one, it’s data-driven advance warning as opposed to alerts after the fact. Number two, it’s scalable to all domains, suppliers included; in fact, SecurityScorecard has publicly endorsed it for this reason. And finally, it offers companies the ability to benchmark against the market, because a supplier may not know where its exposure lies in relation to other companies. Government departments and local councils, for example, sit side by side but are not always sure what level of exposure they should expect, especially if you can benchmark it against predicted breaches and ransomware attacks.

So, this is a way for you and your supplier to do just that. It’s one contribution we’re making to help organizations, at scale, look over the horizon at risks to their suppliers and, by extension, themselves.
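
The kind of relative scoring Lindsay describes, rating a domain's credential leakage against its own baseline, could be sketched roughly as below. DarkSonar's actual model is proprietary; the factor names (nature, extent, severity) come from the discussion, but the weights and arithmetic here are entirely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LeakObservation:
    records: int          # extent: how many credentials appeared
    has_plaintext: bool   # severity: plaintext passwords vs. hashes
    source_weight: float  # nature: e.g. fresh ransomware leak > recycled combo list

def exposure_score(observations: list, baseline: float) -> float:
    """Relative exposure for a domain: raw weighted leakage divided by the
    domain's historical baseline, so 1.0 means 'normal' and >1.0 elevated.
    All weights here are made-up illustrations, not DarkSonar's model."""
    raw = sum(
        o.records * (2.0 if o.has_plaintext else 1.0) * o.source_weight
        for o in observations
    )
    return raw / baseline if baseline else float("inf")

# One month of (hypothetical) observations for a domain.
month = [
    LeakObservation(records=120, has_plaintext=True, source_weight=1.5),
    LeakObservation(records=40, has_plaintext=False, source_weight=0.5),
]
print(f"relative exposure: {exposure_score(month, baseline=200.0):.2f}")
```

The useful design property is the division by a per-domain baseline: it turns an absolute count into a signal ("more exposed than you'd normally expect") that is comparable across domains of very different sizes.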

Rich:  That predictive model sounds really critical for businesses that are trying to get ahead of a threat, right? And/or if you want to criticize someone else in your supply chain who didn’t get ahead of it. I was just going to say, did you have some slides that kind of demonstrated that?

Lindsay: Yeah, if we look at a couple of examples that we threw together using brand names, just to make the data pop a little bit, you can visually evaluate the success of using this sort of metric to predict an oncoming attack: looking at Fujifilm and Robinhood and the ransomware attack and data breach that they experienced, respectively.

So yeah, I mean, this is something that we’re working on, we’re always looking for people to try it, to test it out. We like to be very transparent and understand where people are in their journey. This industry only works, threat intelligence only works if, you know, information flows both ways and we can certainly benefit from that. So, I mean, talking of that, I mean, perhaps we can turn to some questions from the floor because we’ve both been talking enough now.

So, Kathy, I don’t know if any questions have come in, but we can answer any that have.

Kathy: Yes, thanks, Lindsay. We have one question that came in: your webinar is looking at future trends, but from existing commercial customers, is DarkOwl seeing any trends today in how they leverage the darknet?

Lindsay: Good question. It’s funny, because I was reading the UK government’s chronic risks report that came out last month, and it details the ways in which so many risks are converging. The report consistently emphasized the interdependence of cyber risks, geopolitical risks, economic risks, even ecological risks. One of the long-term uncertainties it outlined is that the internet is going to fragment into so-called splinternets, meaning that regional policies will isolate digital interactions and data access, creating digital islands. When you add to that the explosion of VPN adoption in the UK since the Online Safety Act, and the risk of re-anonymizing the internet, what it all means is that a big trend we’re hearing from customers and partners is that they’re finally treating the darknet as an online space just like the rest of the internet, one that’s needed for brand protection and situational awareness just as much as the surface net is used for those same purposes.

Kathy: Okay. Thank you, Lindsay. We have another question that has come in: We are a mid-sized manufacturer, December 2026 feels close for CRA reporting obligations. What should we be doing now?

Rich: Yeah, it is close. I tend to advise my clients to phase their preparation over the next 12 to 18 months if they haven’t started already. Their first objective needs to be getting the right people on it: the right consultants and lawyers to help ensure preparedness. Then the first thing to do, I suggest, is to map the supply chain: get visibility into what components you’re using, who your critical suppliers are, what your relationships look like, and what contractual and commercial levers you’ve got to get information about their exposure to risk. Where do you have gaps? How do you fill them? Do you need to buy threat intelligence? Do you need to buy access to data? That kind of thing. Then you assess your vulnerability monitoring. Can you currently detect when your components have actively exploited vulnerabilities? Are you researching vulnerabilities entering your software yourself? If not, the former at least is quite a serious gap: start looking at continuous monitoring solutions, get some quotes, start integrating. Then, if you’re not already generating software bills of materials, SBOMs, start building or procuring that capability, because that again is foundational to compliance.
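
On the SBOM point, the shape of the artifact is worth seeing. Below is a minimal, hand-rolled fragment in the spirit of the public CycloneDX JSON format; the top-level field names follow that schema, but the component itself is a made-up example, and in practice you would generate SBOMs with a real tool integrated into your build, not by hand.

```python
import json

# Illustrative CycloneDX-style SBOM fragment (not tool-generated output).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            # Hypothetical dependency, listed with the identifiers that make
            # vulnerability matching possible: name, version, and package URL.
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "purl": "pkg:generic/openssl@3.0.13",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

The compliance value is exactly those per-component identifiers: once every component is enumerated this way, checking whether an actively exploited vulnerability affects your product becomes a lookup rather than an investigation.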

And then once all that’s in place, we need to look at incident notification planning. So, running tabletops: what does it look like in practice to meet that 24-hour notification timeline, as the case may be? Who needs to be involved, who calls whom, at what point, and who makes the decisions? And these could be quite big decisions, right? Like, do we pay a ransom? Whose job is it to decide, and what’s recorded? Where is it recorded? Probably not on a compromised system, right? Whose job is it to record everything? And then test the response procedures, document them, improve them, test them again. It’s a continuous cycle.

What else? Review supplier contracts. You probably need, or will be asked to give, CRA-specific warranties and indemnities; make sure they’re fair and that there isn’t a knee-jerk, complete and utter transfer of risk onto you. And on the topic of risk transfer, think about insurance. While I think I’ve said that the market is still maturing, if you’re not insuring, make sure the risk is at least surfaced and noted at the correct level. I think the companies that struggle in 2026 and 2027 will be those who see this as a last-minute compliance exercise, trying to buy their way into performative compliance at the last second. Not acting now, I suggest, is also a decision, and you should think about where the accountability for that decision might lie; if you don’t know where that is, it’s probably with you. The companies that succeed in 2027 will be those who have embedded security monitoring, continuity planning, and so on into their operations now. Easier said than done, right? It needs investment of time and money and people, but that’s the way of the world.

Kathy: Great. Thank you, Rich. Both Lindsay and Rich, that looks like that’s the last of our questions, and I just want to thank the both of you for an insightful discussion today.

And as a reminder to all of our attendees, we will be following up via email with a link to the recording and other resources. If you’d like to contact either Lindsay or Rich, their contact information is presented on this slide. And we thank you again, and we look forward to seeing you at another webinar in the future.


Questions? Contact us.

Threat Intelligence RoundUp: September

October 01, 2025

Our analyst team shares a few articles each week in our email newsletter, which goes out every Thursday. Make sure to register! This blog highlights those articles in order of popularity in our newsletter – what our readers found the most intriguing. Stay tuned for a recap every month. We hope sharing these resources and news articles emphasizes the importance of cybersecurity and sheds light on the latest in threat intelligence.

1. Hackers breach fintech firm in attempted $130M bank heist – Bleeping Computer

Sinqia, Evertec’s Brazilian subsidiary, disclosed to the U.S. Securities and Exchange Commission (SEC) that its systems were breached by hackers on August 29 with the intent to conduct unauthorized transactions. The hackers specifically targeted Pix, the Brazilian Central Bank’s real-time payment system. Access to Pix was gained through stolen credentials belonging to an IT vendor. Evertec has reported that an undisclosed portion of the $130 million has been recovered. No specific hacker group has been linked to the attack. Read full article.

2. Iranian Hackers Exploit 100+ Embassy Email Accounts in Global Phishing Targeting Diplomats – The Hacker News

Dream, the Israeli cybersecurity company, claims an Iranian-nexus group targeted embassies and consulates in Europe via a spear-phishing campaign. The emails contained information regarding geopolitical tensions between Iran and Israel, and prompted individuals to open a Word document that urges recipients to “Enable Content” in order to execute an embedded Visual Basic for Applications (VBA) macro, which is responsible for deploying the malware payload. The hackers sent emails to organizations located in the Middle East, Africa, Europe, Asia, and the Americas, casting a wide net in an attempt to gain access and harvest information. Article here.

Following extradition from Kosovo in May, Liridon Masurica has pled guilty in a Florida Federal Court. Masurica was the lead administrator of the online criminal marketplace BlackDB.cc from 2018 to 2025. Records show he pled guilty to leading the organization and has also been charged with five counts of fraudulent use of unauthorized access devices and one count of conspiracy to commit access device fraud. Read more here.

On September 12, the FBI released a FLASH alert “to disseminate Indicators of Compromise (IOCs) associated with recent malicious cyber activities by cyber criminal groups UNC6040 and UNC6395”. The alert follows the tracking of UNC6395, which targeted companies’ support case information in Salesforce from August 8th to 18th. The exfiltrated data was analyzed to extract secrets, credentials, and authentication tokens shared in support cases. After discovery, Salesforce revoked all Drift tokens and required customers to reauthenticate the platform. Mandiant disclosed information regarding UNC6040 in June, warning of social engineering and vishing attacks connected to Salesforce accounts. Read here.

5. Airport disruptions in Europe caused by a ransomware attack – Bleeping Computer

Several European airports experienced a ransomware attack that affected the check-in and boarding systems. The attack targeted Collins Aerospace, the external provider for both systems. Beginning Friday evening, hackers targeted the MUSE (Multi-User System Environment) system, causing over 100 delayed and cancelled flights throughout the weekend. The attack was confirmed by the European Union Agency for Cybersecurity (ENISA) and the agency claimed the hackers were attempting to lock up data and systems in “an attempt to score a ransom”. All reports claim that the incident was resolved by Monday. Learn more.

6. AI-powered malware hit 2,180 GitHub accounts in “s1ngularity” attack – Bleeping Computer

On August 26, threat actors exploited a flawed GitHub Actions workflow in the Nx repository, resulting in the exposure of 2,180 accounts. The telemetry.js malware is a credential stealer that targets Linux and macOS systems. The malware attempted to steal “GitHub tokens, npm tokens, SSH keys, .env files, crypto wallets”. Three separate phases were completed during the attack, which led to 7,200 repositories being exposed. Read full article.

7. Massive anti-cybercrime operation leads to over 1,200 arrests in Africa – Bleeping Computer

In an August 22 press release, INTERPOL announced the arrest of 1,209 cybercriminals who targeted nearly 88,000 victims as part of an INTERPOL-coordinated operation dubbed “Operation Serengeti 2.0.” As noted in the statement, the operation took place between June and August 2025 and involved investigators from 18 countries across Africa as well as from the U.K. Nine private sector partners also assisted with the investigation. The operation resulted in the recovery of $97.4 million and the dismantling of 11,432 malicious infrastructures. Read full article.

8. Google nukes 224 Android malware apps behind massive ad fraud campaign – Bleeping Computer

The Android ad fraud operation “SlopAds” was disrupted after 224 malicious applications on Google Play were found to be generating 2.3 billion ad requests per day. The operation was discovered by HUMAN’s Satori Threat Intelligence team. The applications were downloaded over 30 million times and used obfuscation and steganography to avoid detection; once detection was avoided, the “FatModule” malware would be activated. One evasion tactic lay in the way the app was downloaded: if installed through the Play Store, it acted as a normal app, but if installed by clicking through an ad, “it downloads four PNG images that utilize steganography to conceal pieces of a malicious APK.” Learn more.


Make sure to register for our weekly newsletter to get access to what our analysts are reading on a weekly basis.

Cyber Security Awareness Month: Upcoming Content

October 01, 2025

In light of Cybersecurity Awareness month, DarkOwl is committed to sharing research, trends and industry news from our analysts.

Be the first to know as we release new research by entering your email below!

Upcoming Content This Month

BLOG

Threat Intel Round Up: September

Our analyst team shares a few articles each week in our email newsletter which goes every Thursday. Make sure to register! This blog highlights those articles in order of what was the most popular in our newsletter – what our readers found the most intriguing. Stay tuned for a recap every month. We hope sharing these resources and news articles emphasizes the importance of cybersecurity and sheds light on the latest in threat intelligence. Check it out.

it-sa Expo & Congress

We will be at it-sa 365, Europe’s largest trade fair for IT security and one of the most important dialogue platforms for IT security solutions. The trade fair covers the entire range of products and services in the field of cybersecurity: hardware, software, training and consulting services as well as Security as a Service. Stop by and meet with us at Booth 9 – 349. Meet us!

New Regulations & What They Mean for Your Supply Chain

This fireside chat explores challenges and opportunities of incoming regulations impacting cybersecurity in the UK and EU.

Greater digitalization brings with it an avalanche of Third Party integrations and supplier exposure. Rich Hanstock (pwn.legal) and Lindsay Whyte (DarkOwl) explore what new regulations mean for cybersecurity teams, and the change in attitudes required to reassure regulators and customers alike.

Discover how DarkOwl’s DarkSonar helps organizations build a resilient, responsive supply chain security strategy that aligns with Europe’s regulatory future. Transcription and recording here.

What is a DDoS Attack?

Cybersecurity might as well have its own language. There are so many acronyms, terms, and sayings that cybersecurity professionals and threat actors both use that, unless you are deeply knowledgeable, have experience in the security field, or have a keen interest, you may not know them. Understanding what these acronyms and terms mean is the first step to developing a thorough understanding of cybersecurity and, in turn, better protecting yourself, your clients, and your employees.

In this blog series, we aim to explain and simplify some of the most commonly used terms. Previously, we have covered bullet proof hosting, CVEs, APIs, brute force attacks, zero-day exploits, doxing, data harvesting, IoCs, and credential stuffing. In this edition, we dive into DDoS attacks. Read it here!

AI vs AI: How Threat Actors and Investigators are Racing for Advantage

AI is transforming investigations, but also transforming adversarial tradecraft. How do we keep pace? From Telegram channels to dark web marketplaces, threat actors are using AI to accelerate crime, propaganda and deception. OSINT Combine and DarkOwl break down what’s happening behind the scenes and how investigators can keep up. Topics of discussion:

  • Exploration of how cybercriminals and terrorist groups are experimenting with AI technologies
  • Emerging dark web trends
  • Overview of AI-augmented investigation techniques
  • How investigators use AI for data collection
  • Detecting Disinformation and Synthetic Content
  • Live collaborative analysis by DarkOwl and OSINT Combine

Register here. Transcription to follow.

Stay tuned for our quarterly update blog highlighting new product features and collection stats updates. There is always something exciting coming from our Product and Collections teams, and they are eager to share this round of updates!

Cyber Hygiene at Work & Home

In this blog, we will highlight best practices for a safer digital life.

Indicator of Attack 101

The blog “Indicator of Attack 101” introduces the concept of Indicators of Attack (IoAs), explaining how they differ from Indicators of Compromise (IoCs) and why IoAs are crucial for proactive cyber defense.

Command-and-Control Frameworks – Post Exploitation in Plain Sight

Command-and-control (C2) frameworks are used by both red teams and cybercriminals. They provide a wide range of functionality and capabilities that make post-exploitation tactics easier and more effective. In simple terms, a C2 acts as a central server that connects to, communicates with, and manages compromised systems. It establishes persistence and allows the operator to control dozens of infected machines from one central environment.

Halloween: Spooky Finds on the Dark Web

The darknet can be a scary place. 👻 For Halloween, we will highlight some spooky findings from our analyst team that they have come across this past year. In the meantime, check out our previous edition, where the team uncovered human organs for sale, human meat for sale, and hitmen for hire! Check out last year’s blog here.


Curious to see how darknet data can improve your cybersecurity situational awareness? Contact us.

Dark Web Pharmacy and Illegal Rx Medication Sales

September 23, 2025

Dark web “pharmacies” have become a global black market for prescription medications and counterfeit drugs. These underground vendors operate on hidden parts of the internet, accessible only with special software like Tor, and sell everything from opioid painkillers and anxiety meds to fake pills. Recent international crackdowns have led to hundreds of arrests across multiple continents, showing just how far-reaching and organized this trade has become. By using encryption and anonymous networks, dark web drug sellers connect with buyers around the world while evading traditional law enforcement. This blog looks at where these rogue pharmacies are found and the platforms they use to move drugs outside the law. 

Darknet Marketplaces

The majority of dark web pharmacy operations take place on multi-vendor marketplaces – hidden websites (with “.onion” addresses) that function like illicit versions of eBay or Amazon. Vendors set up listings for drugs, and buyers browse and purchase through the marketplace. These sites provide built-in escrow payment systems and customer review ratings, which help establish trust between anonymous buyers and sellers. Well-known examples from the past include Silk Road and AlphaBay, and new marketplaces continually arise to replace those shut down by police. 

Independent Vendor Sites

Some drug sellers also run their own standalone websites on the dark web. Instead of using a shared marketplace, they maintain a dedicated “storefront” hidden service. For example, one U.S. vendor continued operating a personal darknet website offering several types of illicit pills even after facing initial charges. These independent sites let a vendor control their platform, though attracting customers can be harder without the built-in traffic of a large market. They also lack the escrow protections of major marketplaces, meaning buyers have to trust the vendor directly. 

Encrypted Chats and Forums

In addition to Tor websites, a portion of illegal drug trade is arranged in private forums or encrypted messaging apps. Recent threat intelligence reports note a shift toward dealers making direct deals via platforms like Telegram, Signal, or Discord. Vendors advertise in chat groups or forums and then accept orders one-to-one, often taking payment in cryptocurrency. This method helps them reach less tech-savvy buyers (who may not navigate Tor) and avoid the fees or exit scams associated with big darknet markets. However, like independent sites, these direct transactions usually forego escrow – increasing the risk of scams or non-delivery if the buyer isn’t careful. 

Sourcing & Production 

  • Diverted Rx stock, bulk APIs from overseas brokers, or outright counterfeit precursors; opioids/benzos are common targets.  
  • Pill-pressing with dies/logos to mimic pharma tablets (e.g., “Xanax” bars); dosage is inconsistent and unregulated.  

Platform & Presence 

  • Multi-vendor marketplaces (escrow, ratings), independent Tor shops, and encrypted chat/closed forums; vendors diversify IDs to hedge takedowns.  
  • Leverage market feedback systems; promote “stealth,” shipping success rates, and refunds to drive buyer trust. (Observed repeatedly in takedown summaries and market analyses.)  

Security & Comms 

  • Tor access; PGP for messages; crypto payments (BTC; privacy coins like XMR increasingly preferred per EU assessments).  
  • Rotate handles, swap P.O. boxes/mailing points, segment roles (pressing vs. packing vs. posting), and avoid reusing identifiers.  

Listings, Sales & Payment 

  • Detailed SKU pages (dosage, “brand,” batch claims), pricing tiers, bulk discounts; some offer testing “proofs.”   
  • Funds held until delivery confirmation; DM/PGP comms for issues; off-platform direct deals used to avoid fees—higher scam risk.  

Fulfillment & “Stealth” Shipping 

  • Vacuum sealing, odor barriers, concealment in benign items, innocuous labels/returns; postal systems are the primary vector.  
  • Frequent post-office drops.  

Cash-out & Continuity 

  • Peel chains, mixers, P2P off-ramps. 
  • After market seizures, vendors relist quickly elsewhere under new monikers.  

Risk & Authenticity Note (for Rx specifically) 

  • A non-trivial share of “pharma” listings are counterfeit or misbranded (e.g., fake alprazolam/oxycodone); several rings pressed millions of pills sold as name-brand meds.  

Most pills sold on the dark web are not genuine pharmaceuticals. Law enforcement has caught countless vendors making their own tablets with pill presses, stamping them with real drug logos, and selling them as Xanax, oxycodone, or Adderall. Some are made with raw ingredients shipped from overseas; others are mixed in makeshift labs with no quality control. 

The danger is what’s inside: pills advertised as painkillers often contain fentanyl, and fake Adderall tablets have been found packed with meth. Even if a pill looks real, its contents may be wrong, too strong, or contaminated. A single counterfeit dose can be deadly. 

Scams are common too—some sellers simply take your money and never ship. Marketplaces use escrow to limit this, but if you buy directly through a website or chat, you’re on your own. 

Dark web pharmacies may look like convenient, no-questions-asked sources for prescription drugs, but the reality is far more dangerous. Most pills sold online are counterfeit, misbranded, or laced with powerful substances like fentanyl or meth. Even when products appear legitimate, there is no quality control, no guarantee of safety, and no way for buyers to know what they are really taking. 

While these underground vendors rely on encryption, hidden websites, and clever shipping tactics to stay one step ahead, law enforcement has shown that they are not untouchable. Major operations around the world have taken down marketplaces, seized millions of fake pills, and arrested key players. Still, new vendors and sites quickly emerge to replace the old ones. 

In the end, buying from a dark web pharmacy is a gamble with high stakes. The risks include wasting money, falling victim to scams, or, most critically, consuming a counterfeit pill that could be deadly. The safest choice remains the obvious one: only use medications prescribed by a doctor and dispensed by a licensed pharmacy. 

How Darknet Threat Actors Are Using AI and Why It Matters 

September 18, 2025

Artificial intelligence has quickly become one of the most disruptive forces in cybersecurity. On the surface, AI promises efficiency, smarter defenses, and automation. But it is also being exploited by criminals in underground forums and marketplaces. The darknet has always been a hub for phishing kits, ransomware gangs, and stolen data markets. What has changed is the speed and polish of those attacks. AI has not created new crimes, but it has made the old ones sharper, more scalable, and harder to defend against. 

To understand the risks, you need to look closely at how threat actors are adopting AI in three areas where the damage is already visible: phishing, ransomware, and stealer logs. Alongside that, it’s worth exploring how the darknet economy itself is shifting to a subscription-based model that feels eerily similar to legitimate tech marketplaces. 

Phishing is one of the oldest tricks in the book. Traditionally, it relied on blasting out mass emails and hoping a few recipients clicked on malicious links. These campaigns were often riddled with errors: bad grammar, odd formatting, and suspicious sender addresses. They worked well enough to snare the unwary, but many were easy to spot. 

AI has changed that. In 2023, tools like FraudGPT and WormGPT appeared for sale across darknet forums and Telegram channels. FraudGPT was promoted as a chatbot with “no limitations, no filters, no boundaries.” It promised to help criminals craft polished phishing emails, generate fake websites, and even produce malicious code. Sellers marketed it in the same way a SaaS startup would market legitimate tools, with clear feature lists and monthly or annual subscription options. Reports suggest prices started around $200 per month or $1,700 per year, and the tool quickly gained traction among low-skill actors. 

WormGPT took a similar path. Built on GPT-J, an open-source large language model, it was pitched as a blackhat version of ChatGPT. Access was sold for about $110 per month. Its purpose was direct and simple: create convincing phishing emails at scale. No broken grammar, no obvious red flags, just messages that looked like they came from HR, finance, or a trusted business partner. 

The sophistication of phishing is no longer limited to email. Voice cloning and deepfakes have introduced new angles. A call that sounds exactly like your CEO asking for an urgent wire transfer is no longer a far-fetched scenario. In fact, there have already been documented cases where voice cloning was used to defraud companies out of millions. With AI, creating those convincing imitations is faster, cheaper, and accessible to far more actors. 

Phishing is no longer amateur hour. It is a professionalized service where attackers can outsource creativity to AI. 

Ransomware groups are also adapting AI to their playbooks. Their goal is still the same: encrypt critical systems, steal sensitive data, and demand payment. But AI is streamlining the process. 

Some ransomware crews are using AI to refine malicious code and bypass defenses more effectively. Others are experimenting with automated infection chains where AI scripts help identify weak points in networks and tailor payloads to exploit them. In some cases, AI has even been proposed for ransom negotiations, where chatbots could pressure victims with manipulative tactics and personalized responses. 

This isn’t happening in a vacuum. Ransomware gangs are structured like businesses. They often run affiliate programs, recruit developers, and maintain support channels for buyers. AI fits neatly into that structure. It reduces the technical barrier, speeds up development, and frees attackers to scale operations. 

The real danger is not just that AI makes ransomware more efficient. It also makes entry into ransomware easier. Someone with little coding experience can join an affiliate program, buy access to AI tools, and launch a campaign without building malware from scratch. The result is more actors competing for victims, which increases the volume of attacks globally. 

If phishing is the entry point and ransomware is the hammer, stealer logs are the raw material that fuels countless other crimes. A stealer log is a collection of data siphoned from an infected machine: usernames, passwords, browser cookies, autofill data, cryptocurrency wallets, system details. For years, these logs have been sold in bulk on darknet markets. 

AI has made them far easier to exploit. Instead of combing through messy text files manually, criminals now use AI-driven tools to parse, filter, and prioritize data. They can search for keywords like “PayPal” or “VPN” and instantly extract the most valuable credentials. Dashboards sold with these logs make it simple for even unskilled actors to profit. 
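Defenders can borrow the same triage idea when reviewing their own exposed data. The sketch below assumes a simple url:username:password line format (real stealer-log layouts vary by malware family) and ranks entries against a keyword watchlist; the keyword list and sample lines are invented for illustration, not taken from any real log.

```python
# Hypothetical triage of a credential dump. Assumes each line is
# "url:username:password"; actual stealer-log formats differ by family.
HIGH_VALUE = ("paypal", "vpn", "bank", "okta", "aws")

def triage(lines):
    """Return high-value (url, username) pairs, highest score first."""
    hits = []
    for line in lines:
        parts = line.strip().rsplit(":", 2)  # the url itself may contain ':'
        if len(parts) != 3:
            continue  # skip malformed entries
        url, user, _password = parts
        score = sum(kw in url.lower() for kw in HIGH_VALUE)
        if score:
            hits.append((score, url, user))
    return [(url, user) for _score, url, user in sorted(hits, reverse=True)]

sample = [
    "https://vpn.example.com:alice:hunter2",
    "https://news.example.org:bob:pw123",
    "https://paypal.example.com:carol:secret",
]
print(triage(sample))
```

The same few lines of filtering are what commercial "log dashboards" wrap in a UI: the value is not the parsing, it is knowing which services matter to you.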

Consider Rhadamanthys, a stealer that first appeared in late 2022. By mid-2024, version 0.7.0 introduced an unusual AI-powered capability: optical character recognition. It could scan images on infected devices and extract text, including cryptocurrency wallet seed phrases. This meant that even if users thought they were safe storing keys as screenshots, the malware could still retrieve them. 

Rhadamanthys is sold openly on forums. Licenses go for about $250 per month or $550 for 90 days. Its operators actively update the malware, provide customer support via Telegram, and advertise new features. In 2024, it was deployed through phishing campaigns disguised as copyright infringement notices, targeting victims across Europe, Asia, and the Americas. 

Beyond individual families, the stealer ecosystem is vast. Russian Market alone lists millions of stolen logs, and services like MoonCloud repackage them into searchable databases distributed via Telegram. These markets are increasingly structured and automated, looking more like data brokers than ad-hoc criminal sales. 

One of the most striking trends is how the darknet has adopted the language and business model of the tech industry. Gone are the days of one-off toolkits passed quietly between hackers. Today, the underground thrives on subscriptions and services. 

Fraud as a service. Phishing as a service. Ransomware as a service. Infostealers with monthly licensing models. AI has lowered the barrier to entry so far that the ecosystem resembles a SaaS marketplace more than a shadowy corner of the web. For a few hundred dollars a month, anyone can buy access to tools that rival those used by advanced threat groups. 

This professionalization is why the threat landscape feels so much more crowded. More people can play the game. The cost of entry is low. And the tools are good enough to work. 

If criminals are scaling with AI, defenders cannot rely on traditional defenses alone. Organizations need visibility into the spaces where these tools are sold and discussed. That is where DarkOwl provides value. 

DarkOwl monitors darknet forums, encrypted channels, and marketplaces where AI-enabled tools and stolen data appear. It can identify when a new phishing kit is advertised, when stealer logs containing company credentials are posted, or when chatter about impersonation campaigns surfaces. More importantly, DarkOwl delivers context. A stolen password alone is one data point. Context explains whether it is tied to a broader campaign, how it was obtained, and whether similar data is being circulated elsewhere. 

This intelligence is not meant to sit in a report. Organizations can act on it by building alerting workflows, so security teams are notified when company credentials show up in stealer logs, updating phishing playbooks with new lures seen in underground communities, and protecting executives and brands by monitoring for deepfake or impersonation campaigns. 
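As a rough illustration, an alerting workflow of this kind boils down to matching incoming darknet records against a watch list of corporate domains. Everything in this sketch is a placeholder: the feed function, record fields, and domains stand in for whatever data source and schema a security team actually uses, not a real DarkOwl integration.

```python
# Sketch of a credential-exposure alerting loop. `fetch_new_records`
# is a stand-in for a real darknet-data feed or API; the record
# shape and domains below are invented examples.
WATCHED_DOMAINS = {"example.com", "corp.example.net"}

def fetch_new_records():
    # Placeholder: in practice this would page through a vendor API.
    return [
        {"email": "alice@example.com", "source": "stealer-log"},
        {"email": "random@other.org", "source": "forum-post"},
    ]

def check_exposures(records):
    """Return records whose email domain is on the watch list."""
    alerts = []
    for rec in records:
        domain = rec["email"].rsplit("@", 1)[-1].lower()
        if domain in WATCHED_DOMAINS:
            alerts.append(rec)
    return alerts

alerts = check_exposures(fetch_new_records())
for a in alerts:
    print(f"ALERT: {a['email']} seen in {a['source']}")
```

In production the print would become a ticket, a Slack/webhook notification, or a SOAR trigger, but the matching step stays this simple.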

DarkOwl does not just collect data; it helps organizations use it. That difference is what turns visibility into defense. 

AI has not changed the fundamentals of cybercrime. Criminals are still phishing, encrypting, and stealing. What has changed is the scale and accessibility. FraudGPT makes phishing believable. WormGPT mass-produces scams. Rhadamanthys uses AI to scrape sensitive data from images. Marketplaces sell logs with dashboards and filters that look like professional analytics tools. The Darknet is evolving, and AI is accelerating the pace. 

The world cannot afford to ignore that shift. Defenders need to see what is happening in the underground as it unfolds. DarkOwl delivers that window, giving organizations the ability to anticipate threats, connect the dots, and respond before AI-driven attacks land. 


Have questions? Contact us.

Antivirus vs Antimalware: What’s the Real Difference and Do You Need Both?

September 16, 2025

We all know cybersecurity has its own language. As staying cyber safe becomes increasingly vital to companies and individuals alike, it’s important to have a basic understanding of common terms. In this blog, let’s explore the subtle differences between antivirus and antimalware and whether you need both.

The terms “antivirus” and “antimalware” are often used interchangeably. It is important to understand that while they are related, there is a historical difference and a functional distinction.

Antivirus

Antivirus is a type of software designed to detect, prevent, and remove malicious programs from a computer or network. While the name historically refers to software that protects against computer viruses specifically, the term has evolved to encompass protection against a wide range of cyber threats. It acts as a crucial defense against various digital threats that can harm your system, steal data, or compromise your privacy.

Traditionally, antivirus software excelled at:

  • Signature-Based Detection: This method relies on a vast database of “signatures” – unique digital fingerprints of known viruses. When a file is scanned, its code is compared to these signatures. If a match is found, the virus is identified and dealt with.
  • Preventing Replication: Its primary objective was to stop viruses from attaching themselves to legitimate programs and spreading across your system or network.
  • Cleaning and Quarantining: Upon detection, it would either “clean” (remove the malicious code from an infected file) or “quarantine” (isolate the infected file to prevent it from causing further harm) the threat.

One can think of antivirus as a specialist. It was exceptionally good at identifying and neutralizing the self-replicating, often disruptive, digital invaders that defined the early days of cybercrime.
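A toy version of signature-based detection makes the mechanism concrete: hash the file and look the digest up in a known-bad set. The sample "signature" below is invented, and real engines also match byte patterns inside files rather than only whole-file hashes.

```python
import hashlib

# Minimal sketch of signature-based detection: compare a file's
# SHA-256 digest against a database of known-bad hashes.
# The "malicious" payload here is a made-up test string.
KNOWN_BAD = {
    hashlib.sha256(b"EICAR-LIKE-TEST-PAYLOAD").hexdigest(),
}

def scan(data: bytes) -> str:
    """Return 'quarantine' if the data matches a known signature."""
    digest = hashlib.sha256(data).hexdigest()
    return "quarantine" if digest in KNOWN_BAD else "clean"

print(scan(b"EICAR-LIKE-TEST-PAYLOAD"))
print(scan(b"harmless document text"))
```

The obvious weakness is also visible here: change a single byte of the payload and the hash no longer matches, which is exactly why signature databases must be updated constantly and why heuristics were added later.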

As the threat landscape evolved, so did the sophistication of malicious software. Viruses were still a threat, but now we were up against worms, Trojans, spyware, adware, ransomware, rootkits, and more. This is where the lines begin to blur and the term “malware” enters. It is important to note that while all viruses are malware, not all malware is a virus. That distinction is the crux of the difference between “antivirus” and the more encompassing “antimalware.”

Antimalware

Antimalware is a type of software designed to detect, prevent, and remove all forms of malicious software (malware) from computers and other digital devices. Unlike traditional “antivirus” that historically focused primarily on computer viruses, antimalware offers a broader, more comprehensive defense against the entire spectrum of digital threats.

Threats that antimalware defends against include:

  • Viruses: The original self-replicating programs that attach to legitimate software.
  • Worms: Standalone malicious programs that spread across networks without needing a host program.
  • Trojans (Trojan Horses): Programs that appear legitimate but hide malicious functions, often creating backdoors for attackers.
  • Ransomware: Malware that encrypts a victim’s files, demanding payment (ransom) for their decryption.
  • Spyware: Software that secretly monitors and collects information about a user’s activities without their knowledge or consent.
  • Adware: Software that automatically displays unwanted advertisements, often bundled with free programs.
  • Rootkits: Malicious software designed to hide the existence of other malware and enable persistent privileged access to a computer.
  • Keyloggers: Programs that record every keystroke made by a user, potentially capturing sensitive information like passwords.
  • Bots/Botnets: Software that allows an attacker to remotely control a compromised computer, often as part of a larger network of infected machines (a botnet).

Antivirus traditionally focuses on file-infecting threats; antimalware is more adept at combating newer, evolving threats that may not be file-based.

Antivirus

  • specific type of protection
  • combats file-infecting threats
  • basic scanning, detection, removal, and quarantine of viruses
  • relies on signature-based detection (databases of known virus “fingerprints”)
  • the original digital defense; the term is somewhat historical but often used generically (commonly used by the general public, but often refers to a broader “antimalware” solution)

Antimalware

  • broad and comprehensive protection
  • combats new, evolving threats that may not be file-based
  • real-time protection, advanced threat blocking, web/email protection, exploit prevention, sandboxing
  • incorporates more advanced, proactive methods like heuristic analysis and behavioral monitoring to catch unknown threats
  • the evolution of antivirus; the more accurate term for today’s holistic digital protection
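Behavioral monitoring, one of the proactive methods listed above, looks at what a program does rather than what it contains. This toy sketch flags a process that writes to many files within a short window, a pattern typical of ransomware encryption; the thresholds are illustrative, not tuned values from any real product.

```python
import time

# Toy behavioral heuristic: flag a process that modifies many files
# in a short sliding window, a pattern typical of ransomware.
# The 50-files-per-10-seconds threshold is an illustrative guess.
class BehaviorMonitor:
    WINDOW_SECS = 10
    THRESHOLD = 50

    def __init__(self):
        self.events = []  # (timestamp, path) pairs

    def record_write(self, path, now=None):
        """Record a file write; return True once activity looks suspicious."""
        now = time.monotonic() if now is None else now
        self.events.append((now, path))
        # keep only events inside the sliding window
        cutoff = now - self.WINDOW_SECS
        self.events = [e for e in self.events if e[0] >= cutoff]
        return len(self.events) >= self.THRESHOLD

monitor = BehaviorMonitor()
flagged = False
for i in range(60):  # simulate a rapid burst of file writes
    flagged = monitor.record_write(f"/docs/file{i}.locked", now=float(i) * 0.1)
print(flagged)
```

Because the heuristic keys on behavior rather than a signature, it can catch a brand-new ransomware family on day one, at the cost of occasional false positives (think backup software), which is why real products combine both approaches.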

Earlier this year, researchers at TrendMicro observed the Chinese state-sponsored threat actor Mustang Panda (also known as Earth Preta) using a new technique to “evade detection and maintain control over infected systems.” Specifically, the hacking group uses the legitimate Microsoft Application Virtualization Injector (MAVInject.exe) to “inject payloads into waitfor.exe whenever an ESET antivirus application is detected.” As highlighted in TrendMicro’s report, Mustang Panda is known for targeting victims in the Asia-Pacific region, with one of its recent campaigns utilizing a variant of DOPLUGS malware to target multiple countries in the region, including Taiwan, Vietnam, and Malaysia. The threat actor notably targets government entities and “has had over 200 victims since 2022.” 

DarkOwl does not recommend running both antimalware and antivirus software at the same time. Doing so can cause conflicts and redundancies, as well as slow down your computer. It is recommended to have one comprehensive security solution active at a time; a single program will provide all the necessary layers of protection without causing conflicts. This is why many companies have moved from branding their products as “Antivirus” to names like “Internet Security,” “Total Protection,” or simply “Endpoint Protection” to reflect the broad range of threats they address.

As always, practice good cyber hygiene – check to make sure that your current software is up-to-date and offers multi-layered protection.

Ultimately, the distinction between “antivirus” and “antimalware” is not just semantic; it reflects the evolution of the cybersecurity landscape. While antivirus was our original digital defense, designed to combat the classic computer virus, today’s multifaceted threat environment demands a more comprehensive solution. A modern antimalware program is that solution, offering multi-layered protection against everything from file-infecting viruses to sophisticated ransomware and fileless malware.

As we’ve established, you do not need both—and for the sake of your system’s performance and security, you shouldn’t run both. The best practice is to choose one powerful, reputable security suite that is regularly updated. This single tool, combined with your own vigilance and good cyber hygiene, is your strongest defense against the full spectrum of digital threats today and in the future.


Don’t miss anything from DarkOwl. Subscribe to email.

Is Your City on the Dark Web? What Local Agencies Need to Know 

September 09, 2025

In 2023, investigators in a midsize U.S. city were tipped off to a darknet marketplace vendor offering “same-day delivery” of fentanyl-laced pills within specific zip codes. The listing named street corners and used coded references to local schools. It was not discovered by routine patrols or a community tip. It was found in an online space most local agencies never check: the dark web. 

The dark web is not just a place for global cybercriminal networks. It is a sprawling ecosystem where local-level threats are planned, traded, and discussed. Understanding what is being said about your city, and acting on it, can mean stopping crime before it happens. 

A Hidden Hub for Localized Criminal Activity 

Criminal forums, encrypted chat channels, and darknet leak sites often contain references to specific cities, schools, or government offices. These may range from targeted doxxing threats against police officers to lists of stolen IDs from local residents. Without visibility into these spaces, agencies risk missing critical intelligence (NIJ). 

Growing Scale of Criminal Commerce 

Dark web markets remain a preferred channel for selling drugs, stolen goods, counterfeit currency, and hacking tools. Europol has documented that some sellers specialize in hyper-local delivery, building trust with buyers in their own city. One marketplace studied by the NIJ generated $219 million annually, a portion of which was linked to transactions tied to specific U.S. cities. 

Evidence of Real-World Impact 

The FBI’s Internet Crime Complaint Center (IC3) reported 880,418 cybercrime complaints in 2023, a 10 percent increase over 2022, with losses exceeding $12.5 billion (FBI IC3). While many of these cases start online, a significant number have local victims and suspects, with planning or stolen data originating from the darknet. 

Darknet references to a locality typically fall into a few recognizable categories:

  1. City and County Names – Drug vendors advertising “free delivery within [city limits]” or fencing stolen goods. 
  2. Schools and Universities – Targets of swatting threats, harassment campaigns, or worse. 
  3. Police Departments – Mentioned in extremist forums or ransomware leak sites after data breaches. 
  4. Hospitals and Public Services – Victims of cyberattacks where stolen patient data is posted for sale. 
  5. Street-Level Detail – Criminals using neighborhood or landmark names to coordinate illicit meetups. 

These are not hypothetical. They appear regularly in open-source criminal case records and public takedown reports. 

When local law enforcement gains visibility into the darknet, it often changes how investigations unfold. For example: 

  • Drug Enforcement – Narcotics units can identify vendors selling in their jurisdiction, connect them to street-level operations, and coordinate controlled buys. 
  • Cybercrime and Fraud – Financial crimes units can trace stolen credit cards, bank logins, or PII from local residents back to breaches. 
  • Threat Assessment – School resource officers or fusion centers can evaluate online threats referencing specific campuses. 

This process often begins with keyword and geographic monitoring, searching for place names, zip codes, or organizational identifiers in darknet marketplaces, forums, and leak sites. Tools like DarkOwl can streamline this by indexing these spaces and allowing agencies to search them without direct engagement. All DarkOwl data is collected in compliance with U.S. Department of Justice guidelines, ensuring passive, lawful acquisition from darknet and darknet-adjacent sources. 
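A minimal version of that keyword and geographic monitoring step might look like the sketch below, which scans collected text against a watchlist of place names and ZIP codes. The watchlist entries and sample posts are invented examples, not DarkOwl query syntax.

```python
import re

# Sketch of keyword/geo monitoring: scan collected text for a
# watchlist of place names and ZIP codes. The entries below are
# invented; a real deployment would use an agency's own
# jurisdiction terms and identifiers.
WATCHLIST = ["springfield", "riverside county", "62704"]
PATTERN = re.compile("|".join(re.escape(t) for t in WATCHLIST), re.IGNORECASE)

def find_mentions(doc_id, text):
    """Return (doc_id, matched term) pairs for analyst review."""
    return [(doc_id, m.group(0).lower()) for m in PATTERN.finditer(text)]

posts = {
    "post-1": "Same-day delivery anywhere in Springfield, zip 62704 only.",
    "post-2": "Selling bulk accounts, worldwide shipping.",
}
hits = [h for pid, txt in posts.items() for h in find_mentions(pid, txt)]
print(hits)
```

Each hit is a lead for review rather than evidence in itself; the value of a platform layer on top is deduplication, context, and lawful, preserved collection.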

In 2021, the Babuk ransomware group breached the Metropolitan Police Department in Washington, D.C., and leaked thousands of sensitive internal files on a dark web site. These included disciplinary records, intelligence reports, and details about confidential informants. The incident was described by cybersecurity experts as one of the most serious ransomware attacks ever against a U.S. law enforcement agency. Investigators had to rapidly assess the scope of the breach, contain the fallout, and communicate with the public while attackers continued to post stolen material. 

In a separate case, 200 gigabytes of data from the Presque Isle Police Department in Maine was leaked online by Distributed Denial of Secrets (DDoSecrets). The dataset contained decades of emails, internal reports, and sensitive law enforcement files. While the organization chose not to make the entire dataset publicly available, the breach was confirmed and highlighted the vulnerability of smaller police departments to cyberattacks. 

These incidents are a reminder that police departments of all sizes are potential ransomware targets and that early detection of leaked data on the dark web can help agencies respond more effectively. 

To use this intelligence responsibly, agencies should keep a few practices in mind: 

  • Legal Compliance – Work only with vetted intelligence sources that follow DOJ guidance. 
  • Evidence Handling – Ensure dark web data is preserved in ways that maintain chain of custody. 
  • Training – Provide investigators with skills to interpret darknet information and link it to real-world cases. 
  • Partnerships – Collaborate with state, federal, and fusion center partners to share findings. 

Your city is likely being mentioned on the dark web, whether in a passing conversation or as part of a targeted plot. For local law enforcement, this is no longer an obscure cyber issue. It is a street-level problem with online roots. 

By incorporating dark web monitoring into investigative workflows, agencies can spot emerging threats, connect them to local activity, and act before harm occurs. In a world where crime moves between the physical and digital in seconds, ignoring the darknet is no longer an option. 


Learn how DarkOwl informs law enforcement investigations.

Threat Intelligence RoundUp: August

September 02, 2025

Our analyst team shares a few articles each week in our email newsletter, which goes out every Thursday. Make sure to register! This blog highlights those articles in order of popularity in our newsletter – what our readers found the most intriguing. Stay tuned for a recap every month. We hope sharing these resources and news articles emphasizes the importance of cybersecurity and sheds light on the latest in threat intelligence.

1. ‘Chairmen’ of $100 million scam operation extradited to US – Bleeping Computer

In an August 8 press release, the United States Attorney’s Office for the Southern District of New York announced the extradition of four Ghanaian nationals for participating in an international criminal organization “that stole more than $100 million from victims via romance scams and business email compromises.” The four individuals were reportedly high-ranking members of a Ghanaian criminal organization that targeted entities in the U.S. between 2016 and 2023. The defendants were extradited from Ghana and arrived in the U.S. on August 7. Read full article.

2. New EDR killer tool used by eight different ransomware groups – Bleeping Computer

According to BleepingComputer, eight different ransomware groups have been observed using a new endpoint detection and response (EDR) killer believed to be an evolution of the “EDRKillShifter” developed by RansomHub. EDR killers are a useful tool for threat actors as they turn off security products on targeted systems to help remain undetected. As of this writing, the eight groups seen using the new tool include RansomHub, Blacksuit, Medusa, Qilin, Dragonforce, Crytox, Lynx, and INC. Article here.

Researchers at CTM360 have identified a new malware campaign dubbed “FraudOnTok” that targets users through fake TikTok Shops with SparkKitty spyware. According to the cybersecurity company’s report, the campaign is characterized by a dual attack strategy combining both phishing and malware to target TikTok users. The threat actors utilize replicas of TikTok Shop, TikTok Wholesale, and TikTok Mall to deceive users into believing they’re using the genuine platforms before stealing cryptocurrency wallets. Read more here.

Researchers at SEQRITE Labs have observed a cyberespionage campaign targeting Russian aerospace and defense industries. According to the company’s report, the campaign has specifically targeted employees at Voronezh Aircraft Production Association (VASO), one of Russia’s largest aircraft production entities. The activity has been dubbed “Operation CargoTalon” and functions by delivering a backdoor called EAGLET to exfiltrate data. The threat actor is currently being tracked as UNG0901. Read here.

5. Cybercrime Groups ShinyHunters, Scattered Spider Join Forces in Extortion Attacks on Businesses – Bleeping Computer

Researchers at ReliaQuest have observed a shift in tactics used by the hacking group ShinyHunters that suggests possible collaboration with the Scattered Spider group. Following a year of limited activity, ShinyHunters’ campaigns resurged this summer with a series of attacks against Salesforce customers. These recent operations have used techniques previously observed in attacks attributed to Scattered Spider. Specifically, these have included impersonating IT support staff, using apps that masquerade as legitimate tools, VPN obfuscation, and “Okta-themed phishing pages to trick victims into entering credentials during vishing call.” Learn more.

6. Hacker extradited to US for stealing $3.3 million from taxpayers – Bleeping Computer

In an August 5 press release, the U.S. Department of Justice announced the extradition of a Nigerian national to the U.S. from France “in connection with hacking, fraud, and identity theft offenses.” According to the statement, the subject participated in multiple fraud schemes, including one targeting U.S. tax businesses to defraud the IRS since at least 2019. The scheme involved other Nigeria-based co-conspirators who used spear phishing emails to hack “several U.S. based businesses located in New York, Texas, and other states.” Read full article.

7. CERT-UA Warns of HTA-Delivered C# Malware Attacks Using Court Summons Lures – The Hacker News

In an August 4 press release, Ukraine’s Computer Emergency Response Team (CERT-UA) warned of a series of cyber attacks carried out by the threat actor UAC-0099 against “state authorities, the Defense Forces, and enterprises of the defense-industrial complex of Ukraine.” As noted in the statement, the threat actor delivers MATCHBOIL, MATCHWOK, and DRAGSTARE malware via phishing emails. The emails are predominantly sent from UKR.NET addresses and are presented as official “court summons.” Read full article.

8. US sanctions North Korean firm, nationals behind IT worker schemes – Bleeping Computer

In a July 24 press release, the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) announced the sanctioning of the North Korea-based Korea Sobaeksu Trading Company and three associated individuals for their participation in fraudulent remote IT worker schemes. As previously noted in DarkOwl’s Weekly Intelligence Summaries, the DPRK government uses these IT worker schemes to generate illicit revenue. The IT workers involved in the scheme use “fraudulent documents, stolen identities, and false personas to obfuscate their identities and infiltrate legitimate companies.” Learn more.


Make sure to register for our weekly newsletter to get access to what our analysts are reading on a weekly basis.

Copyright © 2026 DarkOwl, LLC All rights reserved.
Privacy Policy
          DarkOwl is a Denver-based company that provides the world’s largest index of darknet content and the tools to efficiently find leaked or otherwise compromised sensitive data. We shorten the timeframe to detection of compromised data on the darknet, empowering organizations to swiftly detect security gaps and mitigate damage prior to misuse of their data.