This podcast features DarkOwl Regional Director and OSINT expert Lindsay Whyte and Jennifer Woodard, Chief Product & Technology Officer at Logically.ai, who discuss how AI is accelerating cybercrime by powering malicious large language models that generate phishing emails, malware, and ransomware with little user skill required. These tools dramatically scale attacks, leading to everything from personal account takeovers to multimillion-dollar business email compromise and widespread ransomware incidents. While the threat is growing, Lindsay emphasizes that awareness, simple verification practices, strong security culture, and international cooperation can still meaningfully reduce risk, offering some optimism amid an increasingly complex cyber landscape.
Jennifer: Welcome back to AI on the Record, the podcast that brings together voices from media, policy, enterprise and civil society to explore where influence is heading, how AI is being governed and what decision makers should be paying attention to next. I’m Jennifer Woodard, your host.
Now, today, we’re going somewhere most of us don’t often go – into the darker side of technology, the shadowy corners of the internet and the world of cyber. And we’ll be looking at how AI is now intersecting with these spaces in ways that are both fascinating and, frankly, alarming. Let’s get into it.

So, with me today is Lindsay Whyte of OSINT UK. He’s an expert in open-source intelligence and cybercrime investigations. Lindsay, welcome to the show.
Lindsay: It’s a pleasure to be here. Thank you, even if the topic is a bit dark.
Jennifer: Indeed. Indeed, it is a little bit dark, but thank you so much for being here. Could you give us a quick intro to your background and what you do?
Lindsay: Sure thing. So, I’m a former British soldier and now I’m the co-founder of the OSINT UK community, which is a volunteer-run non-profit seeking to bolster the UK’s intelligence capabilities by reintroducing in-person interactions into the world of security, while also crowdsourcing new innovations in the rapidly growing world of open-source intelligence technology. My day job is working for DarkOwl, which is a leading darknet intelligence collections company, actually founded by the same person that founded the Tor Project itself. So, we illuminate darknet data for governments and security professionals around the world.
Jennifer: That’s very interesting – incredible to hear. And, you know, as you’ve explored these spaces, I’m assuming you’ve seen the technology evolve, and now that we’re in the age of AI, AI has become part of this cybercrime and dark web story. Could you help us understand a little bit about how cyber criminals are using AI, and whether that’s something that we should actually be worried about?
Lindsay: Absolutely. I think it’s a great place to start because, you know, I think most people have at least heard of ChatGPT by now. And that’s what we call a large language model – basically, a very sophisticated AI that can understand and generate human-like text. Now, big companies like OpenAI and Anthropic build in what are called guardrails. These are rules that prevent their AI from helping you do bad things.
So, if you ask ChatGPT to hack someone’s bank account, it will politely refuse. But malicious large language models (LLMs) are the sort of evil twins – they’re built from scratch or modified specifically to remove those guardrails. They’ll happily help you craft phishing emails, write malware, generate ransomware code, ransomware notes, you name it. Really. What’s interesting, of course, is that this malicious LLM ecosystem is already selling its software in subscription form, so you can buy malicious LLMs on a monthly plan, an annual plan, or for lifetime access. I mean, there’ll probably be Christmas discounts before long. So, it’s basically cybercrime as a service, as the security industry has always known it, but now with that AI superpower. Yeah, I wish I was joking, but that’s the reality of it.
And, I guess to understand why this matters, we need to talk about the dual-use dilemma, which I know, Jennifer, you probably know a lot more about from the policy perspective. Fundamentally, the dual-use dilemma in AI is about how the exact same technology can be used for good but also weaponized for harm. It’s a little like nuclear physics – something which can power a city and transform a society, but which can also be used in weapons to level a city. AI has to be thought of, I think, in the same kind of way. The same capabilities that allow a company to automate customer support, or help students write better essays at university, also help criminals scale up their attacks. So even if the technology is neutral, the intent is not.
So, I guess this is where it gets pretty interesting, because the same linguistic precision that makes AI great at university essays and helping write emails can also make incredibly convincing phishing emails. The same coding ability that helps developers debug software can customize malware in the same amount of time, and that’s what makes it tricky from a regulatory perspective. I guess what really concerns me is the way that AI is now democratizing cybercrime, because it used to be that attacks required a certain level of skill – language skills, a certain amount of coding knowledge, a deeper understanding of social engineering in the culture in which you’re trying to carry out the attack. This is now available to anyone. We’re talking about the skill level of someone who maybe knows how to use Google and understands basic computer concepts. That’s all you need now. The days of needing to be an expert coder or a wizard of some description to run a sophisticated attack are over, and that’s the reality that we’re living with. As someone once put it to me: would you rather face one expert swordsman or a thousand people with guns? These malicious LLMs are giving everyone a gun. It’s scale over skill, and from a cyber defense perspective, that’s pretty terrifying, because attacks that used to take days or weeks of research and hours of coding can now be done in minutes by someone who has no prior experience in the field.
Jennifer: Wow, that’s really jarring. And like you said, that’s the reality that we’re living in right now. These aren’t even hypothetical risks anymore. I remember years ago people talking about this being on the horizon, but we’re actually living with it right now. It seems like it almost snuck up on us in some cases. So, the tools that you’re talking about that enable these kinds of malicious actions – they’re actively in use. Could you walk us through some examples of what those tools look like? What are they actually called? What are the methods? Could you just kind of walk us through that?
Lindsay: Yeah, yeah. Tragically, that is the case – these already do exist. Two big names are WormGPT and KawaiiGPT. KawaiiGPT has emerged in the last few weeks; WormGPT has been around for a while, and I’ll talk about WormGPT specifically because I think it really opens everyone’s eyes. This is something that appeared, I think around summer 2023, on underground forums like Hack Forums. For those who don’t know, Hack Forums is pretty much exactly as it sounds – not like friendly Reddit threads. These are places where cyber criminals congregate and share ideas. And WormGPT was being hawked a bit like the latest smartphone. The marketing, I think, even included a creepy little character with red eyes – it was the most unsubtle kind of thing – but basically what they were advertising was an uncensored alternative to mainstream ChatGPT, with no ethical boundaries whatsoever.
And it was built on an open-source model, fine-tuned specifically on malicious data sets – malware code, phishing email templates, exploit write-ups and that sort of thing. So, it was mainly being used for business email compromise. That’s where criminals impersonate a CEO or a company supplier or something like that, and trick employees into sending sensitive information or wiring money out of the company as part of a scam. Normally, with the business email compromise emails and messages that we receive, there are telltale signs that it’s a scam – weird grammar, awkward phrasing – and that would tip us off. But WormGPT could, and can, generate perfectly fluent, professional-sounding messages which even the most savvy employee could fall for. And, ironically, WormGPT became a bit of a victim of its own success, because the media exposure it got was so big that the creator actually shut it down quite soon after setting it up – it got so much heat. But of course, the problem with that is the cat was already out of the bag, and a lot of copycat GPTs appeared on the market and other versions started coming out. Currently you’re looking at WormGPT 4, which is more commercialized. It’s got a really slick website.
Remember, I’m talking about a malicious piece of technology here. They have a subscription pricing model – I think it’s like 50 bucks a month, a hundred bucks a year and 200 bucks for lifetime access. So, it’s very affordable, which is what makes it so problematic. It’s got a big Telegram ecosystem that’s growing. It’s running itself like a legitimate software company. And people have tested this: it can spit out ransomware notes and ransomware scripts with encryption to infect computers. The ransomware note it can generate provides a level of detail where it’s instructing a victim how to buy Bitcoin to pay the ransom, if they don’t already know how to do it, and what sites to use. It’s very smart.
As I mentioned, there’s another one called KawaiiGPT – I think I’m pronouncing that right; basically, just Google KawaiiGPT. And that takes a slightly different approach. It markets itself as a friendly, playful chatbot, but it’s completely free. It was on GitHub until very recently – it may still be there – and basically allows people to download it for free. Some security researchers have legitimately tested its power by asking it to write scripts for lateral movement. Lateral movement is where an attacker gets into one computer in a network and then crab-walks into other computers on that network, like dominoes falling. It’s able to do all of these things, which is pretty terrifying, really, because all of this can be generated in a few seconds. So, yeah, I think what’s worrying overall about both of these tools is that they’re creating, like any professional tool these days, an ecosystem of developers, of communities, of people giving feedback and the product being improved. These Telegram channels read a bit like LinkedIn for criminals. It’s pretty surreal.
Jennifer: Yeah, it’s democratization in the worst possible sense, right? I mean, it’s really the ability to scale this like never before, and the barrier to entry being so low that just about anybody has access to these types of tools – anyone who wants to do harm. When you lay it out like that, it’s really scary how big this impact is. So, you mentioned a little bit about the victims. You referenced corporate CEOs, for example. What happens to the victims of these types of attacks? What’s the aftermath of something like this happening?
Lindsay: Well, I mean, the impact does range from the corporates that you mentioned right down to individuals who fall for this. It can be anything from just really annoying to completely devastating and life-destroying.
I mean, at the lower end, a successful phishing attack that compromises an individual account – an email gets hacked or someone’s social media gets taken over – is embarrassing and potentially financially damaging. It might be recoverable, but people can lose their accounts for a while, or their identity can be stolen, so it can be a real hassle. It may not necessarily be life-destroying, but when you scale up the chain and start looking at business email compromise – which, as I said, was the main initial focus of WormGPT – that’s when it gets very serious, because a company employee can get tricked into wiring money to a scammer’s account. We’re talking six, seven figures. I’m not exaggerating. Companies have literally gone bankrupt because of successful business email compromise attacks. And imagine you’re the CFO and you get what looks like a legitimately urgent request from the CEO to wire funds for an acquisition or something else. That money is then gone. It’s irretrievable, and you’re left explaining to the Board how you just wired all of that money out of the business.
And then at the top end, you’ve got ransomware attacks, which is where a lot of cybercrime is focused right now. An attacker gets into a network, they spread through the system, they encrypt everything, and they demand payment to unlock it. We’ve seen this happen to hospitals – doctors not being able to access patient records – and to manufacturers shutting down operations for weeks, and for manufacturers, operations being shut down means millions and millions of pounds lost in production. School districts not being able to access their pupil records before exams, that kind of thing. The impact then isn’t just financial. It’s emotional as well, and that’s pretty immense. So, LLMs (large language models) are making all of these things easier – convincing language generation for phishing emails, instant code generation for malware. These tools are accelerating every single phase of an attack. And as I said, what used to take a skilled team days and weeks can now be done by one person in a matter of hours.

Again, imagine someone who is maybe a disgruntled former employee or – I don’t like to say a teenager stuck in their bedroom, because that’s such a stereotype – but you don’t need much to trigger someone to pay that $50 monthly subscription for one of these malicious GPTs. You just need a fraction of these people paying and getting access, and suddenly you’ve got an enormous, enormous problem on your hands. And the companies behind them, of course, are not hobbyists themselves – they are very professional business operations with customer support and engineers and all that sort of thing. Just because you and I could use it, and people without much knowledge can use it, that does not reflect the level of sophistication on the other side of the fence. They are professional businesses. Right? That’s something that people often forget. These people really know what they’re doing. They’re very well organized. They learn how businesses work. They’ve worked in legitimate businesses in the past, more often than not.
Jennifer: And it sounds like they’re cutting-edge technology developers as well – not just a mom-and-pop shop. Wow. That’s hard to hear, quite alarming. But, you know, in spite of all this, I assume that something is being done to mitigate these risks, right? This is a risk to every sector, every part of the globe. It’s a risk to economies worldwide. What is happening on that front? Can these tools actually be stopped, or is this a new reality that we need to adapt to?
Lindsay: This is the problem, I suppose – it gets complicated because there is no silver bullet. If we look to the legal and regulatory side of things, we’re in murky waters, and you’ll probably know this: okay, the original WormGPT, this malicious LLM, was shut down voluntarily by its creator, but we do have other GPTs on GitHub and still running. So, you’re going to have to ask legitimate code-hosting websites to police what kind of code people can share. And that opens up a whole can of worms, to pardon the pun, because here’s the thing: these exact same tools are crucial for legitimate penetration testing.
Penetration testing is an absolutely vital part of cybersecurity posture, because penetration testers are the good guys who are hired to break into a system to find vulnerabilities so that you can bolster your defenses. So again, we’re into that dual-use dilemma. The tool itself is neutral, and that makes regulation incredibly difficult in my opinion, because how do you ban something that has a legitimate use? But I guess there are other approaches that need to happen. Again, I’m not an expert on it, but developers of mainstream AI models need to continue with their safety measures – making it harder to jailbreak these systems and that sort of thing. Law enforcement needs to get better at tracking the financial flows – identifying the people behind these cryptocurrency flows and pursuing them, because as part of my day job at DarkOwl, that’s what we spend our time doing: illuminating dark web forums and cryptocurrency flows. And then, I guess most importantly, promoting international cooperation on these subjects, because none of this means anything if we don’t have some global approach to countering it – cybercrime is, in its nature, borderless. You’re always going to attack the jurisdiction that is as far away from your own as possible, right? That’s just common sense if you’re a criminal. So that’s pretty important. Obviously, there are other things on the regulatory side, like the EU AI Act, which I’m not quite as familiar with.
But for individuals, there’s quite a bit you can do. I want to be positive here, and this is where I get optimistic, because even the most convincing phishing email fails if people are trained to verify requests through secondary channels. If your CEO sends you an email asking for an urgent wire transfer, picking up the phone and calling them is what you need to do, and that’s where the AI model fails, because simple practices like this will defeat AI-generated attacks. In-person, face-to-face verification works as well. For companies specifically, yes, there are layered defenses – various cybersecurity practices you can put in place, good security hygiene, a healthy amount of skepticism. These are all things that will help. But fundamentally, this is an ongoing arms race. Attackers develop new tools, defenders adapt. Attackers evolve, defenders respond. It’s just going to keep going on and on. It’s been like that in cybersecurity forever, and so, in that sense, nothing’s really changed.
Jennifer: Right? It’s about staying one step ahead of the bad guys. It’s the same type of situation as in cyber for the past, you know, 20, 30 years. Yeah. I’m glad that you bring a little bit of optimism into this, because given how difficult this is – it sounds almost insurmountable – I’d like to hear: what is something that actually gives you hope? Something that makes you think, from a technology perspective, that we can actually make a difference here?
Lindsay: Yeah, I think there is some hope. And just to flesh out my optimism on this: increased awareness helps things tremendously. Conversations like this, where we’re educating people about these threats, do make a real difference. As someone said, an informed public is the best defense. When people understand that emails can be generated by AI, that perfect grammar is no longer a guarantee of legitimacy, that verification is essential and that sort of thing – this really does change the game. You can have the most sophisticated technical defenses in the world, but if your employees know to pick up the phone and verify a wire transfer request, you have defeated, there and then, a multi-billion-pound AI-powered attack with a 30-second phone call.
It’s not necessarily about blocking specific tools – I think that’s a losing game. It’s about building systems and cultures to be resilient at scale, and understanding the speed at which AI evolves. And bringing back human interactions – I’m a big believer in this, whether we do it in government or in our own companies. Nothing can beat human interaction to verify something 100%. One thing we haven’t really spoken about, and something I’ve always worried about, is the way in which nation-state actors and governments are actually funding and promoting a lot of this malicious LLM use. Sometimes I think democracies look to the digital world as a form of efficiency, and that’s fair – it’s changed everything, it’s been revolutionary. But we may be entering a period where it’s giving us diminishing returns, and we need to return to more in-person interactions, in-person verification. What that looks like, I’m not entirely sure, but you always have that option. And I think it’s important to understand and recognize that relying on digital systems for everything can be counterproductive.
There are things that are keeping me up at night – the accessibility, mainly: something that used to need a lot of skill doesn’t need it anymore. Those barriers are gone. But there is something that we can rely on, and that’s the human element – both the biggest weakness and the greatest strength that we have.
Jennifer: Yeah, that is actually encouraging, reassuring. You brought up some topics that bring the optimism back to the conversation. So, before we go, I’d like to ask you, as I ask all our guests: if listeners could take one thing away from today’s conversation about AI and cybercrime – the one thing they really, really need to remember – what should it be?
Lindsay: What I would suggest is that people start to really think in a hybrid mindset when building technology, managing people, improving society. Don’t rely on technology to save you, and likewise, don’t think that technology is going to ruin you. The fact is, it’s just another tool. Are we building a society – and are you building a business, I suppose – that takes into account all of these various facets? Sorry, I can’t be more specific than that. I’m still learning a lot about AI. I can’t claim to know everything about how AI is being used within the cybercrime world – it’s evolving every second – but I think we need to understand and appreciate the benefits of thinking holistically, even when talking about the most digital of phenomena.
Jennifer: And that is a great way to end it, because that’s something that’s in our hands. It’s all about understanding awareness, educating ourselves, and kind of staying ahead of the curve. So, thank you so much, Lindsay Whyte, for joining me today on AI On the Record. It was a pleasure having you here. Even though the topic was a little bit dark, there is some hope for the future, it sounds like. And thank you so much for joining us.
Lindsay: It’s a pleasure, Jennifer. Thank you very much indeed.
Jennifer: That’s it for AI on the Record. Thanks so much to Lindsay Whyte for scaring us a little, but also adding a little hope in the struggle of good versus bad in the world of AI. If you found this conversation valuable, share it with someone who thinks deeply about tech, trust, and the future of information. Until next time, I’m Jennifer Woodard. Thanks for listening.