
The AI Tools Powering Cyber Attacks on the UK (and Why It’s Getting Difficult to Spot Them)

If you work, bank, shop, date, or do school runs online in Britain, you’re already in the blast radius of AI-enabled crime. The shift isn’t “hackers have become geniuses overnight”. It’s that AI is turning old tricks—phishing, impersonation, fraud, data theft—into cheap, scalable, professionalised services.

European law enforcement has been blunt: large language models (LLMs) and “generative AI” are improving the effectiveness of social engineering by tailoring messages and automating criminal processes. UK ministers are also warning that deepfakes (fake images/video/audio) are now widely used in scams and impersonation—often requiring “little to no technical expertise”. 

Below are the main AI and AI-adjacent “cyber tools” currently shaping attacks that hit UK people and organisations—drawn from public reporting, UK government and European law-enforcement sources, and major security research.


What “AI tools used to hack the UK” really means

Mostly: AI that supercharges social engineering

Despite the Hollywood image, most financially successful cybercrime still starts with deception, not wizardry: a convincing message, a believable voice note, a fake Teams call, or a deepfake video.

Europol’s 2025 IOCTA report highlights exactly this: LLMs can generate phishing text that matches local language and cultural nuance, improving campaign effectiveness. 

So the “top tools” tend to fall into three buckets:

  • Malicious or “unrestricted” AI chatbots built for scams and crime
  • Mainstream AI models repurposed (via jailbreaks, wrappers, or illicit fine-tuning)
  • Synthetic media tools for impersonation (voice, video, images)


1) “Evil” chatbots: malicious LLMs sold to criminals

WormGPT and the “WormGPT-style” clones

One of the best-known examples is WormGPT—a criminal-branded chatbot marketed for writing phishing and business email compromise (BEC) lures. Reporting in 2025 described new WormGPT variants built by hijacking or wrapping mainstream models (including Grok and Mixtral), then steering them with prompts to behave like an “unrestricted” assistant. 

These services matter because they:

  • improve grammar, tone, and persuasion (making lures look “corporate”)
  • generate large volumes of tailored messages quickly
  • lower the skill barrier for scammers targeting UK staff and customers

FraudGPT, EvilGPT, DarkGPT and “brand-name” criminal assistants

Threat reporting frequently groups FraudGPT and similar “dark chatbots” into the same phenomenon: criminal services that mimic helpful assistants, but are designed to produce scam content, impersonation scripts, and other abusive outputs. 



2) Mainstream AI models repurposed for crime

Jailbroken assistants and “wrapper bots”

A key change since 2024: criminals don’t always need to build a model from scratch. They can:

  • use mainstream models,
  • add a “system prompt” wrapper,
  • and market it as a criminal tool.

That’s why the “tool” may look like a Telegram bot or a web panel, while the underlying engine is a known model. Trade reporting (including CSO’s coverage) describes this pattern explicitly: threat actors adapting existing LLMs rather than building bespoke ones. 
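Architecturally, there is nothing exotic about a “wrapper”: it is the same front-end pattern any legitimate chatbot uses—a fixed system prompt pinned in front of every user message before it is sent to a hosted model. A minimal, benign sketch of that pattern (the model name and prompt below are hypothetical placeholders, not references to any real service):

```python
# Illustrative only: the generic "wrapper bot" pattern described above.
# A Telegram bot or web panel front end builds a payload like this and
# POSTs it to a hosted model API. The criminal variant differs only in
# the system prompt it pins in place, not in the engineering.

def build_wrapped_request(user_message: str) -> dict:
    """Build a chat-completion payload with a fixed system prompt
    prepended to the user's message -- the entire 'wrapper' in one place."""
    SYSTEM_PROMPT = "You are a helpful customer-support assistant."  # the wrapper
    return {
        "model": "example-model",  # hypothetical underlying engine
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_wrapped_request("Hello")
# The front end never exposes the model or the prompt to the buyer --
# which is why these services can be rebranded and resold so easily.
```

This is why takedowns of individual “brands” rarely remove the capability: the wrapper is trivial to rebuild around any accessible model.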

AI that speeds up reconnaissance and targeting

Separately, security agencies warn that AI makes it faster to:

  • draft targeted outreach (spear-phishing),
  • translate convincingly,
  • and scale outreach across multiple UK-facing brands.

Europol notes LLMs can automate parts of criminal workflows and help tailor communications to victims. 



3) Deepfake and voice-cloning tools: “hacking” humans, not servers

Impersonation as a growth industry

In February 2026, the UK government described deepfakes as a growing risk to “every person in the UK”, used for fraud, impersonation and harmful abuse—and stressed that the tools are getting cheaper and more available. 

Jess Phillips MP’s quote in that announcement captures the real-world impact (and why businesses should care):

“A grandmother deceived by a fake video of her grandchild… A business defrauded by criminals impersonating executives.” 

This isn’t theoretical. In practice, it shows up as:

  • fake CEO / finance director voice notes demanding urgent payment
  • fake recruiter calls harvesting documents and credentials
  • deepfake videos used to “prove” legitimacy in investment scams


4) Crime-as-a-Service platforms now bake in AI

Phishing-as-a-Service, data brokering, and automated social engineering

The modern cyber economy is modular: one group steals credentials, another sells access, another deploys ransomware or empties accounts.

Europol highlights:

  • social engineering as a prevalent technique,
  • a thriving ecosystem selling access to compromised systems/accounts,
  • and AI/LLMs improving the efficacy of social engineering and automation. 

This matters for the UK because it means attacks scale quickly across:

  • NHS suppliers and local councils
  • SMEs and charities with weaker security budgets
  • retailers and logistics firms with high customer data volume


5) The UK’s “AI security” problem: attackers innovate faster than institutions

Policy and policing are playing catch-up

The Alan Turing Institute’s policy unit, the Centre for Emerging Technology and Security (CETaS), has warned that UK law enforcement is not adequately equipped to prevent, disrupt, or investigate AI-enabled crime at the necessary pace—and argues for faster scaling of AI capability across policing and disruption efforts. 

Meanwhile, security agencies pushing “secure by design” thinking are effectively saying: we can’t repeat the early-internet mistake of bolting security on later.

Rob Joyce (NSA Cybersecurity Director), speaking alongside partner agencies including the UK’s NCSC, put it like this:

“We wish we could rewind time and bake security into the start of the internet. We have that opportunity today with AI. We need to seize the chance.” 


So what are the “top AI hacking tools” hitting the UK right now?

A practical shortlist (without the how-to)

Based on the sources above, the most consequential “AI tools” in UK-relevant crime are:

  1. Malicious LLM chatbots marketed for phishing/BEC and scam-writing (e.g., WormGPT-style services, FraudGPT-style branding). 
  2. Repurposed mainstream LLMs wrapped/jailbroken into criminal assistants (often delivered via Telegram bots or web panels). 
  3. Deepfake generation and voice cloning tooling used for impersonation fraud and abuse. 
  4. AI-assisted social engineering at scale, embedded in the wider “crime-as-a-service” ecosystem (phishing kits, data brokering, access sales). 
  5. AI-enabled automation of victim targeting (localised language, tailored persuasion, multi-language scaling). 

What UK businesses and individuals should take from this

The defensive mindset shift

If the old advice was “watch for bad spelling”, the new advice is:

  • assume the email is well-written,
  • assume the caller sounds real,
  • and verify payment/identity out-of-band every time.

The UK government’s deepfake announcement is explicit that impersonation scams are getting easier to run. Europol’s analysis suggests the same trend at European scale: AI improves tailoring and automation of manipulation. 



We have created professional, high-quality downloadable PDFs, at great prices, specifically for small and medium UK businesses, available on our main website. They include a range of helpful cyber-related documents and real-world scenarios your business might experience, showing what to do and how to protect your business. Find them here.
