WormGPT: the “no-rules” chatbot that supercharges scams — and what it really means for the UK

A quick safety note

WormGPT is marketed for criminal use. This article explains what it is and why it matters without providing instructions that would help anyone attack UK organisations or individuals.


What is WormGPT?

WormGPT is the name given to a paid, underground “ChatGPT-style” chatbot that surfaced in mid-2023, advertised on cybercrime forums as an “uncensored” assistant for writing phishing lures and other illegal content. Reporting and threat research describe it as a black-hat alternative to mainstream AI tools that enforce safety limits.
Sources: The Hacker News (reporting SlashNext research), KrebsOnSecurity, WIRED

One widely cited detail: researchers reported WormGPT was built on GPT-J, an open-source language model from EleutherAI (rather than OpenAI’s GPT models).
Sources: The Hacker News, InfoSecurity Europe

Picture: WormGPT “ad-style” promo imagery (example)

Source page: Abnormal AI


How does WormGPT work?

At heart, WormGPT is less “magic hacking robot” and more a writing-and-automation engine that helps criminals produce persuasive text fast.

It’s a chatbot interface sold as a criminal service

Security journalist Brian Krebs reported WormGPT was sold as a private service on forums, with pricing described in the hundreds to thousands of euros for licences.
Source: KrebsOnSecurity

It removes (or bypasses) mainstream safety guardrails

The selling point is simple: where consumer chatbots often refuse to produce harmful content, WormGPT is marketed as “uncensored”, meaning it will output content that enables scams.
Source: WIRED

It’s especially useful for phishing and BEC (business email compromise)

Threat researchers tested WormGPT by asking it for BEC-style messaging. SlashNext’s Daniel Kelley described the output as “remarkably persuasive” and “strategically cunning” (those are his words, quoted in multiple reports).
Sources: KrebsOnSecurity, WIRED, The Hacker News

Picture: what “BEC bait” looks like in the real world (example)

Source page: UpGuard BEC explainer


How effective is WormGPT at hacking UK networks and users?

The straight answer: it’s more effective at hacking people than hacking networks.

WormGPT doesn’t need a zero-day exploit to do damage. Its power is that it can help attackers:

  • write more believable emails and messages
  • tailor wording to a company, role, or ongoing project
  • iterate quickly (20 variants in minutes, not hours)
  • run scams at scale, including by non-native English speakers

That aligns with what UK authorities say is likely: AI’s near-term cyber impact is expected to show up in reconnaissance, social engineering, and scaling attacks, rather than instant “push-button” intrusions.
UK sources: NCSC — The near-term impact of AI on the cyber threat; NCSC — Impact of AI on cyber threat from now to 2027; NCSC Annual Review 2025 (Chapter 1)

Expert quote (entry barriers)

WIRED quotes Daniel Kelley saying the models are “notably useful for phishing” because they lower entry barriers, especially where English proficiency is a constraint.
Source: WIRED

Why the UK angle matters

UK organisations are heavily exposed to BEC-style fraud because:

  • supply chains are global and email-heavy
  • invoice and payment workflows often rely on trust and speed
  • hybrid working increases reliance on messaging over in-person checks

AI-assisted BEC doesn’t need malware; it needs one convincing payment diversion or one login handed over.


What WormGPT is not (despite the hype)

“Undetectable malware” claims are often marketing

Underground sellers routinely overclaim. Even WIRED notes there are “outstanding questions about the authenticity” of these criminal chatbots, because criminals can also scam each other.
Source: WIRED

So while WormGPT can help generate text (and sometimes code), breaking into a well-defended UK network still usually requires vulnerability exploitation, credential theft, misconfiguration abuse, or human error — not just a chatbot.


What to do about it (practical UK-focused steps)

For UK organisations
  • Harden payment processes: verify bank detail changes out-of-band (not via email).
  • Treat email as an attack surface: tighten SPF/DKIM/DMARC, quarantine lookalikes, make reporting easy.
  • Lock down identity: MFA everywhere (especially email/admin), conditional access, least privilege.
  • Train for “clean” phishing: AI removes typos; teach staff to verify process and context, not spelling.

UK guidance: NCSC guidance and reports hub
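
The SPF/DKIM/DMARC point above is checkable in practice: a domain publishes its DMARC policy as a DNS TXT record at `_dmarc.<domain>`, and the policy tag `p=` tells receiving mail servers what to do with failing mail. As a minimal illustration (the record string and domain below are hypothetical examples, not any real organisation’s policy), here is a small Python sketch that parses such a record into its tags:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record string (e.g. "v=DMARC1; p=reject; ...")
    into a dict mapping tag names to values."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        # Each tag is "key=value"; partition tolerates missing "=".
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

# Hypothetical example record for illustration only:
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.co.uk; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # enforcement policy: "none", "quarantine", or "reject"
```

A `p=none` result means DMARC is in monitor-only mode: spoofed mail using your domain is reported but still delivered. Moving to `quarantine` or `reject` is what actually blocks the lookalike and impersonation mail discussed above.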

For UK individuals
  • If a message creates urgency + secrecy (“don’t call, just pay”), assume it’s a scam until proven otherwise.
  • Protect your email account first (MFA, strong passwords): it’s the master key for resets.

Source links and further reading

WormGPT reporting and analysis
UK and European threat context

We have created professional, high-quality downloadable PDFs at great prices, specifically for small and medium UK businesses, on our main website. These include various helpful cyber-related documents and real-world scenarios your business might experience, showing what to do and how to protect your business. Find them here.
