Cyber News UK

Are Cyber Criminals Using AI Tools More and More to Attack English Targets?

Why AI-enabled cybercrime changes the defensive job

The UK has to defend at “internet speed”, not “committee speed”

The NCSC’s assessment is blunt: AI will “almost certainly” make elements of intrusion operations more effective and efficient, increasing the frequency and intensity of threats, and creating a “digital divide” between organisations that keep up and those that don’t. 

That matters because English targets (government, councils, NHS suppliers, SMEs, big brands) are already facing huge volumes of commodity attacks. The UK Government’s Cyber Security Breaches Survey 2025 estimates that UK businesses experienced ~7.87 million phishing-related cyber crimes and ~595,000 hacking-related cyber crimes in the previous 12 months. 

So the government’s defences need to do two things at once:

  1. Reduce national exposure at scale (because volume wins).
  2. Raise the cost of success for smarter, targeted attackers (because they will keep coming anyway).

Houses of Parliament

What UK government defences need to do to contain AI-accelerated attacks

1) Double down on “prevention at scale” (and expand it beyond government)

The UK already has a strong model here: Active Cyber Defence (ACD). The NCSC reports over 1.2 million phishing campaigns removed, with half taken down within an hour, and 26,000+ phishing campaigns targeting central government disrupted, with 79% resolved within 24 hours. 

To keep AI-enabled crime from winning on volume, the government needs to:

  • Widen ACD-style services (takedown, warning, protective DNS, web/mail checks) so more UK organisations benefit by default, not just those already “plugged in”. 
  • Industrialise scam infrastructure removal (domains, URLs, lookalike sites), because AI will generate more convincing lures faster than human moderation teams can blink. 
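
Industrialised takedown depends on spotting lookalike domains quickly, before the lures land. A minimal sketch of the idea, using a hypothetical list of protected brand domains and a simple string-similarity ratio (real takedown pipelines also handle subdomain abuse, homoglyphs, and certificate transparency feeds):

```python
from difflib import SequenceMatcher

# Hypothetical brand domains a takedown service might protect.
PROTECTED = ["hmrc.gov.uk", "nhs.uk", "gov.uk"]

def lookalike_score(candidate: str, protected: str) -> float:
    """Similarity ratio between a newly seen domain and a protected one."""
    return SequenceMatcher(None, candidate, protected).ratio()

def flag_lookalikes(new_domains, threshold=0.8):
    """Return (domain, matched_brand) pairs similar enough to warrant review."""
    flagged = []
    for domain in new_domains:
        for brand in PROTECTED:
            if domain != brand and lookalike_score(domain, brand) >= threshold:
                flagged.append((domain, brand))
    return flagged
```

A typo-squat such as `hrnrc.gov.uk` scores well above the threshold against `hmrc.gov.uk` and gets queued for takedown review; unrelated registrations fall through.
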

2) Make “basic cyber hygiene” non-optional for public sector and suppliers

Most successful intrusions still rely on depressingly normal weaknesses (unpatched internet-facing systems, weak authentication, exposed remote access, poor supplier controls). AI mainly makes attackers faster and more scalable, not magically brilliant. (IBM’s X-Force reporting makes the same point: AI accelerates exploitation and automation.) 

What government defences need to enforce (not just recommend):

  • Phishing-resistant MFA for privileged access (and ideally for anyone with access to sensitive systems).
  • Hard baselines for patching, endpoint hardening, secure configuration, and logging across government estates and government-adjacent suppliers.
  • Routine attack-surface management (continuous discovery of exposed services, misconfigurations, and risky identity setups).
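
Attack-surface management is, at its core, continuous triage of an asset inventory against the baseline. A deliberately simple sketch, assuming a hypothetical inventory export with `host`, `internet_facing`, `port`, and `mfa` fields (real ASM tooling discovers these assets itself rather than trusting a spreadsheet):

```python
# Ports that should essentially never be internet-facing in a hardened estate.
RISKY_PORTS = {23: "telnet", 3389: "rdp", 445: "smb"}

def triage(inventory):
    """Return findings for internet-exposed services that breach the baseline."""
    findings = []
    for asset in inventory:
        if not asset.get("internet_facing"):
            continue  # internal-only assets are out of scope for this check
        if asset.get("port") in RISKY_PORTS:
            findings.append((asset["host"], f"exposed {RISKY_PORTS[asset['port']]}"))
        if asset.get("mfa") is False:
            findings.append((asset["host"], "no MFA on exposed service"))
    return findings
```
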

3) Tighten the supply chain, especially the “IT plumbing” companies everyone forgets

A lot of modern compromises land through the organisations that run other organisations: MSPs, IT outsourcers, helpdesks, cloud and hosting providers.

The UK is already moving in this direction via the Cyber Security and Resilience Bill programme, which is designed to update and strengthen obligations (including incident reporting and regulator powers) for parts of the UK’s essential services and key digital/managed service supply chains. 

What needs to happen in practice:

  • Designation of “critical suppliers” (managed service providers and other key providers) with mandatory controls and auditability.
  • Faster, clearer incident reporting requirements, because early warning is one of the only ways to beat AI-accelerated intrusion chains. 

4) Treat identity as the perimeter (because the perimeter is mostly a myth)

AI helps attackers craft better pretexts and run faster social engineering loops. That makes identity security the frontline:

  • Stronger authentication (especially phishing-resistant methods).
  • Continuous identity monitoring (impossible travel, anomalous token use, privilege escalation patterns).
  • Aggressive privilege minimisation (less standing admin, more just-in-time access).

This is the unglamorous work that stops “quick and easy” from becoming “quick and catastrophic”.

AI Lab

5) Build AI-ready detection and response (SOC modernisation, not “buy a tool and pray”)

The NCSC warns AI will widen the gap between defenders who can keep pace and those who can’t. 
So government defence needs:

  • Centralised telemetry (endpoint, identity, cloud, email) with retention long enough for investigations.
  • Threat hunting as a routine capability (not an emergency hobby).
  • Automation for triage (SOAR-style playbooks) so analysts focus on the weird, not the repetitive.

AI will be used on defence too, but the win comes from good data + disciplined operations, not from sprinkling “AI” on a slide deck.
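
A SOAR-style playbook is ultimately just a rule that maps an alert to an action, so the repetitive cases never reach a human. A minimal sketch using a hypothetical alert schema (`type`, `severity`, `asset`); real playbooks chain enrichment and containment steps rather than returning a label:

```python
def triage_alert(alert):
    """Route an alert dict to an action; only the unusual reaches an analyst."""
    if alert["type"] == "phishing_url" and alert["severity"] == "low":
        return "auto_block_and_close"       # known-bad URL: block it, no human needed
    if alert["type"] == "impossible_travel":
        return "force_reauth_and_escalate"  # identity anomaly: contain first, review after
    if alert["severity"] == "critical":
        return "page_on_call"               # straight to humans, immediately
    return "analyst_queue"                  # everything else: normal review
```
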

6) Security for AI systems themselves (because the UK is deploying AI everywhere)

As AI becomes embedded in public services and business processes, attackers will target:

  • model supply chains (data, dependencies, plugins),
  • identity and access to AI tools,
  • prompt injection / data exfiltration pathways (where AI becomes an accidental leak machine).

The NCSC specifically advises organisations to implement strong cyber security across AI systems and their dependencies and keep defences up to date. 
That means government needs standard patterns for:

  • safe deployment (sandboxing, least privilege, monitoring),
  • data governance (what can/can’t go into tools),
  • testing and red-teaming AI integrations.
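
The data-governance point can be made concrete with an outbound-prompt filter that redacts obvious sensitive tokens before text leaves for an external AI tool. A deliberately simple sketch; the patterns and labels are illustrative, and a real deployment needs proper DLP tooling, not two regexes:

```python
import re

# Illustrative patterns only: a UK National Insurance-style number and an email.
PATTERNS = {
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace each sensitive match with its label before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```
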

7) Reduce the profitability of attacks (ransomware economics)

If criminals get paid, they come back. If they don’t, they look elsewhere. The UK’s policy direction is increasingly about limiting the ability of public bodies and critical operators to fund attackers and forcing better resilience through regulation. 
That needs to be paired with:

  • mandatory, tested backups and restore drills,
  • strong segregation to stop “one compromise = whole estate encrypted”,
  • rehearsed crisis playbooks across government and critical suppliers.

What “success” looks like in 2026–2027

Containment beats perfection

AI will raise the tempo. The realistic goal is:

  • Stop the bulk, fast (takedown, blocking, warnings at scale). 
  • Detect earlier in the kill chain (pre-ransomware alerts, identity anomalies, exploit activity). 
  • Make recovery routine (so incidents don’t become national embarrassments with month-long recovery tails).

The UK already has credible building blocks (especially ACD). The hard part is forcing consistent adoption across the public sector and the supply chain, because “voluntary best practice” has a long history of meaning “we’ll get to it after the next incident.”
