After Anthropic’s Claude Mythos reveal, analysis warns AI’s bigger threat is speed and scale of old tactics

Fears of self-directed cyberattacks surged this month after Anthropic's April 7, 2026, announcement that its Claude Mythos model had autonomously identified and exploited zero-day vulnerabilities in every major operating system and web browser, chaining bugs into sophisticated sequences and producing working exploits, including a browser exploit and a remote code execution exploit.
Yet a new analysis argues the more immediate danger looks less like sci-fi and more like a volume problem: generative AI is making known attack playbooks faster and cheaper to run and putting them in the hands of more operators. The piece contends that the global rise in AI-enabled activity is striking but often misunderstood.
CrowdStrike reported an 89 percent increase in attacks by AI‑enabled adversaries from 2024 to 2025, but most relied on familiar tactics, techniques, and procedures rather than novel methods. In this view, AI’s primary contribution today is efficiency—adding speed, volume, and noise—especially in reconnaissance and delivery.
According to the analysis, large language models can rapidly aggregate and parse open-source information, allowing adversaries to map organizations, study publicly disclosed vulnerabilities, and spot entry points far faster than unaided human analysts could.
Microsoft Threat Intelligence has observed North Korean actors using large language models to research public vulnerabilities and profile high‑value targets, improving their understanding of technical details and attack vectors.
More broadly, attackers now begin scanning for newly disclosed flaws within minutes of a Common Vulnerabilities and Exposures (CVE) announcement, often before defenders have read the advisory, compressing the response window. On delivery, the shift is one of execution, not invention.
Generative AI can draft grammatically clean, context‑aware phishing at scale with minimal operator input. Microsoft has found targets are 4.5 times more likely to click on AI‑generated phishing emails than on traditionally crafted ones.
Tools advertised on dark‑web forums claim end‑to‑end support for phishing campaigns, and leaked internal chats from the ransomware group Black Basta show members discussing the use of ChatGPT to write phishing messages. The analysis says not all adversaries benefit equally.
While nation‑state actors engaged in espionage—such as Russia and China—have adopted generative AI to enhance reconnaissance, craft phishing content, and conduct influence operations, these uses represent refinements to capabilities rather than a fundamental shift.
The argument posits that mid-tier actors stand to gain the most, since AI flattens skill requirements and lowers costs. The takeaway is a practical one: defenders who fixate on hypothetical AI-powered zero-days while neglecting AI's acceleration of established techniques risk preparing for the wrong fight.
The analysis urges a recalibration toward rapid patching, faster threat triage, and counter‑phishing measures tailored to a world where attackers move in minutes, not days.
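To make that compressed window concrete, the sketch below (not drawn from the analysis itself) shows one way defenders might automate the first step of rapid patching: polling NIST's public NVD CVE API 2.0 for entries published in the past hour and flagging any whose description mentions software they run. The endpoint and publication-date parameters are NVD's documented API; the inventory list, the one-hour window, and the simple keyword matching are illustrative assumptions, not a recommended triage pipeline.

```python
# Minimal sketch: poll NVD's CVE API 2.0 for CVEs published in the last hour
# and flag any whose description mentions software from a (hypothetical) inventory.
# The endpoint and pubStartDate/pubEndDate parameters are NVD's documented API;
# the INVENTORY list and the one-hour window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
INVENTORY = ["openssl", "apache http server", "microsoft exchange"]  # hypothetical


def recent_cves(window_hours: int = 1) -> list[dict]:
    """Fetch CVEs published within the last `window_hours` hours (UTC)."""
    now = datetime.now(timezone.utc)
    start = now - timedelta(hours=window_hours)
    params = {
        # NVD expects extended ISO-8601 timestamps for these parameters.
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": now.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


def flag_relevant(vulnerabilities: list[dict]) -> list[str]:
    """Return IDs of CVEs whose English description mentions inventory items."""
    flagged = []
    for item in vulnerabilities:
        cve = item["cve"]
        description = next(
            (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"), ""
        )
        if any(product in description.lower() for product in INVENTORY):
            flagged.append(cve["id"])
    return flagged


if __name__ == "__main__":
    hits = flag_relevant(recent_cves())
    print(f"{len(hits)} recently published CVEs may affect tracked software: {hits}")
```

In practice, a real deployment would match against a maintained asset inventory (for example, CPE identifiers rather than keywords) and feed hits into ticketing or patch-orchestration tooling; the point of the sketch is only that the advisory-to-action gap the analysis worries about can be measured in minutes once the polling is automated.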
