The rise of accessible AI tools has lowered the barrier to entry for cybercrime, leading to a potential surge in sophisticated and increasingly difficult-to-detect attacks.
The rise of accessible AI tools has lowered the barrier to entry for cybercrime, leading to a potential surge in sophisticated and increasingly difficult-to-detect attacks. While fully automated AI-orchestrated attacks may still be on the horizon, the immediate threat lies in AI's ability to enhance existing scams and phishing campaigns, demanding a proactive and adaptive approach to cybersecurity.
AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED
For years, cybersecurity has been a cat-and-mouse game, with security professionals constantly evolving their defences to stay one step ahead of increasingly sophisticated threats. Now, the game is changing again. The rapid advancement and widespread availability of artificial intelligence (AI) – particularly large language models (LLMs) – are creating new opportunities for cybercriminals to automate, personalise, and scale their attacks in unprecedented ways. While some experts downplay the immediate risk of AI-powered cyber warfare, the reality is that AI is already amplifying existing threats, making online crime easier and potentially far more damaging. This blog post will delve into the key developments in this evolving landscape, explore the business implications, and offer insights from Epoch AI Consulting on how organisations can prepare for the AI-powered future of cybercrime.
The past year has seen a dramatic shift in the cybercrime landscape, driven by the accessibility of AI tools. Here’s a breakdown of the key developments:
In late 2025, cybersecurity researchers discovered a unique file on VirusTotal, a platform used to analyse suspicious software. Dubbed PromptLock, this ransomware employed LLMs at every stage of an attack. It could autonomously generate customised code, map a computer to identify sensitive data, and write personalised ransom notes based on the content of the files it targeted. This meant that each attack would be unique and harder to detect. While PromptLock turned out to be a research project designed to demonstrate the feasibility of AI-powered ransomware, it served as a stark warning about the potential for AI to automate and personalise attacks.
Even before the PromptLock discovery, cybercriminals were quick to adopt generative AI tools following the release of ChatGPT in late 2022. The initial wave of AI-powered attacks focused on creating spam and phishing emails. Microsoft reported blocking $4 billion worth of scams and fraudulent transactions in the year leading up to April 2025, attributing a significant portion to AI-generated content. Researchers estimate that at least half of all spam emails are now generated using LLMs.
Beyond simple spam, AI is increasingly being used to craft highly targeted email attacks, also known as spear-phishing. These attacks impersonate trusted figures to trick employees into divulging sensitive information or transferring funds. By April 2025, at least 14% of targeted email attacks were generated using LLMs, nearly double the percentage from the previous year. This increase highlights the growing sophistication and effectiveness of AI-powered phishing campaigns.
The rise of AI-powered cybercrime has significant implications for businesses of all sizes.
At Epoch AI Consulting, we believe that organisations need to take a proactive and adaptive approach to cybersecurity in the age of AI. It’s not just about deploying the latest security technologies, but also about upskilling your workforce, developing a comprehensive AI strategy, and implementing robust data governance practices.
First, teams need to understand how AI is being used in cyberattacks and how to identify and respond to these threats. Epoch AI offers tailored AI training programs designed to equip your employees with the knowledge and skills they need to defend against AI-powered cybercrime. These programs cover topics such as AI-powered phishing, deepfake detection, and prompt injection attacks.
Second, organisations need a clear AI strategy that addresses the cybersecurity risks associated with AI adoption. This includes developing policies and procedures for AI development and deployment, implementing robust data governance practices, and conducting regular risk assessments. Epoch AI's team of experienced AI consultants can help you develop a comprehensive AI strategy that aligns with your business goals and mitigates your cybersecurity risks.
Finally, organisations need to embrace AI-powered security solutions to enhance their defenses. Epoch AI can help you implement bespoke AI and automation processes, to build custom SaaS solutions that leverage AI to detect and respond to cyber threats in real-time. This includes tools for threat intelligence, anomaly detection, and automated incident response. Our embedded talent model means we can place experts within your organisation to build and maintain these solutions.
Ultimately, the best defence against AI-powered cybercrime is a combination of technology, training, and strategy.
The emergence of AI as a tool for cybercrime is a paradigm shift that demands a fundamental change in how organisations approach cybersecurity. While the threat of fully automated AI attacks may still be on the horizon, the immediate risk of AI-enhanced scams and phishing campaigns is very real. By taking a proactive and adaptive approach to cybersecurity, organisations can mitigate these risks and protect themselves from the evolving threat landscape. The future of cybersecurity is one where AI is both the weapon and the shield.