Back to Industry Insights

AI is already making online crimes easier. It could get much worse.

The rise of accessible AI tools has lowered the barrier to entry for cybercrime, leading to a potential surge in sophisticated and increasingly difficult-to-detect attacks.

Executive Summary

The rise of accessible AI tools has lowered the barrier to entry for cybercrime, leading to a potential surge in sophisticated and increasingly difficult-to-detect attacks. While fully automated AI-orchestrated attacks may still be on the horizon, the immediate threat lies in AI's ability to enhance existing scams and phishing campaigns, demanding a proactive and adaptive approach to cybersecurity.

Related Video

AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

Introduction

For years, cybersecurity has been a cat-and-mouse game, with security professionals constantly evolving their defences to stay one step ahead of increasingly sophisticated threats. Now, the game is changing again. The rapid advancement and widespread availability of artificial intelligence (AI) – particularly large language models (LLMs) – are creating new opportunities for cybercriminals to automate, personalise, and scale their attacks in unprecedented ways. While some experts downplay the immediate risk of AI-powered cyber warfare, the reality is that AI is already amplifying existing threats, making online crime easier and potentially far more damaging. This blog post will delve into the key developments in this evolving landscape, explore the business implications, and offer insights from Epoch AI Consulting on how organisations can prepare for the AI-powered future of cybercrime.

Key Developments

The past year has seen a dramatic shift in the cybercrime landscape, driven by the accessibility of AI tools. Here’s a breakdown of the key developments:

The PromptLock Case: A Glimpse into the Future

In late 2025, cybersecurity researchers discovered a unique file on VirusTotal, a platform used to analyse suspicious software. Dubbed PromptLock, this ransomware employed LLMs at every stage of an attack. It could autonomously generate customised code, map a computer to identify sensitive data, and write personalised ransom notes based on the content of the files it targeted. This meant that each attack would be unique and harder to detect. While PromptLock turned out to be a research project designed to demonstrate the feasibility of AI-powered ransomware, it served as a stark warning about the potential for AI to automate and personalise attacks.

AI-Enhanced Spam and Phishing

Even before the PromptLock discovery, cybercriminals were quick to adopt generative AI tools following the release of ChatGPT in late 2022. The initial wave of AI-powered attacks focused on creating spam and phishing emails. Microsoft reported blocking $4 billion worth of scams and fraudulent transactions in the year leading up to April 2025, attributing a significant portion to AI-generated content. Researchers estimate that at least half of all spam emails are now generated using LLMs.

Targeted Attacks: A Growing Threat

Beyond simple spam, AI is increasingly being used to craft highly targeted email attacks, also known as spear-phishing. These attacks impersonate trusted figures to trick employees into divulging sensitive information or transferring funds. By April 2025, at least 14% of targeted email attacks were generated using LLMs, nearly double the percentage from the previous year. This increase highlights the growing sophistication and effectiveness of AI-powered phishing campaigns.

Business Implications

The rise of AI-powered cybercrime has significant implications for businesses of all sizes.

  • Increased Frequency and Sophistication of Attacks: AI lowers the barrier to entry for cybercriminals, allowing less experienced attackers to launch more sophisticated attacks. This leads to a higher volume of attacks and makes it more difficult for organisations to defend themselves.
  • Greater Personalisation and Evasion: AI enables attackers to craft highly personalised attacks that are more likely to trick victims. LLMs can analyse a victim's online presence, social media activity, and email communications to create convincing impersonations and tailored messages. Moreover, AI can generate polymorphic malware that changes its code with each iteration, making it harder to detect by traditional antivirus software.
  • Strained Cybersecurity Resources: The increased volume and sophistication of AI-powered attacks can overwhelm cybersecurity teams, straining their resources and making it harder to detect and respond to threats effectively.
  • Increased Financial and Reputational Risks: Successful cyberattacks can lead to significant financial losses, including ransom payments, legal fees, and recovery costs. They can also damage a company's reputation and erode customer trust.
  • The Human Element Becomes More Vulnerable: Traditional security measures such as firewalls and antivirus software are becoming less effective against AI-powered attacks. The human element is now the weakest link, as attackers increasingly rely on social engineering techniques to trick employees into making mistakes.

The Epoch AI Perspective

At Epoch AI Consulting, we believe that organisations need to take a proactive and adaptive approach to cybersecurity in the age of AI. It’s not just about deploying the latest security technologies, but also about upskilling your workforce, developing a comprehensive AI strategy, and implementing robust data governance practices.

First, teams need to understand how AI is being used in cyberattacks and how to identify and respond to these threats. Epoch AI offers tailored AI training programs designed to equip your employees with the knowledge and skills they need to defend against AI-powered cybercrime. These programs cover topics such as AI-powered phishing, deepfake detection, and prompt injection attacks.

Second, organisations need a clear AI strategy that addresses the cybersecurity risks associated with AI adoption. This includes developing policies and procedures for AI development and deployment, implementing robust data governance practices, and conducting regular risk assessments. Epoch AI's team of experienced AI consultants can help you develop a comprehensive AI strategy that aligns with your business goals and mitigates your cybersecurity risks.

Finally, organisations need to embrace AI-powered security solutions to enhance their defenses. Epoch AI can help you implement bespoke AI and automation processes, to build custom SaaS solutions that leverage AI to detect and respond to cyber threats in real-time. This includes tools for threat intelligence, anomaly detection, and automated incident response. Our embedded talent model means we can place experts within your organisation to build and maintain these solutions.

Ultimately, the best defence against AI-powered cybercrime is a combination of technology, training, and strategy.

Conclusion

The emergence of AI as a tool for cybercrime is a paradigm shift that demands a fundamental change in how organisations approach cybersecurity. While the threat of fully automated AI attacks may still be on the horizon, the immediate risk of AI-enhanced scams and phishing campaigns is very real. By taking a proactive and adaptive approach to cybersecurity, organisations can mitigate these risks and protect themselves from the evolving threat landscape. The future of cybersecurity is one where AI is both the weapon and the shield.

Want to explore how AI can work for your business?

At Epoch AI Consulting, we help organisations navigate AI strategy, upskill teams, and deliver bespoke AI and data solutions. Get in touch to see how we can help.