If you missed our recent ModernCISO webinar on AI-generated phishing and social engineering, this blog summarizes the key takeaways. It explores how threat actors are using AI to scale attacks and what security leaders must do to stay ahead.

When Your CEO Calls, and It’s Not Really Them

In 2024, Ferrari narrowly avoided a high-stakes cyber heist. Attackers used a deepfake impersonation of the company’s CEO in a sophisticated attempt to deceive an executive into authorizing fraudulent transactions. The voice was convincing. The messages seemed authentic. But something didn’t feel right, and the executive had the presence of mind to verify, reportedly by asking a question only the real CEO could answer. That small moment of skepticism may have saved the company millions.

This wasn’t a tech demo or a theoretical risk. It was a real-world attack that used deepfake technology to exploit trust at the highest levels of leadership. And it is far from an isolated incident.

Today’s threat actors can clone voices, replicate faces, and mimic communication styles with remarkable accuracy. AI-generated phishing emails match tone and context so convincingly they pass for genuine internal messages. Voice synthesis tools can recreate a senior executive’s speech pattern with just a few minutes of audio. In some cases, attackers combine all three – email, video, and voice – into coordinated, multi-channel campaigns that are nearly impossible to detect in the moment.

This is the new reality for CISOs.
Today’s attacks talk like your CEO, write like your team, and feel like business as usual, until it’s too late.

The AI Threat Landscape Is Evolving Fast

Artificial intelligence is rewriting the rules of cybersecurity. While defenders explore AI to enhance detection and response, attackers are leveraging the same technologies to scale, automate, and personalize their campaigns with unprecedented speed and precision.

Defensively, most organizations are falling behind. Gartner reports that while a growing number of companies plan to adopt AI within the next year, actual deployment remains low: just 2 to 5 percent have AI in production today.

The gap between ambition and readiness is widening. AI projects often begin without proper oversight, and vulnerabilities in open-source libraries go unaddressed. Shadow AI is proliferating, as employees experiment with public models using sensitive data, frequently outside the purview of IT or security teams. Governance frameworks remain underdeveloped, leaving organizations exposed at multiple levels.

This is not a technical oversight. It is a strategic risk that demands executive ownership, policy alignment, and urgent investment in AI-aware security practices.

The Cybercriminal’s New Toolbox

The criminal underground has always been quick to adopt new technologies, but with AI it has taken a quantum leap. What used to require technical skill, time, and effort can now be executed by almost anyone with the right toolkit. And these toolkits are increasingly built on AI.

Let’s take a closer look at the malicious AI tools driving today’s attacks:

🧠 Malicious Language Models

These AI chatbots mimic the functionality of tools like ChatGPT but are optimized for criminal use. They can instantly generate highly persuasive phishing emails, write malware scripts, and help craft BEC (Business Email Compromise) messages, all while avoiding common red flags that would trigger traditional filters. The result is phishing at scale, with unprecedented customization and believability.

🎭 Deepfake Video Impersonation

These tools use AI to swap faces in video footage with alarming realism. Threat actors are now using them to impersonate executives in video messages or virtual meetings. Imagine a CFO receiving a deepfake video of their CEO approving a wire transfer; it’s no longer science fiction. The potential for manipulation in high-trust environments like finance, legal, or healthcare is enormous.

🔊 Voice Cloning Tools

AI-driven speech synthesis platforms can recreate a person’s voice with just a short audio sample. Attackers use these tools to conduct voice phishing (“vishing”) attacks that sound exactly like a known individual. In recent real-world cases, employees have transferred funds based on phone calls they thought were from their CEO or finance director.

💣 AI-Powered Extortion Kits

These dark web toolkits streamline digital blackmail campaigns. Using AI, they can generate threatening messages, spoof identities, and even create compromising fake content. They allow bad actors to automate social engineering, scale up harassment, and increase the psychological pressure on targets.

🌐 Phishing-as-a-Service Platforms

These platforms offer ready-made phishing pages that mimic legitimate login portals (e.g., Microsoft 365, Google Workspace), and now incorporate AI to tailor the look, feel, and timing of attacks to maximize success. Many bypass MFA by proxying the victim’s real login session and stealing the authenticated session token, harvest credentials at scale, and offer subscription models for continuous use, ushering in a new era of low-barrier credential theft.
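On the defensive side, one small but concrete countermeasure is screening newly observed domains for lookalikes of the portals these kits imitate. The Python sketch below uses plain edit distance against a short watchlist; the portal list, threshold, and function names are illustrative assumptions, not a reference to any particular product.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete a character
                curr[j - 1] + 1,           # insert a character
                prev[j - 1] + (ca != cb),  # substitute if different
            ))
        prev = curr
    return prev[-1]

# Hypothetical watchlist of portals your users actually sign in to.
LEGIT_PORTALS = ["login.microsoftonline.com", "accounts.google.com"]

def looks_like_phish(domain: str, max_distance: int = 2) -> bool:
    """Flag domains within a small edit distance of a legitimate portal."""
    return any(
        0 < edit_distance(domain, legit) <= max_distance
        for legit in LEGIT_PORTALS
    )

# Example: looks_like_phish("login.micros0ftonline.com") -> True
```

In practice a check like this would feed a broader pipeline (certificate transparency logs, newly registered domain feeds), but even a simple distance test catches common character-swap tricks.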

AI Regulation Is Coming, and It’s Coming Fast

While AI has already transformed the threat landscape, the regulatory response is just beginning to catch up. But the signs are clear: AI-specific cybersecurity mandates are no longer theoretical. They’re imminent, and in many cases, already here in disguised form.

🏛️ EU AI Act: Setting the Global Benchmark

The European Union’s AI Act, particularly Article 15, introduces specific requirements around the accuracy, robustness, and cybersecurity of AI systems. High-risk applications, including those used in critical infrastructure, law enforcement, and financial services, must undergo rigorous risk assessment, testing, and monitoring. For CISOs, this means proactively ensuring that AI systems are not just effective but also explainable, secure, and resilient to manipulation.

The AI Act entered into force in August 2024, and its obligations for high-risk systems phase in through 2026 and 2027. As those requirements take effect, the Act will set a de facto global standard. Multinational organizations operating in or with the EU will need to demonstrate compliance, making early alignment a strategic advantage.

🛡️ NIS2 Directive: Broader Cyber Accountability

While not AI-specific, the NIS2 Directive (which EU member states were required to transpose into national law by October 2024) significantly raises the bar for cyber governance across essential and important entities in the EU. It indirectly covers AI through its call for secure deployment of emerging technologies, enhanced supply chain scrutiny, and the need for incident response integration. The takeaway: if your AI tools support business-critical functions, NIS2 considers them part of your threat surface, and expects them to be treated as such.

💶 DORA (Digital Operational Resilience Act): Financial Sector Focus

In the financial sector, DORA, which has applied since January 2025, takes a similarly proactive stance. It demands operational resilience across the digital value chain, including third-party ICT providers. If your firm uses AI for fraud detection, transaction monitoring, or decision-making, DORA compels you to assess and secure those systems as part of your core resilience planning.

🇩🇪 Germany’s BSIG and IT-SiG 2.0: Sector-Specific Security

Germany’s cybersecurity laws, BSIG and IT-Sicherheitsgesetz 2.0, continue to evolve. While they don’t name AI explicitly, their focus on attack detection systems and requirements for critical infrastructure operators (KRITIS) make AI-reliant services fair game. If your AI models influence decision-making or support security operations, expect increased regulatory expectations on data integrity, model transparency, and detection efficacy.

What CISOs Need to Do Now

AI-driven threats aren’t a “future problem.” They’re here, they’re active, and they’re evolving by the day. For CISOs, that means it’s time to move beyond theory and take concrete, strategic action.

Start by weaving AI risk into the fabric of your cybersecurity strategy. This isn’t just another tech trend to bolt on. It affects everything from identity management and threat detection to supply chain security and governance. Treat AI as a new attack surface and build that mindset into your frameworks, policies, and roadmaps.

Your AI development lifecycle also needs to be secured. That includes protecting training datasets, vetting third-party models, and ensuring that every AI deployment, whether internal or embedded in a vendor platform, has undergone proper threat modeling. If your SOC is already using AI to detect threats, it should also be capable of identifying when AI becomes the threat.
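As a concrete illustration of vetting third-party models, one common starting point is pinning every approved artifact to a known hash and refusing to load anything else. The sketch below is a minimal Python example under that assumption; the model name, digest, and allowlist structure are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: artifact name -> SHA-256 digest recorded when
# the model was originally vetted and approved for production use.
APPROVED_MODELS = {
    "sentiment-classifier-v3.onnx":
        "9f2b61a0c8e4d7f3b5a1c6e8d0f2a4b6c8e0d2f4a6b8c0e2d4f6a8b0c2e4d6f8",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large models don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    """Refuse to load any artifact whose hash is not on the allowlist."""
    expected = APPROVED_MODELS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} has not been vetted")
    if sha256_of(path) != expected:
        raise RuntimeError(f"{path.name} does not match its approved hash")

# verify_model(Path("models/sentiment-classifier-v3.onnx"))  # then load it
```

Signature verification and provenance attestations go further, but a hash allowlist is a cheap first gate that also catches accidental tampering in transit.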

At the same time, incident response plans must be updated. AI-specific attack scenarios such as deepfake impersonation or model poisoning need to be considered, tested, and included in tabletop exercises. You don’t want your first experience with an AI-enabled breach to be the real thing.

Awareness training also needs a refresh. Standard phishing simulations are no longer enough. Employees must learn to recognize AI-crafted messages and stay alert to real-time voice scams, synthetic video, and other emerging forms of social engineering.

And let’s not forget the human factor. Employees, especially those with privileged access, must learn to verify first and act second. A message from the CFO? Confirm it. A sudden wire transfer request? Call directly using a known number. Promote caution, support swift reporting, and reinforce the value of hesitation when something feels off.
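To show how “verify first, act second” can be enforced rather than merely encouraged, here is a deliberately simple Python sketch of a payment gate that refuses large transfers until an out-of-band callback has been confirmed. The threshold, directory, and field names are hypothetical; a real workflow would live in a payments or ticketing system.

```python
from dataclasses import dataclass

# Hypothetical directory of verified, out-of-band contact numbers,
# maintained separately from email so attackers can't inject their own.
KNOWN_NUMBERS = {"cfo@example.com": "+1-555-0100"}

CALLBACK_THRESHOLD = 10_000  # any transfer above this requires a callback

@dataclass
class TransferRequest:
    requester_email: str
    amount: float
    callback_confirmed: bool = False  # set True only after a live call
                                      # to the number in KNOWN_NUMBERS

def approve_transfer(req: TransferRequest) -> bool:
    """Enforce verify-first: no large transfer proceeds on a message alone."""
    if req.amount <= CALLBACK_THRESHOLD:
        return True
    if req.requester_email not in KNOWN_NUMBERS:
        return False  # unknown requester: escalate, never pay
    return req.callback_confirmed
```

The design point is that the callback number comes from a directory maintained outside email, so an attacker who controls the message thread cannot supply their own “verification” contact.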

In short:
Secure the tech. Train the people. Strengthen governance.
AI attackers are moving fast. You need to move faster, and smarter.

AI Attackers Won’t Wait, and Neither Should You

The speed and sophistication of AI-driven cybercrime are increasing daily. For security leaders, matching that pace requires more than just updated tools. It demands a strategic shift, one that strengthens governance, modernizes training, and embeds AI awareness into every layer of the organization.

This isn’t about fighting fire with fire. It’s about building smarter, more adaptive defenses that evolve as fast as the threats they’re designed to stop.

If you’re reassessing how to prepare your teams and systems for the next wave of AI-powered attacks, Kudelski Security is here to help. Our advisory team works with CISOs to build resilience against emerging risks, from awareness training to AI-specific governance frameworks.

Contact our team today to discuss how we can support your AI security strategy.

 
