Cybersecurity
4/9/2026
·
0
Minutes Read

From SOAR to AI Agents: Rethinking Security Automation Through a CISO’s Risk Lens

Advisory Services
4/9/2026
·
0
Minutes Read
Renaud Cona
Find out more
table of contents
Share on
Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.

Introduction

In early 2025, Summer Yue —Director of AI Alignment at Meta’s superintelligence lab — connected an AI agent (openclaw) to her email system. The instructions were clear: do not delete emails without explicit confirmation. What followed was a cautionary tale that every security leader should know.

After losing part of its instruction context, the agent began deleting messages on its own. Attempts to stop it remotely failed. Manual intervention was required.

This wasn’t a sci-fi scenario. It was a real operational failure, caused by a system that had been granted authority it couldn’t safely handle — and that couldn’t be stopped when things went wrong.

Security teams have long relied on automation to manage the growing complexity of operations. For years, this meant deterministic playbooks — reliable, auditable, and predictable. Today, AI agents promise to automate something far more ambitious: decision-making itself.

As SOCs face mounting pressure from alert volumes, tool sprawl, and operational dependencies, many organizations have doubled down on SOAR platforms to standardize incident response. At the same time, LLM-based agents are introducing a fundamentally different paradigm — one built on contextual reasoning, adaptive workflows, and investigative support.

I look at this shift from two angles: as a CISO accountable for security risk, and as the leader responsible for building incident response automation services within a cybersecurity consulting organization. That dual lens makes the “SOAR vs AI agents” debate more than academic. The real question isn’t which technology wins — it’s how each one reshapes the risk-opportunity balance of security operations.

 

Technology Landscape

The market for security automation is fragmenting fast.

On the structured automation side, platforms like Splunk SOAR, Cortex XSOAR, or Tines orchestrate security tools and execute predefined response workflows. Their value lies in predictability, governance, and auditability — making them well suited for organizations looking to industrialize incident response at scale. Open-source alternatives such as Shuffle, The Hive with Cortex offer comparable flexibility for teams that prioritize infrastructure control.

In parallel, a new generation of AI-driven platforms is emerging. Vendors like OpenAI, Google, and Microsoft —alongside specialized tools — are building systems capable of analyzing large datasets, synthesizing threat context, and assisting analysts during live investigations.

Despite surface-level similarities, these two categories operate on entirely different principles:

•      SOAR platforms execute predefined workflows —deterministic, bounded, auditable.

•      AI agents interpret context and generate responses dynamically — adaptive, powerful, and harder to constrain.

That distinction matters more than most product comparisons suggest.

 

Concrete Use Cases

Understanding where each technology delivers real value helps avoid both over-engineering and missed opportunities.

SOAR is particularly effective for:

•      Automated endpoint isolation following a confirmed compromise

•      Alert enrichment with threat intelligence context (IP reputation, file hash lookups)

•      Automated ticket creation and routing in ITSM platforms

•      Phishing triage workflows: extract indicators, query sandboxes, notify users

•      Compliance reporting and evidence collection for audits

AI agents show the most promise for:

•      Complex phishing investigation requiring multi-source correlation

•      Incident summarization and first-draft post-mortem reports

•      Threat hunting support: hypothesis generation and log interpretation

•      Natural language querying of security data during live incidents

•      Analyst onboarding and knowledge transfer in complex environments

The clearest opportunities emerge when both are combined: AI agents surface insights and recommend actions, while SOAR platforms execute the validated response within governed, auditable workflows.

 

How they (could) work together

The most powerful architectures don’t choose between SOAR and AI agents — they connect them.

A practical integration model looks like this:

Detect→ Enrich → Recommend → Act → Document

1.    A SIEM alert triggers an automated enrichment workflow via SOAR

2.    Enriched context (affected assets, user history, threat intel) is passed to an AI agent

3.    The agent analyzes the context, identifies patterns, and recommends a response path

4.    A human analyst reviews and validates the recommendation

5.    SOAR executes the approved playbook and documents the actions taken

In this model, AI agents handle the cognitive heavy lifting — correlation, hypothesis generation, narrative synthesis — while SOAR maintains execution control and auditability. Human analysts retain decision authority at the critical inflection point.

This isn’t a future architecture. Teams are already building this today.

 

The Risk Landscape: both technologies have limits

A balanced evaluation requires acknowledging the operational risks on both sides.

SOAR limitations

SOAR platforms are only as good as the playbooks behind them. Poorly maintained workflows create a false sense of coverage — teams believe they are protected, while the automation silently fails to address emerging threat patterns. Rigid playbooks struggle to adapt to novel attack techniques, and over time, complex automation environments accumulate technical debt that makes maintenance costly and changes risky. Integration failures across tools are also common and rarely visible until an incident exposes them.

AI agent risks

Loss of Operational Control. As illustrated in the introduction, autonomous systems don’t always stop cleanly —and when they don’t, manual intervention may be the only option. For a CISO, this is the core concern: once an autonomous system has operational authority, overriding it may not be immediate, or even possible. In a security context, where speed and precision are critical, that loss of control can have serious operational consequences.

Prompt Injection. AI agents process natural language from emails, documents, and web pages. Attackers can embed malicious instructions within legitimate-looking content, silently redirecting the agent’s behaviour without triggering traditional detection mechanisms.

Tool Abuse. Most AI agents interact with enterprise systems via APIs. Overprivileged agents, or agents operating under manipulated instructions, can trigger high-impact actions — modifying permissions, disabling controls, or deleting operational data.

Explainability and Governance. Unlike rule-based automation, AI agents rely on probabilistic reasoning that is difficult to reconstruct after the fact. This creates real challenges for audits, incident post-mortems, and regulatory compliance.

Data Exposure. Contextual reasoning requires broad data access. Without tight governance, sensitive operational information may flow into AI pipelines with insufficient controls.

 

A Decision Framework for CISOs

When evaluating where each technology fits, four dimensions should guide the analysis:

Dimension SOAR AI Agents
Predictability Controlled, deterministic execution Adaptive, context-driven reasoning
Governance Strong auditability Complex oversight requirements
Risk profile Primarily configuration errors and playbook debt New surfaces: prompt injection, tool misuse, loss of control
Innovation potential Operational efficiency at scale Transformative analytical capabilities

Neither technology is inherently superior. The right choice depends on where an organization sits on the maturity curve.

 

Diagnostic: Where Does Your Organization Stand?

Before selecting a technology, assess your current operational baseline. Answer the following questions honestly:

Foundational readiness

•      Are your incident response processes documented, stable, and consistently followed?

•      Do you have a governed API access management framework in place?

•      Does your team have hands-on experience maintaining automation workflows?

Governance maturity

•      Can you audit and reconstruct automated actions taken during an incident?

•      Do you have clearly defined boundaries for what systems are authorized to do autonomously?

•      Is your data classification and access control framework mature enough to govern AI data access?

Innovation appetite

•      Have you identified specific use cases where deterministic playbooks are reaching their limits?

•      Is your team equipped to evaluate and monitor probabilistic AI outputs in an operational context?

•      Do you have a structured process for running controlled POCs on emerging technologies?

How to interpret your answers:

Mostly no → Prioritize SOAR. Build the operational and governance foundation first.

Mixed → Start with SOAR for execution; explore AI agents in advisory or read-only mode.

Mostly yes → You have the maturity to introduce AI-assisted capabilities progressively.

 

A Maturity Model for Security Automation

The most effective strategy isn’t choosing between SOAR and AI agents — it’s knowing when each one belongs in your architecture.

Stage 1 — Establish control with SOAR. For organizations still building or standardizing incident response processes, deterministic playbooks are the right foundation. They create accountability, reduce variability, and make automation visible and auditable.

Stage 2 — Augment with AI assistance. Once processes and governance are solid, AI agents can be introduced in a supporting role — accelerating investigations, surfacing contextual insights, and helping analysts navigate complex incidents faster.

Stage 3 — Delegate carefully and selectively. Over time, specific low-risk, well-scoped tasks may be delegated to AI-driven workflows. But this step requires clearly defined operational boundaries, robust monitoring, and an explicit decision about what stays under human control.

In practice, this produces a layered model:

•      Human analysts retain responsibility for critical decisions

•      SOAR platforms orchestrate controlled, auditable automation

•      AI agents augment investigation, analysis, and triage

AI agents shouldn’t be the starting point. They should be the next step — earned through operational maturity.

 

Practical Next Steps

Regardless of where your organization stands today, there are concrete actions worth taking now:

1. Audit your existing automation before adding new capabilities. Identify which playbooks are actively maintained, which are outdated, and where coverage gaps exist. Automation debt is a risk in itself.

2. Define explicit autonomy boundaries. Document what automated systems — both SOAR and AI — are authorized to do without human approval. The absence of these boundaries is one of the most common sources of operational incidents.

3. Identify two or three low-risk AI use cases for a first POC. Good starting points: incident summarization, log interpretation support, or first-draft post-mortem generation. These deliver value without granting operational authority.

4. Build AI governance before you need it. Define data access policies, output validation requirements, and monitoring expectations for AI systems before they go into production — not after.

5. Involve your team early. The analysts who will work alongside these systems are the best judges of where AI assistance is genuinely useful versus where it creates noise. Their input shapes adoption.

 

Conclusion

AI agents represent a meaningful evolution in security operations. But they are not a replacement for traditional automation — they are a complement to it, and a demanding one.

For CISOs, the challenge is balancing the pull of innovation against the need for operational control. Organizations that build disciplined automation foundations first — and then progressively introduce AI-assisted capabilities — will be better positioned to capture the upside while limiting the exposure.

Security automation should first establish control. Only then should organizations consider delegating intelligence.

Where does your organization stand in this evolution? Are you still building your SOAR foundation, or have you started experimenting with AI agents in your SOC? Contact Kudelski Security to see how we can help.

Related Post