
The Intelligence Layer: The Missing Piece Every SOC Has Been Waiting For

Qevlar AI team

For more than two decades, security platforms have promised to connect the dots. SIEMs were built to correlate, surface the attack hiding in the noise, and give analysts a fighting chance against adversaries who never operate in straight lines. And yet, when a real intrusion unfolds, analysts still find themselves rebuilding the timeline by hand, discovering signals that were always there but never surfaced.

In this interview, Ahmed Achchak, CEO and co-founder of Qevlar AI, sits down with Raffael Marty, a cybersecurity pioneer with over 25 years in the field and early roles at ArcSight and Splunk. Together, they dig into why correlation has consistently fallen short, what actually needs to change, and what the shift to agentic AI means for the future of security operations.

Key Takeaways

  • Early SIEMs correlated events with host state, but that capability quietly disappeared.
  • Layering AI on top of a biased signal doesn't fix the signal. You first have to understand what you're not seeing.
  • The intelligence gap is organizational and knowledge-based. SOC analysts cannot be expected to master every data source in isolation.
  • The right architecture combines a graph-based context layer with LLMs doing what they do best: semantic analysis and enrichment.
  • The pendulum needs to swing back from detection toward protection, and agentic AI is what makes that possible.

The Promise That Quietly Disappeared

Ahmed Achchak: SIEMs were initially designed as a correlation layer — that's core to the value proposition. Yet even today, most post-mortems still include a moment where someone reconstructs the attack chain by hand. They rebuild the timeline, realize the alerts didn't tell the full story. From your perspective, what went missing? Is it a query problem, a data issue, a reasoning engine problem?

Raffael Marty: Historically, when the SIEM space started 25 years ago, we built very event-driven systems. Initially, we actually built SIEMs because we had intrusion detection alerts with a lot of false positives, and we didn't know how to figure out which ones were true positives. So we started pulling in vulnerability data from end hosts and mashing that up — that was the first real correlation, not just between events but between events and host states.

It worked well, but it disappeared very quickly. The whole state machine, the asset inventory inside SIEMs — gone. And everybody shifted to just correlating events with events, locked into this time-series focused view of the world.

Fast forward to today, and we're finally thinking differently. People are talking about context graphs, which, interestingly, we were already attempting in the early 2000s — asset models, user models — but without the tooling to make it work. With LLMs, faster databases, and cloud computing, we can now move beyond event correlation into a real reasoning layer. One that assigns meaning to events based on context, not just adjacency. That's where we can start managing incidents, doing root cause analysis, supporting cleanup — instead of just flagging an event and leaving an analyst to do all the heavy lifting manually.
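One way to picture the reasoning layer Marty describes — a minimal sketch, with all host names, roles, and fields hypothetical — is a small context graph where an event's meaning comes from the node it touches, not from the event alone:

```python
# Minimal sketch of a context graph: events are interpreted against
# the asset they touch, not in isolation. All names are illustrative.

CONTEXT_GRAPH = {
    "web-01": {"role": "dev server",       "criticality": "low",
               "connects_to": ["db-dev"]},
    "fin-db": {"role": "financial system", "criticality": "high",
               "connects_to": ["erp-01"]},
}

def assess(event: dict) -> str:
    """Assign meaning to an event based on graph context, not adjacency."""
    node = CONTEXT_GRAPH.get(event["host"])
    if node is None:
        return "unknown host: investigate"
    # The same signal means different things on different nodes.
    if node["criticality"] == "high":
        return f"critical: {event['signal']} on {node['role']}"
    return f"low-priority: {event['signal']} on {node['role']}"

print(assess({"host": "fin-db", "signal": "new admin account"}))
# → critical: new admin account on financial system
print(assess({"host": "web-01", "signal": "new admin account"}))
# → low-priority: new admin account on dev server
```

A real context layer would of course carry users, processes, and business metadata too; the point of the sketch is only that severity falls out of the graph lookup, not the raw signal.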

The Biased Signal Problem

Ahmed Achchak: When we say a security team needs to see alerts as part of a wider campaign, what does that actually look like operationally? Humans aren't built for pattern recognition across weeks or months of noisy data. How are analysts supposed to approach that with or without AI support?

Raffael Marty: I want to flag something I'm seeing frequently out there. As an advisor to a number of PE firms, I've looked at a lot of these AI SOC companies and new SIEM players. And a number of them are trying to patch the problem by building a layer on top of existing SIEMs. That's a fundamental issue because you're dealing with a biased signal you don't control. An alert that was configured by someone else, in ways you don't fully understand, giving you no visibility into what you're not seeing. You can't fix something that is already inherently bad.

So the first step is understanding the signals being generated as alerts, and then being able to go back to the raw telemetry and intelligence flowing underneath. From there, if you can associate alerts with campaigns or adversaries — matching TTPs, tracking source patterns, correlating attack properties — you get a real leg up. You can ask: what is the objective of these actors? What are they known for? Where should I be hunting next? Campaign context turns reactive investigation into directed pursuit.

Can Campaign Thinking Be Systematized?

Ahmed Achchak: Do you think this kind of campaign-level thinking can be systematized? Is there a future where AI handles it, or is it something only the most experienced analysts can do?

Raffael Marty: I'll be honest, this is the first time I'm really thinking through what campaign-focused analytics would look like architecturally. What I see being done today is hypothesis-driven: you formulate hypotheses based on threat intelligence — these adversaries behave this way, they look for these things — and you use that to guide hunting. That's the closest to a campaign-oriented approach in practice.

What's emerging now is automated hunting agents that formulate hypotheses on their own, learning from what they observe and what's being discussed in the broader community. If I had to sketch a taxonomy on the fly: event → IOC → TTP → campaign → adversary. Each level is a different way of expressing adversary behavior, and campaigns can probably be encapsulated as clusters of TTPs. That makes sense as a search framework.
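The idea that "campaigns can probably be encapsulated as clusters of TTPs" can be sketched mechanically. Below is a hedged illustration, not a production approach: alerts carrying overlapping TTP sets are greedily grouped into candidate campaigns using Jaccard similarity, with the threshold and ATT&CK-style IDs chosen purely for the example.

```python
# Hypothetical sketch: a campaign as a cluster of TTPs. Alerts whose
# TTP sets overlap above a threshold are grouped into one candidate
# campaign via simple greedy clustering on Jaccard similarity.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_campaigns(alerts: dict[str, set], threshold: float = 0.5) -> list[list[str]]:
    """Greedily group alerts whose TTP sets overlap above a threshold."""
    campaigns: list[tuple[set, list[str]]] = []
    for alert_id, ttps in alerts.items():
        for ttp_union, members in campaigns:
            if jaccard(ttps, ttp_union) >= threshold:
                members.append(alert_id)
                ttp_union |= ttps  # grow the campaign's TTP footprint
                break
        else:
            campaigns.append((set(ttps), [alert_id]))
    return [members for _, members in campaigns]

alerts = {
    "a1": {"T1566", "T1059"},           # phishing + scripting
    "a2": {"T1566", "T1059", "T1105"},  # same pattern, plus tool transfer
    "a3": {"T1190"},                    # unrelated exploit attempt
}
print(cluster_campaigns(alerts))  # → [['a1', 'a2'], ['a3']]
```

Real campaign attribution would weigh timing, infrastructure, and intel beyond TTP overlap; this only shows that the taxonomy level above TTPs is computable at all.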

The Institutional Knowledge Problem

Ahmed Achchak: We see organizations with access to threat intelligence feeds that aren't actually acting on them during investigations. An alert fires at 2 a.m., the second shift has to act fast — they don't have time to gather tribal knowledge, institutional context, the full IOC picture. What's the mechanism that determines whether past context gets applied? How do we fast-track that?

Raffael Marty: First, threat intelligence is not just IOCs. But where it is IOCs, the system should automatically enrich your data and surface them. Where it becomes TTPs, same thing — the system should find those TTPs in your environment without an analyst having to go looking.
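The "automatically enrich and surface" step Marty describes can be pictured as a lookup pass over telemetry. A minimal sketch, with an entirely made-up feed and field names, assuming indicators are simple string matches:

```python
# Hedged sketch of automatic IOC enrichment: intel indicators are
# indexed once, then every event is tagged with any matches so the
# analyst never has to go looking. Feed contents are illustrative.

ioc_feed = {
    "198.51.100.7": {"source": "vendor-feed", "actor": "FIN-X"},
    "evil.example": {"source": "osint",       "actor": "FIN-X"},
}

def enrich(event: dict) -> dict:
    """Attach threat-intel context to an event in place."""
    hits = [ioc_feed[v] for v in event.values()
            if isinstance(v, str) and v in ioc_feed]
    event["intel"] = hits  # empty list means no known-bad indicators
    return event

evt = enrich({"src_ip": "198.51.100.7", "dst_port": 443})
print(evt["intel"])  # → [{'source': 'vendor-feed', 'actor': 'FIN-X'}]
```

TTP-level matching, as Marty notes, is the harder half — behaviors rather than strings — but the contract is the same: the system does the lookup, not the analyst.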

The more interesting layer is capturing intelligence from analysts themselves. When I was at ArcSight, I kept thinking: analysts take an event, mark it as a false positive, so why can't we use that for reinforcement learning? The tooling wasn't there. Now it's going much further. It's not just a single event decision. It's an entire investigation. Can we encode the reasoning behind an investigation as structured knowledge that accumulates over time, is collaborated on, and becomes a context layer that enriches future work? That's the shift: from static rules to living institutional memory.
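The shift from single-event labels to accumulated reasoning can be sketched as a tiny store keyed by alert signature — a hypothetical illustration, with all signatures and fields invented for the example:

```python
# Hypothetical sketch of "living institutional memory": analyst verdicts
# are stored against an alert signature together with the reasoning,
# and future investigations are pre-loaded with that context.

from collections import defaultdict

memory: dict[str, list[dict]] = defaultdict(list)

def record_verdict(signature: str, verdict: str, reasoning: str) -> None:
    """Capture not just the label but the reasoning behind it."""
    memory[signature].append({"verdict": verdict, "reasoning": reasoning})

def prior_context(signature: str) -> list[dict]:
    """What past investigations already concluded about this signature."""
    return memory[signature]

record_verdict("powershell-encoded-cmd:host=web-01",
               "false_positive",
               "Scheduled backup job; confirmed with the IT owner.")

print(prior_context("powershell-encoded-cmd:host=web-01"))
```

The ArcSight-era idea stopped at the label ("false positive"); the difference here is that the reasoning string is what accumulates, gets collaborated on, and enriches the next 2 a.m. investigation.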

Ahmed Achchak: That resonates with what we're seeing at Qevlar. One of the features customers value most is the ability of the system to generate its own context. Once it investigates enough alerts, it builds what we call historical context. It learns patterns. It sees things that the analyst might not have had time to notice. Sometimes the AI can surface context that the analyst wouldn't have reached on their own, or would have reached much later. The feedback loop goes both ways: the AI learns from the analyst, but it also gives the analyst visibility they didn't have before.

Raffael Marty: That's the intelligence that builds on top of LLM-based reasoning. But it's only as good as the context it operates in. If the system doesn't understand context, it starts abstracting in the wrong direction, flagging something as critical when it's on a dev server, when the same signal on a financial system would mean something entirely different. Security profiles matter. Context is everything.

Why Cross-Environment Visibility Is Still So Hard

Ahmed Achchak: Attacks now span environments: endpoint, network, email, cloud, etc. Adversaries operate across all these boundaries simultaneously, while defenders work in silos. What is the technical reason that cross-environment campaign visibility remains so rare?

Raffael Marty: Interestingly, I don't think the primary barrier is technical. A well-configured SIEM at a Fortune 2000 company gets feeds from network, endpoint, email, cloud — all the major sources. The problem is that modern SIEM architectures have drifted toward single-source correlations. I don't understand why. A signal is a signal. Whether it comes from email, the endpoint, or the network, it needs to be correlated with other signals across the board.

But then you hit the people problem. Email is often managed by IT. Firewalls and IDS are managed by security. The knowledge is fragmented, and so is the contribution. And then there's the knowledge problem itself — being a SOC analyst who genuinely understands how a SIEM works internally, and how it interacts with a WAF, an IDS, a firewall, and an application layer firewall, and then adding business context on top of that — understanding what the organization actually does, what's critical, what processes are running behind a given system — that's genuinely hard. Not many people can do it. But now, for the first time, the systems themselves can help bridge that knowledge gap.

What Changes First When the Intelligence Layer Arrives

Ahmed Achchak: Imagine the intelligence layer is in place. What parts of the SOC model change first? What's the first organizational impact?

Raffael Marty: We can't get around it anymore — the intelligence layer is not optional. Without it, too many processes stay inefficient. And once it exists, I think we finally start moving away from an alert-centric view of security operations.

What I've been advocating for a long time is a risk-based SOC. Take all the alerts, all the telemetry, and rate every entity in the network based on what's happening around it. You always know the current risk state. Investigations start from the highest-risk entities — anything crossing a threshold gets the full context treatment. You understand what's happening around that asset, what it connects to, why it matters. That model makes sense. Risk scoring is hard — it's a topic on its own — but it's the right direction and I'm surprised more people aren't moving toward it.

The second thing I expect — and maybe two or three years is aggressive, but I'll say it anyway — is that the conversation around investigation starts to disappear. Because what we actually need to get back to is automatic response and protection. Historically, in the early 2000s, it was all about protection. Block everything. Then we shifted to detection. Now we need to close the loop: learn from detection and feed it back into protection. The pendulum needs to swing back. The way we're doing it today isn't working.

Ahmed Achchak: I agree completely. We have a broken model, and as you said, we've been monkey-patching it rather than redesigning from first principles. When you look at defensive security operations from the outside, it just seems so inefficient — and you wonder how the decisions were made that led us here.

Raffael Marty: I think what happened is we built systems that assumed homogeneity. You buy a Splunk or an ExaBeam, it ships with a rule set, and the assumption is those rules work the same in every environment. They don't. You end up bringing in expensive consultants to rebuild rule sets for your specific context. That doesn't scale. We have to move to something more intelligent. For the first time, we actually have the technology to do it.

The Fire Round

Ahmed Achchak: A few quick questions — answer with whatever comes to mind first. A CISO tells you their SOC is AI-ready. What's the first question you ask to find out whether that's actually true?

Raffael Marty: What percentage of analyst time is still spent collecting information rather than making decisions? If context isn't already available, you're not ready.

Ahmed Achchak: Five years from now — does the tiered SOC model still exist?

Raffael Marty: Tier 1 is already gone, it just doesn't know it yet. Tier 2 and Tier 3 will probably merge as well. We need to reduce headcount in the SOC overall, and tiers are a structure built for a world where humans have to handle every alert. That world is ending.

Ahmed Achchak: In an AI-native SOC, what's the one skill that separates great analysts from average ones?

Raffael Marty: Creativity — and with it, cross-domain thinking. You need people who can understand all the different components quickly and come up with approaches the attacker didn't anticipate. Attackers are creative. Defenders need to be too.

🎧 Listen to the full episode on Spotify

🍏 Listen to the full episode on Apple Podcasts

See how much of your manual workload can be automated

Book a demo call with us