The most critical component of cyber risk remains undefined. This isn't a failure of effort or investment; it's a challenge inherent to the nature of cybersecurity itself.
Let's unpack this.
The cybersecurity landscape is dominated by controls - firewalls, antivirus software, intrusion detection systems. These are essential, addressing known threats and mitigating a significant portion of cyber risk. However, they inevitably leave behind what we term "residual risk."
This residual risk emerges from various sources: financial constraints limiting access to cutting-edge technology, organisational immaturity in implementing advanced security measures, and the inherent limitations of current technology. But here's the crucial point: we've traditionally defined this residual risk negatively, as "everything our controls can't handle."
It's like trying to understand darkness by defining it as the absence of light.
Enter the human analysts in Security Operations Centers (SOCs). These experts are tasked with grappling with this ill-defined residual risk, serving as our last line of defence. They use their expertise, intuition, and contextual understanding to catch what automated systems miss.
But human analysts, despite their expertise, face inherent limitations. They can only process so much information, and their capacity doesn't scale with the ever-growing volume and complexity of cyber threats.
Now, we're witnessing a revolution in artificial intelligence, particularly in large language models. These systems are built on the same foundation as human expertise: language. This shared basis makes them potentially capable of handling tasks that, until now, only humans could manage, at least within specific vertical domains like cybersecurity.
At Qevlar, we see this as a unique opportunity to tackle the residual risk problem in an entirely new way. Our vision is to build the future of security controls - ones that can handle the remaining residual risk by leveraging the full potential of language models.
To achieve this ambitious goal, we're starting by developing productivity-enhancing SOC autonomous agents. This isn't just about making analysts more efficient. It's a strategic first step in a larger mission.
By creating AI agents that can perform SOC analyst tasks, we're doing something unprecedented: we're explicitly defining the residual cyber risk that can be handled by this new technology. We're giving concrete form to what was previously a vague, nebulous concept.
This is the key insight: by defining this risk, we create the opportunity to directly reduce it.
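To make that idea concrete, here is a minimal, purely illustrative sketch of a single LLM-backed triage step. Everything in it is hypothetical: the `Alert` fields, the `call_llm` callable, the verdict labels, and the prompt are stand-ins for the kind of analyst judgement an autonomous agent would need to reproduce, not a description of Qevlar's actual implementation.

```python
# Purely illustrative sketch: one LLM-backed triage step for a SOC alert.
# All names here (Alert, call_llm, the verdict labels) are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    source: str        # e.g. "EDR", "IDS", "email gateway"
    title: str         # short description from the detection rule
    raw_context: str   # logs, process tree, headers, etc.

VERDICTS = ("benign", "suspicious", "escalate")

def triage(alert: Alert, call_llm: Callable[[str], str]) -> str:
    """Ask a language model for an analyst-style verdict on one alert.

    `call_llm` is any function that sends a prompt to a model and returns
    its text response; a real agent would add retrieval of asset context,
    tool calls, and human review before acting on the verdict.
    """
    prompt = (
        "You are a SOC analyst. Given the alert below, answer with exactly "
        f"one word from {VERDICTS} and nothing else.\n\n"
        f"Source: {alert.source}\nTitle: {alert.title}\n"
        f"Context:\n{alert.raw_context}\n"
    )
    verdict = call_llm(prompt).strip().lower()
    # Fail safe: any response outside the expected labels is routed to a
    # human rather than silently dropped.
    return verdict if verdict in VERDICTS else "escalate"
```

The point of the sketch is its shape, not its details: the ill-defined judgement call an analyst makes becomes an explicit, testable decision step - which is what giving concrete form to residual risk means in practice.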
The potential impact is transformative. Imagine a SOC staffed not just with an effectively unlimited number of analysts working at incredible speed, but with consistently competent, rigorous, and tireless experts. That's the future we're building towards.
This approach represents the future of security controls. It's not about incremental improvements to existing systems. It's about leveraging AI to handle the complex, nuanced decisions that previously only humans could make.
By drastically reducing what constitutes residual risk, we're fundamentally changing the economics of cybersecurity. We're making it dramatically harder for attackers to find vulnerabilities while simultaneously alleviating the burden on human analysts.
At Qevlar, we're pioneering this new approach to cybersecurity. We're not just building another security tool; we're embarking on a journey to define and directly address the core challenge of residual risk. In a world where cyber threats are growing more sophisticated by the day, this level of ambition is not just desirable - it's necessary.
We're standing at the threshold of a new era in cybersecurity. The technological foundation is in place. The potential is clear. Now, it's time to turn this vision into reality, redefining the boundaries of what's possible in cybersecurity. This is not just an evolution of existing practices - it's a revolution in how we approach, define, and mitigate cyber risk.