It’s always been challenging for cybersecurity teams to keep up with the evolving threat landscape. But artificial intelligence (AI) has introduced new problems.
Attacks are now faster and easier than ever to execute at scale, with hackers using AI to reduce their overhead and simplify various processes. For example, it can be used as a shortcut to find weaknesses in systems and applications, craft phishing campaigns and malware, and gather and analyze massive amounts of data.
But it’s not all bad news.
AI can be used for defense by blue teams just as effectively as it can be used by hackers for offense, as Hamza Sayah demonstrated during a recent talk at the Hacking Lab.
Keep reading to find out how…
To understand how AI can be used to combat attacks, we first have to examine how it’s being used by bad actors. According to a recent report from the National Cyber Security Centre (NCSC):
But which tactics, techniques, and procedures (TTPs) will be enhanced the most by AI? According to the NCSC: phishing, social engineering, and exfiltration.
This is further confirmed by Microsoft and OpenAI, who recently published research into the ways nation-backed groups are using large language models (LLMs) to research targets, improve scripts, and develop social engineering techniques.
During his talk at the Hacking Lab, Hamza detailed exactly how these adversaries might use AI.
Let’s first imagine how a hacker might carry out a social engineering attack without the help of AI.
They’d start by conducting in-depth manual research – via social media, for example – or scraping databases on the Dark Web to identify a group of targets. They’d then draft an email rife with spelling and grammar errors (yes, this is done on purpose, to filter out all but the most susceptible recipients…), send it out, and wait for a few to “bite”.
If and when any of the recipients respond, the hacker would then have to manually respond, building trust with the target over a series of emails to – eventually – get them to transfer money, click a malicious link, or share sensitive information.
This last part is especially time-consuming, because the success of the social engineering attack relies on some level of personalization. Personalization, naturally, doesn’t scale…at least it didn’t before LLMs were introduced.
Now, a hacker could:
ChatGPT (and other LLMs) can also be used to launch multilingual campaigns, enabling hackers to increase their reach without any additional skills or expertise.
NOTE: While ChatGPT is programmed to "avoid engaging in activities that may harm individuals or cause harm to the public", there are ways around these guardrails. A writer from CNET tested them and was both surprised and worried by how well they worked.
3. Again leverage ChatGPT to automatically generate and send personalized responses to targets. There are dozens of articles outlining how to get the most out of LLMs’ email-reply capabilities, and it’s easy for hackers to adapt these prompts to serve their own nefarious purposes.
Once they’ve identified “qualified” targets, they can execute the final step of the attack. In this case, sending a malicious link.
While – yes – AI introduces new risks and vulnerabilities, it is also an incredibly powerful tool for enhancing protection against a range of threats. In fact, organizations need AI to help them detect threats, especially as those threats continue to grow in sophistication and volume.
Manual and playbook-based solutions just won’t work, especially given ongoing talent shortages and resource constraints. Teams must fight fire with fire.
In the last 12 months, legacy cybersecurity solutions have introduced new AI-powered features and co-pilots, and a slew of new solutions (like Qevlar AI) have emerged that are built around AI from the ground up.
Here are some of the top use cases, according to Hackernoon:
Importantly, AI doesn’t replace the humans in the loop. It just enhances our capabilities.
Let’s imagine the social engineering attack we outlined in the previous section was real, and it was picked up by an organization’s SIEM tool. Without AI, a SOC analyst would have to manually investigate the alert, determine whether it’s a genuine threat, and identify the best remedial actions.
While, in isolation, this might seem like an easy enough task, we have to remember that on average, SOC teams receive 4,484 alerts every day…
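To make the scale of that problem concrete, here’s a minimal sketch of what a first-pass, LLM-assisted triage step could look like in principle. It’s purely illustrative, not Qevlar AI’s implementation (or anyone else’s): it assumes the OpenAI Python client with an OPENAI_API_KEY set, and the model name, prompt, and alert fields are placeholders.

```python
# Illustrative sketch only: a minimal LLM-assisted first pass over a SIEM alert.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; model, prompt, and alert fields are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_alert(alert: dict) -> dict:
    """Ask an LLM for a first-pass verdict on a single SIEM alert."""
    prompt = (
        "You are a SOC analyst. Review the alert below and reply with JSON "
        'containing "verdict" ("true_positive", "false_positive", or '
        '"needs_investigation"), "confidence" (0-1), and "reasoning".\n\n'
        f"Alert:\n{json.dumps(alert, indent=2)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


# Example: a hypothetical phishing alert raised by an email gateway.
alert = {
    "rule": "Suspicious inbound email with credential-harvesting link",
    "sender": "accounts@examp1e-payments.com",
    "recipient": "finance@victim.example",
    "url": "hxxp://examp1e-payments.com/login",
}
print(triage_alert(alert))
```

Even a naive pass like this shows the appeal: a structured verdict in seconds instead of minutes of manual sifting per alert. But a single prompt is nothing like a full investigation, which needs to gather evidence from many sources and reason over it.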
That’s where a tool like Qevlar AI comes in:
Reports (see below) make it easy for analysts to review and validate end-to-end investigations, and to dig deeper into all of the sources and insights that inform the conclusion that the alert is, in fact, a TRUE POSITIVE.
Here’s a full list of the actions that were taken in this hypothetical investigation. Remember, each action is chosen autonomously, with no human input, based on the findings from the previous one:
Based on the investigation, Qevlar AI also suggests personalized next steps. In this case, analysts were prompted to:
Qevlar AI acts as an invaluable extension of your SOC team, leveraging the power of LLMs to process large and variable security data streams and perform autonomous, detailed investigations. Our advanced AI models are trained on proprietary and public data, and are fine-tuned and re-trained for continuous improvement.
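For readers who want to see the general idea in code, here’s a toy sketch of a findings-driven investigation loop, where each step is chosen from the evidence gathered so far. Again, this is a hypothetical illustration, not Qevlar AI’s actual architecture: the tool names are invented, the tools themselves are stubbed, and the OpenAI client and model choice are assumptions made for the example.

```python
# Illustrative sketch of an autonomous investigation loop: each step is chosen
# from the findings gathered so far. Tool names and results are stand-ins.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical investigation tools the agent can call (stubbed for the sketch).
TOOLS = {
    "lookup_domain_reputation": lambda ctx: {"domain_age_days": 3, "reputation": "malicious"},
    "inspect_email_headers": lambda ctx: {"spf": "fail", "dkim": "fail", "reply_to_mismatch": True},
    "check_recipient_activity": lambda ctx: {"clicked_link": False, "credentials_submitted": False},
}


def choose_next_step(findings: list) -> dict:
    """Ask the model to pick the next tool (or conclude) given findings so far."""
    prompt = (
        "You are investigating a phishing alert. Available tools: "
        f"{list(TOOLS)}. Findings so far: {json.dumps(findings)}. "
        'Reply with JSON: {"action": "<tool name or conclude>", "verdict": "<only if concluding>"}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


def investigate(alert: dict, max_steps: int = 5) -> list:
    """Run a short, autonomous investigation and return the trail of findings."""
    findings = [{"action": "initial_alert", "result": alert}]
    for _ in range(max_steps):
        step = choose_next_step(findings)
        if step.get("action") == "conclude" or step.get("action") not in TOOLS:
            findings.append({"action": "conclude", "result": step})
            break
        result = TOOLS[step["action"]](findings)  # run the chosen tool
        findings.append({"action": step["action"], "result": result})
    return findings
```

The key design point is the loop itself: instead of following a fixed playbook, the next action is selected from whatever the previous actions turned up, which is what lets the investigation adapt to each alert.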
Want to see the platform in action? Book a demo now.