INTEGRATIVE INSIGHTS ON EMERGING OPPORTUNITIES
Integrative research means our extensive company research informs every thesis and perspective. The result is deep industry knowledge, expertise, and trend insights that yield valuable results for our partners and clients.
- We reflect on both the frustrations and the successes we’ve heard regarding AI in cybersecurity.
- The biggest hope for AI in cybersecurity is prevention: detecting and potentially blocking novel attacks before they can cause damage. So far, however, these solutions have not been the silver bullet many hoped for.
- In the near term, we believe it will be challenging to use AI to detect zero-day and novel threats, given the unpredictable nature of these attacks and the difficulty AI has in distinguishing harmless anomalies from true threats. Efforts to address these shortcomings with information transparency also face challenges.
- We’re seeing the most success among solutions that use AI to improve cybersecurity teams’ ability to interact with traditional cybersecurity approaches. By bridging the gap between technical complexity and human understanding, these AI solutions streamline security operations centers, enabling teams to be more efficient and effective.
TABLE OF CONTENTS
Frustrations along with some successes
Detecting and blocking novel attacks – the challenge of false alerts
Potential for transparent AI detection solutions
The bottom line for AI detection: Hybrid approaches first
AI success in cybersecurity: Enhancing traditional cybersecurity solutions
AI can be a key partner in cybersecurity efforts
Cybersecurity index: Volatile summer, but gains since September
Cybersecurity M&A: Notable transactions include SecureWorks, Dazz, and Fend
Cybersecurity private placements: Notable transactions include Armis and Upwind
Frustrations along with some successes
AI has had a dramatic effect on the cybersecurity industry in the past year. In this report, we reflect on both the frustrations and the successes we’ve heard about in the market. On the frustration side, the biggest hope for AI in cybersecurity is prevention: detecting and potentially blocking novel attacks before they can cause damage. Such solutions, however, have not been the silver bullet they were hoped to be. Among the successes are more mundane AI capabilities, such as using large language models (LLMs) to query data and enhance explanations. These solutions have received less hype than AI detection capabilities, but they are the most impactful use of AI we’ve seen to date, and we believe they have the potential to transform security operations in relatively short order.
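As one concrete illustration of the “query data” use case, the minimal sketch below shows how an LLM might translate an analyst’s plain-English question into a query over an alerts table. This is our own illustration rather than a description of any product covered in this report; the model name, table schema, and prompt are assumptions.

```python
# Minimal sketch: using an LLM to turn an analyst's question into SQL.
# The model name, schema, and prompt are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = "alerts(id, timestamp, source_ip, dest_ip, severity, rule_name, disposition)"

def question_to_sql(question: str) -> str:
    """Ask the model to translate a natural-language question into a single SQL query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("Translate the user's question into one SQL query against "
                         f"this table: {SCHEMA}. Return only the SQL.")},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(question_to_sql("Which source IPs triggered critical alerts in the last 24 hours?"))
```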
Detecting and blocking novel attacks – the challenge of false alerts
One of the most challenging aspects of defending organizations against cyberattacks is identifying and stopping attacks quickly enough to prevent or minimize damage. This is difficult for known vulnerabilities and attack vectors; it is even more difficult for zero-day vulnerabilities and novel attack vectors, which, by definition, have not been seen before and therefore cannot be identified or stopped with widely available signatures and rule updates. With AI, however, organizations are successfully detecting and blocking even novel attacks, because AI excels at quickly identifying anomalous behavior, unusual data traffic patterns, and other suspicious activity. We have heard numerous examples of zero-day threats found and mitigated with AI.
However, there are two related drawbacks. The first is false positives. AI detects threats that legacy methods would have missed, but it also flags many harmless activities and patterns as threats. False positives are not a new problem in cybersecurity solutions, and some AI enthusiasts contend that false-positive alerts (alerting cybersecurity personnel to harmless actions) are valuable because unusual activity is noteworthy regardless of whether its cause is malicious. Our conversations, however, indicate this is a minority view.
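To make the false-positive trade-off concrete, the sketch below (our own illustration, not drawn from any vendor discussed in this report) trains a simple anomaly detector on synthetic “benign” traffic and shows how loosening the alerting threshold flags more harmless flows for analysts while still catching the unusual ones. The library, features, and figures are assumptions chosen for illustration.

```python
# Minimal sketch of the detection/false-positive trade-off in anomaly detection.
# All data and parameters are synthetic, illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical benign traffic: two features, e.g. bytes/sec and connections/min.
benign = rng.normal(loc=[500, 20], scale=[50, 5], size=(10_000, 2))

# A handful of "novel attack" flows that look unlike the benign baseline.
attacks = rng.normal(loc=[5_000, 200], scale=[500, 20], size=(10, 2))

# 'contamination' is the share of traffic the detector is told to treat as anomalous.
for contamination in (0.001, 0.01, 0.05):
    model = IsolationForest(contamination=contamination, random_state=0).fit(benign)
    benign_flags = int((model.predict(benign) == -1).sum())   # false positives
    attack_flags = int((model.predict(attacks) == -1).sum())  # true detections
    print(f"threshold {contamination}: {attack_flags}/10 attacks caught, "
          f"{benign_flags} benign flows flagged for analysts")
```

Even in this toy setup, the volume of benign traffic routed to analysts scales directly with how aggressively the detector alerts, which is the drawback described above.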