For truly intelligent cyber security, pair AI with humans

Originally posted in Techerati on July 1, 2019.

It’s the human plus technology combination that can truly tackle today’s spectrum of cyber threats, writes Alex Jinivizian, VP of strategy, eSentire

AI is all the rage, touted as the next big thing thanks to the dramatic gains in speed, automation and efficiency promised by projects that implement it. But the practical benefits of AI, whether significantly accelerating manufacturing supply chains, transforming industries such as automotive with self-driving cars, or, more controversially, enabling autonomous weapons in the defence sector, have yet to fully materialise.

The perfect storm – cyber security and AI

In relation to cyber security, AI has become the buzzword and supposed solution for combating a rapidly evolving threat landscape. The buzz is accentuated by perfect-storm conditions: traditional perimeter security is being displaced by the consumerisation of IT and an ever-expanding IT ecosystem.

With a more distributed IT environment, breaches are increasing as hackers use more sophisticated techniques across a wider attack surface. Additionally, the demand for skilled cyber security professionals far outstrips supply, with some independent bodies citing a current skills gap of almost 3 million personnel worldwide and projecting a shortfall of over 350,000 in the European Union alone by 2022.

As IBM’s Security Intelligence underlines, security analysts are overworked, understaffed and overwhelmed. Quite simply, it’s not humanly possible to keep up with the ever-expanding threat landscape, especially the day-to-day tasks of running a Security Operations Centre (SOC).

Level-setting the silver bullet

Frankly, it’s not surprising that industry executives are keen for a magic bullet, but there is evidence of a significant gap between perceptions of what AI techniques can solve and the reality of their current limitations. According to a recent ESET white paper, 82 percent of a sample of IT decision makers in the U.S. (and 67 percent in the U.K.) believe that AI and machine learning (ML) are the silver bullet for solving their organisations’ cyber security challenges.

Yet the same base of respondents in the U.S. (65 percent) and the U.K. (53 percent) agreed that discussions around AI and ML are “hype.”

There’s further confusion over terminology, as just 53 percent of IT decision makers said their company fully understands the differences between the terms AI and ML.

Another Reuters study into U.K. media coverage of AI goes wider than cyber security, claiming AI has been “amplified by industry self-interest.” Furthermore, the investor community is showing signs of becoming exasperated by pitches where products in almost every area are being “reinvented” with AI at their core. The consensus is that clarity is required around marketing claims made by security and other “next-generation” vendors.

This is particularly the case as the threat landscape becomes ever more complex to navigate and hackers use increasingly sophisticated technical mechanisms in their quest to gain access to company networks. The hype surrounding AI and ML as the silver bullet for cyber security muddles the message for those deciding how best to secure their company’s networks and data.

Why people matter

It’s important to distinguish between the categories that get swept up in all-encompassing AI terminology: AI itself, data science, machine learning and deep learning are related but distinct, each with its own nuances. It’s a complicated area whose practitioners generally come from academia and research institutions, so it’s not surprising that the terms are often used interchangeably and subject to misinterpretation.

As regards cyber security, machines far exceed humans at certain tasks, such as log collection, monitoring and microscopic alert comparisons. However, the biggest current limitation of AI is that algorithms are only as good as the humans who designed them, and AI should not be considered an adequate replacement for human oversight, at least not in the immediate future.
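
To make that division of labour concrete, here is a minimal, purely illustrative Python sketch of the kind of rote, high-volume task a machine handles tirelessly: deduplicating a stream of alerts and flagging any that match a known-bad indicator. The field names and the indicator list are invented for the example and do not reflect any particular vendor’s tooling.

```python
# Purely illustrative: deduplicate raw alerts and flag matches against known-bad
# indicators. Field names and the indicator set are invented for this example.
from collections import Counter

KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder hash
}

def triage(alerts):
    """Return (flagged alerts, duplicate counts) from a stream of raw alerts."""
    seen, flagged, duplicates = set(), [], Counter()
    for alert in alerts:
        key = (alert["host"], alert["sha256"])
        if key in seen:
            duplicates[key] += 1          # machines never tire of this comparison
            continue
        seen.add(key)
        if alert["sha256"] in KNOWN_BAD_HASHES:
            flagged.append(alert)         # only these reach a human analyst
    return flagged, duplicates

sample = [
    {"host": "ws-01", "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    {"host": "ws-01", "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    {"host": "ws-02", "sha256": "0000000000000000000000000000000000000000000000000000000000000000"},
]
hits, dupes = triage(sample)
print(f"{len(hits)} alert(s) escalated, {sum(dupes.values())} duplicate(s) suppressed")
```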

As the head of Google’s cloud AI business recently stated, “AI is about using math to make machines make really good decisions. At the moment it has nothing to do with simulating real human intelligence.” Some businesses, too, are challenging the hype and experiencing the equivalent of alert fatigue with AI system notifications.

Human expertise at machine scale

In 2018 alone, eSentire’s SOC dealt with over 2 billion raw signals that bypassed traditional security controls, across more than 650 customers worldwide. Recently, our security analysts relied on runbook automation and ML-powered advanced threat analytics to detect an adversary leveraging an unknown exploit in Kaseya’s Virtual System Administrator (VSA) product to deploy crypto miners across the infrastructure of a small number of eSentire customers.
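
In spirit, that kind of detection can come down to a simple behavioural rule. The sketch below is a hypothetical Python heuristic, not eSentire’s actual runbook logic: it flags cases where a remote-management agent spawns a child process outside a small allowlist, the pattern typically seen when such tools are abused to drop crypto miners. The process names and the allowlist are placeholders.

```python
# Hypothetical behavioural rule, not eSentire's actual runbook logic.
# Flags a remote-management agent spawning a child process outside a small allowlist.
AGENT_PROCESS = "agent.exe"                         # placeholder name for the RMM agent
EXPECTED_CHILDREN = {"cmd.exe", "powershell.exe"}   # assumed baseline for this example

def suspicious_spawns(process_events):
    """Yield process-creation events where the agent spawns an unexpected child."""
    for event in process_events:
        if (event["parent"].lower() == AGENT_PROCESS
                and event["child"].lower() not in EXPECTED_CHILDREN):
            yield event

events = [
    {"host": "srv-01", "parent": "agent.exe", "child": "cmd.exe"},
    {"host": "srv-02", "parent": "agent.exe", "child": "xmrig.exe"},  # typical miner binary
]
for hit in suspicious_spawns(events):
    print("investigate:", hit["host"], hit["child"])
```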

ML techniques help security analysts absorb that complexity, eliminating false positives and identifying real, sophisticated attacks. An ML-enabled system also supports threat containment and prevention, and allows SOC analysts to respond quickly to incidents that require immediate action and client intervention.
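
As a rough illustration of that triage role, the sketch below, which assumes scikit-learn and uses toy features and labels rather than any real model or telemetry, trains a simple classifier to score alerts so that likely incidents are escalated to an analyst while probable false positives are queued.

```python
# Minimal sketch, assuming scikit-learn is installed. The features and labels are toy
# placeholders, not real telemetry or any vendor's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per alert: [events_per_minute, distinct_hosts, matched_signatures]
X_train = np.array([[5, 1, 0], [300, 40, 3], [8, 2, 0], [450, 60, 5]])
y_train = np.array([0, 1, 0, 1])   # 0 = false positive, 1 = confirmed incident

model = LogisticRegression().fit(X_train, y_train)

new_alerts = np.array([[250, 35, 2], [6, 1, 0]])
scores = model.predict_proba(new_alerts)[:, 1]   # probability the alert is a real incident
for features, score in zip(new_alerts.tolist(), scores):
    action = "escalate to analyst" if score > 0.5 else "suppress / queue for review"
    print(f"alert {features}: score={score:.2f} -> {action}")
```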

AI and ML techniques continue to evolve and will become an increasingly critical part of any software or product-based solution aimed at threat prevention. They will increasingly equip SOC analysts with the most relevant tools and information, but it is the fundamental human element that detects and responds to threats and liaises with customers to ensure their networks are never compromised. It’s the human plus technology combination that can truly deal with the full spectrum of threats within the evolving security landscape.