This blog post serves as a preview to an Infosecurity Europe tech talk that will be presented on Wednesday, June 5, 2019.
One of the greatest challenges security operations center (SOC) teams face is the high volume of daily alerts about suspicious files and endpoints that they must investigate. A lot has already been written about the "needle in a haystack" problem: SOC analysts face so many alerts that they are likely to miss the real threat (the needle) hidden among the false positives (the haystack).
According to research published by the Ponemon Institute and reported in a Network World article, only four percent of alerts are actually investigated. It is nearly impossible to analyze every single alert coming through an organization due to time constraints and limited resources. As a result, many alerts remain uninvestigated. This stands in contrast to the zero-trust approach, which assumes every alert might indicate a breach and therefore must be investigated thoroughly.
Sophisticated attackers understand that defenders have many alerts to investigate. Many successful cyber attacks are the result of security teams missing an alert because the alert "haystack" was too large. In addition, advanced attacks may be hidden behind vague alerts such as "suspicious behavior," which are often missed or ignored. This further supports the notion that no alert should be left uninvestigated.
For that reason, it is important for security teams to embrace a zero-trust approach when investigating alerts and not to settle for investigating only a handful of them. Several commonly adopted solutions are helping security teams address a greater number of alerts, which I will divide into two categories: alert reduction and response automation. When integrated properly, alert reduction and response automation help defenders shrink the greater "haystack", in other words, lower the number of alerts that need to be investigated.
Alert reduction means decreasing the number of alerts that must be investigated. This is done by clustering alerts and correlating events, condensing thousands of log entries into hundreds of alerts, and providing a better interface on top of the SIEM. This makes the investigation process more convenient by reducing the number of alerts and providing greater context for each one. The next step is to investigate the clustered alerts (fewer than the original volume), obtain more data, analyze, and then respond.
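As a rough illustration of the clustering step, the sketch below groups raw events that share a host and detection rule into a single clustered alert with added context. The field names (`host`, `rule`, `timestamp`) are hypothetical, not the schema of any particular SIEM:

```python
from collections import defaultdict

def cluster_alerts(alerts):
    """Collapse raw alerts that share a source host and rule name into
    one clustered alert, preserving count and time range as context."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[(alert["host"], alert["rule"])].append(alert)
    return [
        {
            "host": host,
            "rule": rule,
            "count": len(events),
            "first_seen": min(e["timestamp"] for e in events),
            "last_seen": max(e["timestamp"] for e in events),
        }
        for (host, rule), events in clusters.items()
    ]

# Three raw events collapse into two clustered alerts.
raw = [
    {"host": "ws-12", "rule": "suspicious_powershell", "timestamp": 100},
    {"host": "ws-12", "rule": "suspicious_powershell", "timestamp": 140},
    {"host": "srv-01", "rule": "new_service_installed", "timestamp": 120},
]
clustered = cluster_alerts(raw)
```

Real-world correlation engines use far richer keys (process lineage, user, time windows), but the principle is the same: fewer items to triage, each carrying more context.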
Response automation is used to analyze the clustered alerts. It allows security teams to create workflows and automate the majority of alert handling. For example, once a suspicious file is detected, response automation can automatically check it against hash-lookup databases or IOC repositories.
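A minimal sketch of that hash-lookup step might look like the following. The IOC set here is a stand-in for a real hash repository, and the verdict labels are illustrative:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """SHA-256 digest of a file's contents, the usual lookup key for IOC feeds."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical local mirror of an IOC hash repository (illustrative sample only).
sample_malicious = b"payload captured in a previous incident"
KNOWN_BAD_HASHES = {sha256_of(sample_malicious)}

def triage_file(data: bytes) -> str:
    """Automated first-pass verdict for a suspicious file."""
    if sha256_of(data) in KNOWN_BAD_HASHES:
        return "known-malicious"
    return "unknown"  # queue for deeper (manual or automated) analysis

print(triage_file(sample_malicious))    # known-malicious
print(triage_file(b"benign document"))  # unknown
```

Note the limitation this section goes on to discuss: anything that comes back "unknown" still lands on a human analyst's desk.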
While alert reduction and response automation can be helpful for reducing the workload of investigating such a large volume of alerts, the challenge remains that in most cases a technical analyst is needed to manually analyze the suspicious alerts. Some tasks, unfortunately, are very difficult to automate. Reverse engineering, for example, is typically not a task that can be performed automatically and at scale. The same applies to memory analysis. A team of malware analysts is required to perform a deeper analysis of alerts in order to better understand and classify them. For example: "Is the alert a false positive?", "If the alert is a threat, what type of threat is it?", "What is the intent and sophistication level of the threat?", and in some cases, "Who is the threat actor behind the attack?". Unfortunately, the majority of organizations do not have access to a large, dedicated team of reverse engineers who can provide a deeper analysis of every single suspicious alert.
In an ideal world, we would deeply analyze every single alert.
For example, if there is an alert on a specific endpoint within an organization, we would want to analyze every file in the file system and every single module in memory, in other words, executing a deep investigation of the machine as a whole.
Imagine that you have a team consisting of several dozen reverse engineers, who can analyze every file and every memory image of any suspicious endpoint that has been flagged by your security systems. This team of experts would be able to answer the most relevant questions about each and every alert that you encounter:
1. Are we facing a real attack?
2. Which type of attack are we dealing with? Is it adware or an APT, for example? Knowing this helps gauge the risk of the attack.
3. What is the intent of the attacker? For example, is the threat ransomware or a banking trojan?
4. Is the threat related to an incident that you had previously? This is important for building greater context around a malware campaign.
5. Which machines are infected and what is the scale of the attack?
6. How do we contain and remediate the threat?
This scenario, where experts are available to analyze every alert, is ideal.
However, transitioning back to reality, how is this scenario achievable? How can we as security professionals automate the skills of experienced reverse engineers into our day-to-day security operations, to automatically and deeply investigate every alert that comes through our SIEMs?
The above graphic is a mapping of code reuse and code similarities across various open source projects. Legitimate projects consistently reuse code from other projects.
Now let’s take a look at the following graphic:
Every node in the layout represents a malware campaign associated with North Korea. There are code similarities between the WannaCry ransomware (2017), the Sony Pictures attack from 2014, and the SWIFT Bangladesh Bank attack from 2016.
We see this common phenomenon of code reuse in both trusted and malicious software. Reusing code is a very logical progression for developers and is not inherently malicious: it makes developers' lives more convenient and brings tools to market faster.
As defenders, how can we use the concept of code reuse to our own benefit?
Applying this genetic, code-reuse approach to malware analysis means identifying code similarities to known trusted and malicious software, in order to detect threats and reduce false positives. Even if an attacker decides to write most of his or her code from scratch, as long as some portions of the code are reused from previous malware (which we see in almost all cases), defenders will be able to detect any future variant, regardless of the evasion techniques deployed. The same principle applies to code seen in legitimate, trusted applications, which can be used to identify false positives and in turn reduce the overall "alert haystack".
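To make the idea concrete, here is a toy sketch of code-reuse detection: split each binary into overlapping byte chunks ("genes") and score overlap with a known sample. This is a simplification for illustration, not Intezer's actual engine, and the sample byte strings are invented:

```python
def genes(code: bytes, n: int = 8) -> set:
    """Split a blob of code into overlapping n-byte chunks ('genes')."""
    return {code[i:i + n] for i in range(len(code) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity over shared genes: 1.0 means identical code."""
    ga, gb = genes(a), genes(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Invented stand-ins: a known malware sample, a new variant that reuses
# most of its code, and an unrelated trusted program.
known_malware = b"push ebp; mov ebp, esp; call decrypt_payload; jmp inject"
new_variant   = b"push ebp; mov ebp, esp; call decrypt_payload; jmp persist"
unrelated     = b"def main(): print('hello world')  # trusted utility"

# The variant shares far more genes with the known sample than the
# unrelated code does, so it is flagged despite never being seen before.
assert similarity(known_malware, new_variant) > similarity(known_malware, unrelated)
```

Production systems would extract genes from disassembled functions rather than raw bytes, and match them against large indexed corpora of trusted and malicious code, but the detection logic follows this same shape.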
While alert reduction and response automation solutions help to cluster and reduce the number of alerts that must be investigated, those alerts still require a deep level of analysis. Organizations cannot settle for investigating only a handful of alerts, because they run the risk of incidents slipping through the cracks. SOC and IR teams can improve their efficiency by leveraging automation technologies in tandem with analysis technologies, in order to investigate every single alert quickly and garner relevant context that will help them prioritize, tailor, and accelerate their response.
The ultimate goal of code similarity analysis (or, “Genetic Malware Analysis”) is to automate the deep analysis and reverse engineering processes, in order to move closer to the ideal world that I described earlier, where organizations can accurately analyze every single alert quickly and automatically, and respond with greater confidence to a higher volume of alerts.
The post A Straw-by-Straw Analysis: The Zero-Trust Approach for your Alert Haystack appeared first on Intezer.