My copy of "Forensic Discovery"
considering getting into the field. As such, I thought it might be useful to share my view of the core, foundational principles of DFIR, those basic principles I return to again and again during investigations, and over the course of my career. For me, these principles were developed initially through a process of self-education, reading all I could from those who really stood out in the industry. For example, consider the figure to the right...this is what pages 4 and 5 of my copy of Forensic Discovery by Farmer and Venema look like. The rest of the pages aren't much different. I also have a copy of Eoghan Casey's Handbook of Digital Forensics and Investigations, which is in similar "condition", as are several other books, including my own.
The thing we have to remember about core principles is that they don't change over time; Forensic Discovery was published in 2005, and Casey's Handbook five years later. But those principles haven't changed just because the Windows operating system has evolved, or because new devices have been created. In fact, if you look at the index of Farmer and Venema's book, the word "Windows" never appears. My last book was published in 2018, and the first image covered in the book was Windows XP; however, neither of those facts invalidates the value of the book, as it addresses and presents the analytic process, which, at its root, doesn't significantly change.
The principles I'm going to share here do not replace those items discussed through other media; not at all. In fact, these principles depend on and expand those topics presented in other books.
The first thing you have to understand about computer systems is that nothing happens on a computer system without something causing it to happen; that is, everything we observe on a system is the result of some action.
I know this sounds rudimentary, and I apologize if it sounds overly simplified, but over the course of my career (spanning more than two decades at this point) in various roles in DFIR, one of the biggest obstacles I've encountered when discussing a response with other analysts is the assumption that things just happen, for no reason. They don't. Yes, it's entirely possible that any given, random bit on a hard drive may change state due to a fluctuation of some kind, but when it comes to a field in an MFT record (a file marked deleted vs. in use) or a Registry value changing state (from 1 to 0, or the reverse), these things do not simply happen by themselves.
Let's say, for example, that a SOC analyst receives an alert that the "UseLogonCredential" value has been set to "1". This is a pretty good detection, indicating that something bad has already happened, and that something bad is likely to happen in the very near future, as well. However, this does not just happen...someone needs to access the system (via keyboard or remotely) with the appropriate level of privileges, and then needs to run an application (RegEdit, reg.exe, or another program that accesses the appropriate API functions) in order to make the change.
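As a minimal sketch of that detection logic, consider the function below. The event format (a simple dict) is an illustrative assumption, not any particular EDR's schema; the key path is the well-known WDigest key where this value lives.

```python
# Hypothetical sketch: matching a registry-change event against the
# "UseLogonCredential" detection described above. The dict-based event
# format is an assumption for illustration only.

WDIGEST_KEY = r"HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

def is_wdigest_downgrade(event: dict) -> bool:
    """Return True if the event sets UseLogonCredential to 1."""
    return (
        event.get("key", "").lower() == WDIGEST_KEY.lower()
        and event.get("value") == "UseLogonCredential"
        and event.get("data") == 1
    )

# A change like this doesn't "just happen" -- someone with sufficient
# privileges ran RegEdit, reg.exe, or code calling the registry APIs.
alert = {"key": WDIGEST_KEY, "value": "UseLogonCredential", "data": 1}
print(is_wdigest_downgrade(alert))  # True
```

The point isn't the code itself, but the chain behind the alert: for this function to ever return True, some action had to occur on the endpoint first.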
Locard's Exchange Principle is one of Chris Pogue's favorites, to the point where he discusses it in his courses at OSU! This principle states that when two objects come into contact with each other, material is exchanged between them. This applies to the digital realm, as well; when two computers come into "contact", "material" or data regarding the connection and interaction is exchanged between them. Some of this data may be extremely transient, but due to advances in usability functionality added to operating systems and applications, the fossilization of this data begins pretty quickly. That is to say that some of these artifacts are "stored" or logged, and those log entries can exist for varying amounts of time. For example, a record written to the Security Event Log may be overwritten within a few days (or even hours, depending upon the audit configuration and activity on the endpoint), but records written to other Windows Event Logs may exist for years without the risk of being overwritten. Evidence of activity may be written to the Registry, where it may exist until explicitly removed.
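A quick back-of-the-envelope calculation shows why Security Event Log retention varies so widely: with a fixed maximum log size, retention depends on record size and how "busy" the endpoint is. All of the numbers below are illustrative assumptions, not measurements.

```python
# Rough sketch: days until the oldest record in a wrapping event log
# is overwritten. Inputs are assumed, illustrative values only.

def retention_days(max_log_mb: float, avg_record_bytes: int,
                   records_per_hour: int) -> float:
    """Estimate days of retention, assuming the log overwrites its
    oldest events once it reaches its maximum size."""
    capacity = (max_log_mb * 1024 * 1024) / avg_record_bytes  # records held
    return capacity / records_per_hour / 24

# A relatively quiet endpoint vs. a very busy one (assumed rates):
print(round(retention_days(20, 1024, 500), 1))     # 1.7  (days)
print(round(retention_days(20, 1024, 10000), 1))   # 0.1  (a couple of hours)
```

The same 20 MB log holds days of history on one endpoint and only hours on another, which is exactly why "depending upon the audit configuration and activity on the endpoint" matters so much during response.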
But the point of this principle is that as a user or threat actor interacts with an endpoint, some artifact of that activity will be created, and it may continue to exist for a significant period of time.
This brings us to the third principle, direct vs indirect artifacts. This is something of a reiteration of section 1.7 (Archeology vs Geology) of Farmer & Venema's book; table 1.3 at the bottom of pg 13 essentially says the same thing. However, this principle needs to be extended to address more modern operating systems and applications; that is, when something happens on an endpoint...when a program is executed, or when a user or threat actor interacts with the endpoint in some way, there are artifacts that are created as a direct result of that interaction. For example, a threat actor may copy a file over to the endpoint, writing it to the file system. Then they may execute that program, redirecting the output to a file, again writing to the file system.
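To make "direct artifact" concrete, here's a minimal sketch: writing a file to disk, as in the example above, directly creates file system metadata (the file itself, its size, its timestamps). The file name and content are made up for illustration.

```python
# Minimal illustration of a "direct" artifact: the act of writing a
# file directly produces observable file system metadata.
import os
import tempfile
import time

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "output.txt")        # hypothetical redirected output
    with open(path, "w", newline="\n") as f:
        f.write("redirected program output\n")  # the action itself
    st = os.stat(path)                          # the direct artifacts:
    print(st.st_size)                           # 26 -- file size in bytes
    print(st.st_mtime <= time.time())           # True -- last-modified time set
```

Every one of those metadata fields exists only because the write happened; nothing about them appeared on its own.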
Think of this as a video camera pointed directly at the "scene of the crime", recording direct interactions between the threat actor and the target victim.
There are also "indirect" artifacts, which are those artifacts created as a result of the program or threat actor interacting with the ecosystem or "environment".
A great way to think of indirect artifacts is having video cameras near the scene of a crime, but not pointed directly at the scene itself. There may be a video camera across the street or around the corner, pointed in a different direction, but it captures video of the threat actor arriving in a car, and then leaving several minutes later. You may notice that the back seat of the car seems to be fuller than when it arrived, or the end of the car near the trunk (or "boot") may be lower to the ground, but you do not see exactly which actions occurred that resulted in these apparent changes.
A great thing about both direct and indirect artifacts is "fossilization", something mentioned earlier, and to be honest, borrowed from Farmer and Venema. Everything that happens on an endpoint leaves some artifact, and in a great many cases, these artifacts are extremely transient. Simply put, depending upon where those artifacts exist in the order of volatility, they may only exist for a very short period of time. In their book, Farmer and Venema discussed "fossilization" specifically in the context of deleted files on *nix-based file systems. Operating systems have grown and evolved since the book was published, and a great many usability features have been added to operating systems and applications, significantly extending this fossilization. As such, while direct artifacts of user or threat actor interaction with an endpoint may not persist for long, fossilization may lead to indirect artifacts existing for days, months, or even years.
For example, let's say a threat actor connects to an endpoint; at that point, there is likely some process in memory, which may not exist for long. That process memory will be allocated, used, and then freed for later use, and given how "noisy" Windows systems are, even when apparently idle, that memory may be reused quickly. However, direct artifacts from the connection will very often be logged, depending upon the means and type of access, the audit and logging configuration of the endpoint, etc. If this process results in the threat actor interacting with the endpoint in some way, direct and indirect artifacts will be logged or "fossilized" on the endpoint, and depending upon the configuration, use, and subsequent interaction with the endpoint, those fossilized artifacts may exist for an extended period of time, even years.
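The progression described above can be sketched as an order of volatility, in the spirit of Farmer and Venema. The lifetimes below are rough, order-of-magnitude illustrations for a typical endpoint, not measurements; actual values depend entirely on configuration and activity.

```python
# Illustrative sketch of the order of volatility: most-volatile first.
# Lifetimes (in seconds) are assumed orders of magnitude, not data.

ORDER_OF_VOLATILITY = [
    ("CPU registers, cache",       1e-9),            # nanoseconds
    ("process memory",             10.0),            # freed and reused quickly
    ("network connection state",   60.0),            # tens of seconds
    ("Security Event Log records", 86400 * 3),       # days, wraps when full
    ("other Windows Event Logs",   86400 * 365),     # may persist for years
    ("Registry values",            86400 * 365 * 5), # until explicitly removed
]

# Triage most-volatile-first: collect what disappears soonest.
for artifact, _lifetime in sorted(ORDER_OF_VOLATILITY, key=lambda t: t[1]):
    print(artifact)
```

The transient process memory from the connection sits near the top of this list; the fossilized log and Registry artifacts of the same activity sit near the bottom, which is why they may still be there years later.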
Article Link: Windows Incident Response: DFIR Core Principles