TRITON – A Post-Mortem Analysis of the Latest OT Attack Framework

Executive Summary

  • The TRITON ICS cyberattack exhibited an entirely new level of Stuxnet-like sophistication.
  • The attackers exploited a zero-day in the PLC firmware in order to inject a Remote Access Trojan (RAT) with escalated privileges into the firmware memory region of the controller — without interrupting its normal operation and without being detected.
  • We believe that the purpose of the RAT was to enable persistent access to the controller, even when the physical memory-protection key was turned to RUN mode — which was designed to prevent unauthorized updates to the PLC code — rather than PROGRAM mode.
  • TRITON exposed yet another breed of ICS systems that attackers can now target to compromise industrial operations, the physical safety control systems – or Safety Instrumented Systems (SIS) — that provide automatic emergency shutdown of plant processes, such as an oil refinery process that exceeds safe temperatures or pressures.
  • The likely intent of such an approach would be to disable the safety system in order to lay the groundwork for a second cyberattack that would cause catastrophic damage to the facility itself, potentially causing large-scale environmental damage and loss of human life.
  • Although TRITON was a targeted attack specifically designed to compromise a particular model and firmware revision level of SIS devices manufactured by Schneider Electric, the tradecraft exhibited by the attackers is now available to other adversaries — who can quickly learn from it to design similar malware attacking a broader range of environments and controller types.
  • In this analysis, we examine how the malware works and, at the end of the post, provide Snort signatures for detecting it.
    Overview

    After completing our analysis, we suspect the TRITON malware itself is a small part of a larger attack. We found no usage directives in the TRITON code itself. Rather, its primary goal appears to be to give attackers a means to execute remote code on the infiltrated device, opening the door to future deployment of malicious components.

    Here is how we arrived at this conclusion.


    trilog.exe -> the main executable: a py2exe-compiled Python script containing all the required libraries, including the TriStation communication libraries

    inject.bin -> [missing file]: probably responsible for placing imain.bin in the right place

    imain.bin -> the main backdoor

    Operationally, the above components indicate the next step is to deliver the initial payload and then confirm its viability to attack the device. Once confirmed, the malware loads the injector and the main backdoor and, finally, covers its tracks.

    [Figure: TRITON program flow]

    Safe Append Program – What it tells us

    Notably, we can see that none of the payloads uploaded to the device actually rewrite the original program, but simply append to it. This is important, because it means the attackers took some effort to ensure the original program continues to run as usual, without interruption. They did not want to harm the device, but rather just inject a backdoor for future use.

    Here is how they accomplished the append:

    First, the malware runs several checks, including (but not limited to) the following:

    • Is the physical key in PROGRAM state?
    • Is the current program valid?
    • Is the current program running without exceptions?

    Next, it retrieves the program table: it uploads (pulls) all the Ladder Logic from the device, constructs a linked list of the Ladder Logic programs, and checks it for validity.

    The first 2 bytes of each program's data contain the ID of the next program; a next ID of 0 marks the end of the list. The code makes sure the entire list is present and accounted for.

    Now, because the malware knows how many programs there are, it can upload the last program in the list.
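The program-table walk described above can be sketched as follows. This is a minimal illustration, assuming the 2-byte next-program ID is little-endian; the actual record layout on the controller is not documented here.

```python
import struct

def walk_program_table(programs, first_id):
    """Follow the next-ID chain, verifying every program is present.

    `programs` maps program ID -> raw program data; the first 2 bytes of
    each program's data hold the ID of the next program, and a next ID
    of 0 marks the end of the list.
    """
    order = []
    current = first_id
    while current != 0:
        if current not in programs:
            # A missing link means the table failed the validity check.
            raise ValueError("program table broken: missing ID %d" % current)
        order.append(current)
        (current,) = struct.unpack_from("<H", programs[current], 0)
    return order
```

Once the full chain is recovered, the last entry in the returned list is the program the malware goes on to inspect.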

    From there, it inspects the last program and checks to see if the program is signed by the attacker. The signature validation looks like this:

    [CRC all program data except last 4 bytes] XOR [Last 4 bytes] == 0x7b5241d3

    Any program that contains this signature validation should be considered to be created by an attacker.

    If the last program is created by the attacker, any newly uploaded program will overwrite it. Otherwise a new program will be appended to the end. The attackers likely used this methodology to prevent multiple appended programs, which makes the attack quieter and decreases the potential for detection.
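The signature check above can be sketched like this. We assume a standard CRC-32 (as in zlib) and a little-endian 4-byte tail; neither detail is confirmed here, only the overall CRC-XOR-magic construction.

```python
import struct
import zlib

MAGIC = 0x7B5241D3  # constant from the malware's signature check

def is_attacker_program(program):
    """True if CRC(all but last 4 bytes) XOR (last 4 bytes) == MAGIC."""
    body, tail = program[:-4], program[-4:]
    (tail_value,) = struct.unpack("<I", tail)
    return (zlib.crc32(body) ^ tail_value) == MAGIC

def sign_program(body):
    """Append the 4-byte tail that makes the check above pass."""
    return body + struct.pack("<I", zlib.crc32(body) ^ MAGIC)
```

Note that the scheme gives the attackers a cheap way to recognize their own uploads: any 4-byte tail can be forged to satisfy the check, but a legitimate program will essentially never satisfy it by accident.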

    In terms of metadata, including ‘program name’ and ‘modification timestamp,’ all programs are identical. The attackers likely did this for the same reason: to keep the programs hidden, since programs with identical metadata are much harder to detect.

    First Payload

    While much of the information that follows has been documented before, our research team made a few additional discoveries that add to the technical understanding of the attack.

  1. The first code uploaded to the device has a relatively small task. Its sole purpose is to write something in memory and then check to see if it persisted in memory. This is done by the attackers to test their ability to upload and execute code.
  2. The code then scans the memory range from 0x800000 to 0x800100 and looks for the pattern: 40 ?? ?? ?? 60
  3. When the attackers find the pattern, they write a magic number to offset 0x18.
  4. Then the main Python script sends the GetCPStatus UDP packet. This returns some data about the project running on the controller, such as the project name and run state.
  5. The script looks for a specific field called fstat, a status field that is seldom used by the controller. If the status code matches the magic number, the script knows it can continually upload and execute code on the controller.
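The wildcard scan in step 2 can be sketched as follows. The memory buffer here stands in for reads from the controller; the base address 0x800000 and the pattern 40 ?? ?? ?? 60 come from the description above.

```python
def find_pattern(memory, base):
    """Return the absolute address of the first 40 ?? ?? ?? 60 match.

    `memory` is a dump of the scanned window; `base` is the address the
    window was read from. The three middle bytes are wildcards.
    """
    for i in range(len(memory) - 4):
        if memory[i] == 0x40 and memory[i + 4] == 0x60:
            return base + i
    return None  # pattern absent in this window
```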

[Figure: How the attackers check that they have read/write access to the controller memory]


Since the script already has the ability to write and execute code, why does it need this new code?

In our view, this is how the attackers ensure they have access to a device even when the device’s Physical Key has been switched to non-programming mode (RUN).

Otherwise there would be no reason for them to maintain this backdoor, since it is very easy to inject code into the device even without zero-days or other means of exploitation. This is one of the telltale features of TRITON.

Presence of alternate modules

We believe that another, separate module may be responsible for making regular, periodic attempts to insert the backdoor, or that another component monitors the industrial controller and figures out when it is possible to inject the backdoor. This component may be human activated, where someone physically turns a key and executes the script.

Of course, there is a separate module whose main purpose is to use the backdoor to gain wider access.

Main Payload

While this information has been documented before, we are presenting the backdoor commands and packets in greater detail.

Most of the samples related to TRITON were uploaded to VirusTotal, but one is missing: inject.bin. Even without it, we can still infer its functionality.

We assume the main purpose of inject.bin is to write imain.bin into the firmware code that handles and parses the network traffic for the GetMPStatus command. Because the firmware code is loaded into memory, it can be modified at runtime, although the modification will not persist after a reboot. The purpose of the injection is to ensure the attacker has an active backdoor on the device even when the physical key/switch is turned to non-programming mode.

Backdoor protocol

The code is designed as a backdoor that provides only the most basic functionality, which is enough for the attackers to extend it into anything they want, such as disabling it at a specified time.

The GetMPStatus packet looks like this:

[Standard Tricon packet headers][opcode][special identifier][data]

The backdoor uses the special identifier and the opcode to decide whether a packet is meant for it; if so, it starts processing the packet. Otherwise, the packet continues along to the default handler.
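The gatekeeping logic can be sketched as follows. The opcode values and the special identifier here are placeholders, not the actual constants used by TRITON; the point is only that a packet must carry both a recognized opcode and the attacker's identifier before the backdoor consumes it.

```python
SPECIAL_ID = 0x1234  # placeholder; the real identifier is attacker-chosen
BACKDOOR_HANDLERS = {  # placeholder opcodes for the three known commands
    0x01: "read_memory",
    0x02: "write_memory",
    0x03: "execute_code",
}

def dispatch(opcode, identifier):
    """Route a GetMPStatus packet to the backdoor or the default handler."""
    if identifier == SPECIAL_ID and opcode in BACKDOOR_HANDLERS:
        return BACKDOOR_HANDLERS[opcode]  # backdoor consumes the packet
    return "default_handler"  # unrecognized packets fall through untouched
```

Because unmatched traffic falls through to the stock handler, normal GetMPStatus queries keep working, which is what makes the hook so quiet.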

[Figure: How the backdoor protocol works]

To summarize, the backdoor is able to read and write memory from the controller and execute code, so it was designed to give the attacker the ability to write some piece of code later and execute it.

Snort Signatures

Below are Snort signatures to detect communication with the TRITON backdoor (not the infection itself).

alert udp any any -> any 1502 (msg:"TRITON backdoor execute request"; content:"|05 00 10 00 00 00 1d|"; offset:0; depth:7; content:"|00 00|"; offset:8; depth:2; content:"|10 00 f9|"; offset:12; depth:3; dsize:22; sid:1190000; rev:1; reference:url,;)

alert udp any any -> any 1502 (msg:"TRITON backdoor read request"; content:"|05 00 14 00 00 00 1d|"; offset:0; depth:7; content:"|00 00|"; offset:8; depth:2; content:"|14 00 17|"; offset:12; depth:3; dsize:26; sid:1190001; rev:1; reference:url,;)

alert udp any any -> any 1502 (msg:"TRITON backdoor write request"; byte_extract:2,2,payload_length; content:"|05 00|"; offset:0; depth:2; content:"|00 00 1d|"; offset:4; depth:3; content:"|00 00|"; offset:8; depth:2; byte_test:2,=,payload_length,12; content:"|41|"; offset:14; depth:1; sid:1190002; rev:1; reference:url,;)

Parting thoughts

This backdoor seems very straightforward. Why is it such a threat?

While this malware is simple and straightforward, it is far from generic. It is targeted specifically to exploit vulnerabilities found within the ICS environment. Moreover, while different firmware versions have different offsets for their functions, we can see above that the attack used predefined offsets for the execution flow of the handler. The facts point to an attacker who had intimate knowledge about the exact firmware in use, and potentially used other reconnaissance tools prior to the attack to gain that knowledge. (And of course we know that they previously compromised the Windows-based engineering workstation in order to get access to the controller.)

Also, in order to test the backdoor and the functionality of the injection, they must have had the exact hardware and firmware (with the right revision level) available in their physical lab environment to test it.

We’re fortunate that the attack failed because the attackers made the mistake of testing buggy code in a production environment. This caused a fair amount of disruption to the asset owner — by shutting down the production facility — but at least it did not lead to any catastrophic damage or threat to human safety.

As a whole, an operation such as TRITON required extensive preparation, reverse-engineering capabilities and access to several reconnaissance and delivery tools. We can and should expect the demonstrated skills to evolve and show themselves again. However, by making detailed knowledge of the pathways taken publicly available here, it is our hope that ICS security teams can remain one step ahead.

This post was originally published as "TRITON – A Post-Mortem Analysis of the Latest OT Attack Framework" on the CyberX blog.