Docker Your Command & Control (C2)

Package and ship your CobaltStrike & Empire Instances with Docker.

Red Team operators pride themselves on being able to outpace, and often outplay, their opponents. Within the team I work on, automation has been crucial to scaling our operations and going toe-to-toe with an experienced security operations center (SOC). I found the same to be true during military operations in my past life. But be warned: being first to market does not mean you will win the market. Speed has its own challenges, and in many cases it's better to ship a well-planned, exact product. That thinking is why I have had a significant kick to design and deploy C2 more effectively within our team, and I figured I would share some of my thoughts.


I'm going to assume you have very little, if any, Docker experience, so let's cover some basics and simple technical jargon (skip this section if you don't need it). Docker is a containerization platform that allows developers and operations teams to build and deploy software with ease on nearly any compatible platform that Docker supports. As you dig deeper, there are a few main concepts you need to know:

  • Dockerfile - The script from which Docker images are built, often in an automated fashion.
  • Docker Image - The byproduct of building a Dockerfile; imagine an Ubuntu 16.04 image.
  • Docker Container - A running instance of an image. Images themselves are made of layers, and this is where the real power of Docker comes from: imagine you wanted to build out 5 Ubuntu images with minor ENV changes in each. The first build is from scratch, but after that each new "image" is only a simple layer change, because images are merely snapshots of layered changes.
  • Docker Volume - Like a mounted drive: simply a storage container which allows persistence of data.
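
To tie these four terms together, here is a dry-run sketch. The image name, paths, and volume name are made up for illustration; the `docker_cmd` wrapper just echoes each command, so swap it for plain `docker` to actually execute:

```shell
#!/bin/sh
# Dry-run wrapper: prints each docker command instead of executing it.
docker_cmd() { echo docker "$@"; }

# Dockerfile -> Image: build the Dockerfile in the current directory
docker_cmd build -t myteam/base .

# Image -> Container: run an interactive container from that image
docker_cmd run -ti myteam/base bash

# Volume: named storage that outlives any single container
docker_cmd volume create c2-data
docker_cmd run -ti -v c2-data:/opt/data myteam/base bash
```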

Docker and C2

One of the major benefits of Docker is the container layer system; it allows multiple images, each using separate volumes and separate containers. This has a few distinct advantages:

  1. Size - Reduces the number of separate environments or images needed to deploy.
  2. Costs - Reduces the amount of VPS infrastructure needed.
  3. Resources - We rarely come close to maxing out our resources; using this method we can run 5 Empire servers on a $5 DigitalOcean instance.
  4. Centralized Control - I HATE maintaining multiple instances; with this and port-to-port Docker networking we can maintain our Long Haul, Short Term, and Backup C2 all on one server.
  5. Security - Docker prevents Container-to-Container communication by default. Docker containers are very similar to LXC containers and have similar security features (they isolate using kernel namespaces).
  6. Data integration - By using mounted Volumes, you store C2 data in one area while keeping instances separated. If a breach does occur, the attacker only gets a subset of the data.
  7. Management - One of the major problems with installing one-off tools is that support and dependencies don't always play well together; after all, it is hacking. Docker lets the host OS stay a clean base while your base image handles all the dependencies.
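
The points above can be sketched as a single deployment script. This is a dry-run illustration (the instance names and ports are my own assumptions, not from any standard layout); it echoes the docker commands rather than running them, so replace the `docker_cmd` wrapper with plain `docker` to execute:

```shell
#!/bin/sh
# Dry-run wrapper: prints the docker commands instead of executing them.
docker_cmd() { echo docker "$@"; }

# Stand up one Empire instance with its own data volume and published port.
launch_c2() {
  name="$1"; port="$2"
  # a dedicated volume container keeps each instance's data separated
  docker_cmd create -v /opt/Empire --name "${name}-data" empireproject/empire
  # publish only this instance's port to the host
  docker_cmd run -d --volumes-from "${name}-data" -p "${port}:${port}" \
    --name "$name" empireproject/empire
}

# Long Haul, Short Term, and Backup C2 all on one VPS
launch_c2 longhaul 443
launch_c2 shortterm 8080
launch_c2 backup 9443
```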

Empire Docker

Empire now officially supports Docker as a platform. This is currently in the dev branch, but the image is live on Docker Hub. A full analysis of the Dockerfile is a bit out of the scope of this blog post, but I plan on covering the automation aspect in the future. If you want to dive into the code, the Dockerfile can be found here: Dockerfile; it runs in conjunction with its supporting setup scripts.

To get started, you can pull down the latest Empire release with docker pull empireproject/empire. To run Empire, simply docker run -ti empireproject/empire. This runs Empire and lets you quickly pass command-line arguments after the docker portion of the command.

alexanders-MacBook-Pro:~ alexanderrymdeko-harvey$ docker run -ti empireproject/empire 
[*] Fresh start in docker, running for you

[*] Database setup completed!

[*] Empire reset complete returning back to Docker
[*] Loading stagers from: /opt/Empire//lib/stagers/
[*] Loading modules from: /opt/Empire//lib/modules/
[*] Loading listeners from: /opt/Empire//lib/listeners/

    ````:sdyyydy`         ```:mNNNNM
   ````-hmmdhdmm:`      ``.+hNNNNNNM

            Welcome to the Empire

You will notice that when you first start up Empire it runs setup to create the DB and the secret keys. This is a security feature and normal operation; it prevents the Empire team from shipping the same keys in every Docker image pull.

One aspect of Docker power usage is --entrypoint; this allows you to override the ENTRYPOINT that's built into the image. Since the image's typical operation is to run Empire, you will need this to drop into the container instead. To drop into the container, replace the ENTRYPOINT:

alexanders-MacBook-Pro:~ alexanderrymdeko-harvey$ docker run -ti --entrypoint bash empireproject/empire 
root@<container-id>:/opt/Empire# ls
Dockerfile  LICENSE  VERSION  changelog  data  empire  lib  plugins  setup
root@<container-id>:/opt/Empire# 

In a production environment this won't fly, and a major downfall is storage: every time you run and restart that image you will be reverted to the base unless you commit your changes. To maintain storage persistence we can use Docker Volumes, which can be mounted to specific locations within the image. In this case, we only have to mount the Empire data directory to maintain our DB and certs: docker create -v /opt/Empire --name data empireproject/empire. We can then attach this volume to a new container using the --volumes-from flag: docker run -ti --volumes-from data empireproject/empire. All together this comes out to look like:

alexanders-MacBook-Pro:~ alexanderrymdeko-harvey$ docker create -v /opt/Empire --name data empireproject/empire
alexanders-MacBook-Pro:~ alexanderrymdeko-harvey$ docker volume ls
local               cbb254a5d09b2c0ee828509a67dab0697bdbe5f901a71aa24a565433d6f4a854
alexanders-MacBook-Pro:~ alexanderrymdeko-harvey$ docker run -ti --volumes-from data empireproject/empire
[*] Loading stagers from: /opt/Empire//lib/stagers/
[*] Loading modules from: /opt/Empire//lib/modules/
[*] Loading listeners from: /opt/Empire//lib/listeners/

Finally, we will need to expose our Docker container to the host networking stack. We can do this a few different ways, but I found an efficient method is the publish flag (-p), which lets you "Publish a container's port(s) to the host." It follows the schema <HOST-IP>:<HOST-PORT-TO-EXPOSE>:<GUEST-PORT-TO-EXPOSE>. This will allow you to bind and proxy traffic directly to your Docker container, even if you use different external ports!
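
For example, assuming a hypothetical VPS interface of 172.16.20.10 and an external port of 443 proxied to Empire's listener on port 80 inside the container, the mapping breaks down like this (the echo prefix makes it a dry run; drop it to execute):

```shell
#!/bin/sh
# Illustrative publish mapping -- the IP and ports here are assumptions.
HOST_IP="172.16.20.10"   # external interface agents will reach
HOST_PORT="443"          # port exposed on the VPS
GUEST_PORT="80"          # port the Empire listener binds inside the container
PUBLISH="${HOST_IP}:${HOST_PORT}:${GUEST_PORT}"

# drop the echo prefix to actually run Empire with this mapping
echo docker run -ti --volumes-from data -p "$PUBLISH" empireproject/empire
```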

alexanders-MacBook-Pro:~ alexanderrymdeko-harvey$ docker run -ti --volumes-from data -p empireproject/empire


[Empire] Post-Exploitation Framework

[Version] 2.3 | [Web]

    <EMPIRE ASCII art banner>

   282 modules currently loaded

   0 listeners currently active

   0 agents currently active

(Empire) > listeners
[!] No listeners currently active
(Empire: listeners) > uselistener http
(Empire: listeners/http) > info

Name: HTTP[S]

Category: client_server


Starts a http[s] listener (PowerShell or Python) that uses a
GET/POST approach.

HTTP[S] Options:

Name              Required    Value                              Description
----              --------    -----                              -----------
SlackToken        False                                          Your SlackBot API token to communicate with your Slack instance.
ProxyCreds        False       default                            Proxy credentials ([domain]username:password) to use for request (default, none, or other).
KillDate          False                                          Date for the listener to exit (MM/dd/yyyy).
Name              True        http                               Name for the listener.
Launcher          True        powershell -noP -sta -w 1 -enc     Launcher string.
DefaultDelay      True        5                                  Agent delay/reach back interval (in seconds).
DefaultLostLimit  True        60                                 Number of missed checkins before exiting
WorkingHours      False                                          Hours for the agent to operate (09:00-17:00).
SlackChannel      False       #general                           The Slack channel or DM that notifications will be sent to.
DefaultProfile    True        /admin/get.php,/news.php,/login/   Default communication profile for the agent.
                              process.php|Mozilla/5.0 (Windows
                              NT 6.1; WOW64; Trident/7.0;
                              rv:11.0) like Gecko
Host              True                                           Hostname/IP for staging.
CertPath          False                                          Certificate path for https listeners.
DefaultJitter     True        0.0                                Jitter in agent reachback interval (0.0-1.0).
Proxy             False       default                            Proxy to use for request (default, none, or other).
UserAgent         False       default                            User-agent string to use for the staging request (default, none, or other).
StagingKey        True        G:IfjvH;Z#J|]FSs9XU~},D{[)8yuR2n   Staging key for initial agent negotiation.
BindIP            True                                           The IP to bind to on the control server.
Port              True        80                                 Port for the listener.
ServerVersion     True        Microsoft-IIS/7.5                  Server header for the control server.
StagerURI         False                                          URI for the stager. Must use /download/. Example: /download/stager.php

(Empire: listeners/http) > execute
[*] Starting listener 'http'
[+] Listener successfully started!
(Empire: listeners/http) >

That's pretty much everything you need to know to get up and running. There is a bit more advanced Docker functionality we will cover within the CobaltStrike section.

CobaltStrike Docker

It's no secret that Red Teams use CS (CobaltStrike) as their main agent/implant, so I decided to make a purely Dockerfile-based build. I'm not going to host CS on Docker Hub, as the licensing/trial schema has changed recently, but thankfully it's very easy to build a Docker image locally. I have posted the Dockerfile here: Dockerfile. It does a few key things, but the most critical aspect is the ability to pass the CS license key to the Dockerfile at build time.
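
The real Dockerfile is linked above; as a rough sketch of just the license-key mechanism, the build-time ARG looks something like this (fetch-cs.sh, the paths, and the base image here are hypothetical stand-ins, not the actual implementation):

```dockerfile
# Sketch only -- not the published Dockerfile.
FROM ubuntu:16.04

# Supplied at build time:
#   docker build --build-arg cskey="xxxx-xxxx-xxxx-xxxx" -t cobaltstrike/cs .
ARG cskey

# Hypothetical helper that uses the key to fetch the licensed tarball.
COPY fetch-cs.sh /tmp/fetch-cs.sh
RUN sh /tmp/fetch-cs.sh "$cskey" \
 && tar -xzf /tmp/cobaltstrike.tgz -C /opt

WORKDIR /opt/cobaltstrike
# teamserver as ENTRYPOINT, so arguments after the image name reach it directly
ENTRYPOINT ["./teamserver"]
```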

The first step is to clone the Dockerfile.

git clone

Next, we go ahead and build the image. Replace cskey with the license key of choice; this will allow the download of the proper tarball. Now sit back and relax :)

docker build --build-arg cskey="xxxx-xxxx-xxxx-xxxx" -t cobaltstrike/cs .

As before, I placed an ENTRYPOINT of teamserver, so you can pass the required options after the required docker string:

docker run -d -p --name "war_games" cobaltstrike/cs password

We added two new concepts this round:

  1. -d - Daemon mode; starts the container detached (headless).
  2. --name - Lets you name the container for further use once started.

Once the container is started, we can monitor the CS instance by tailing the logs: docker logs -f "war_games"

alexanders-MacBook-Pro:Dockerfiles alexanderrymdeko-harvey$ docker logs -f "war_games"
[*] Generating X509 certificate and keystore (for SSL)
[+] Team server is up on 50050
[*] SHA256 hash of SSL cert is: 2013748909fd61ff687711688e5dc4306d0fb1c3afa8ece4f30630c31ba1557c

To gain access to the instance, you can use exec as such:

alexanders-MacBook-Pro:Dockerfiles alexanderrymdeko-harvey$ docker exec -ti war_games bash
root@<container-id>:/opt/cobaltstrike# 

Finally, to stop the container you simply kill it:

docker kill war_games

CS Volume

Don't forget to create a volume container for the CS instance! Otherwise, you'll lose all your data on restart.

docker create -v /opt/cobaltstrike --name cs-data cobaltstrike/cs

Then start your instance with the volume:

docker run -d --volumes-from cs-data -p --name "war_games" cobaltstrike/cs password
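
Putting the volume and run steps together, a small launch helper might look like the following dry run (the 50050:50050 teamserver port mapping is my assumption; swap the `docker_cmd` wrapper for plain `docker` to execute):

```shell
#!/bin/sh
# Dry-run wrapper: prints the docker commands instead of executing them.
docker_cmd() { echo docker "$@"; }

start_cs() {
  pass="$1"
  # persistent volume container for the CS data directory
  docker_cmd create -v /opt/cobaltstrike --name cs-data cobaltstrike/cs
  # publish the teamserver port (assumed 50050) and pass the password through
  docker_cmd run -d --volumes-from cs-data -p 50050:50050 \
    --name "war_games" cobaltstrike/cs "$pass"
  # tail the teamserver output
  docker_cmd logs -f "war_games"
}

start_cs password
```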
