Getting Critical Answers

 

LimaCharlie is a platform designed for building security solutions. Endpoint detection and response (EDR) capability is the cornerstone of the platform, which also provides access to a plethora of raw telemetry. The EDR capability is powerful, but the bigger prize is the “critical answers” we can gain by making use of the telemetry.

These critical answers are the specific pieces of information that allow you to make important security decisions, and you need the ability to get to these answers in the most straightforward way possible.

LimaCharlie Insight provides you with one year of built-in retention and search capability. The retention allows you to get answers like “what EXACTLY happened on this host 7 months ago at 3AM” in a few seconds.

The indicator search (domain name, hash, file name, IP address, etc.) gives you a succinct set of answers.

Is this indicator common?

Imagine that you are investigating a possible intrusion when you spot a suspicious-looking executable that you don’t recall ever seeing before. Is it a case of bad memory, or is this part of the malware dropped by the attacker?

You can start to paint a picture by examining how prevalent the given indicator is. Using the web interface (or API) you can ask this question and get an answer immediately.

[Screenshot: indicator prevalence in the web interface]

This tells you how many hosts have seen this file today, this week and this month. An indicator seen for the first time today, with no sightings over the past month, is highly suspicious.

We make getting these three numbers easy. The data can be pulled either through a search or while visualizing activity from a host.

Where has this indicator been seen?

Now imagine that you have been made aware of a specific indicator related to malicious activity. This can come about in many ways: a public report detailing a new threat actor, a law enforcement tip-off, a MISP feed, or an internal security investigation.

Given this information, you would need to scope the possible threat right away: “where has this been seen?”

[Screenshot: hosts that contacted the domain over the past year]

Through the user interface you can see the list of all hosts that have made a request to this specific domain name over the last year, see the first and last time they did so, and get shortcut links to the fully detailed exploration view of the specific activity. This is powerful.

More Critical Answers?

Do you have other ideas of critical answers you would like to see? Let us know; LimaCharlie is quickly becoming a core part of the security toolset, and that is thanks to your feedback!

 

LimaCharlie Like a Pro

 

The following are some best practices for using LimaCharlie. These will help you get started on the right foot and make your life easier.

If you're not familiar with the LimaCharlie Command Line Interface, a short introduction is available here.

Setting up the environment

The first thing you need to do is install the LimaCharlie CLI:

pip install limacharlie

Now create an API key with the following privileges:

dr.del, dr.list, dr.set, ikey.del, ikey.list, ikey.set, org.get, output.del, output.list, output.set, sensor.get, sensor.list, sensor.tag
[Screenshot: API key privileges]

These privileges allow you to manage your organization but not interact with sensors or query historical data (those privileges are not needed for this example).

From your terminal, login to LimaCharlie:

python -m limacharlie login
# When prompted enter your Organization ID and API Key.

Everything should be ready. You can test it by fetching your configurations:

python -m limacharlie.Sync fetch

This will write an LCConf file in your current directory. If your organization is already configured, this file will contain all your Outputs and D&R rules.

Managing configurations

The LCConf file we got from setting up our environment is important. It allows you to keep all your configurations as config files, an approach known as Infrastructure as Code. These files are best kept under revision control. The advantage of managing your LimaCharlie deployment using these files is that it removes a lot of the human factor (who hasn't forgotten to check a specific checkbox somewhere?). It will save you time and headaches, and enable you to build a robust infrastructure.
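Once you have edited your local configuration, the Sync module can push it back to your organization. The sketch below assumes the push action and its flags behave as described in the CLI's own help output; the exact flag names may differ between versions, so confirm them with python -m limacharlie.Sync --help:

# Preview the changes that would be applied (assuming a dry-run style flag is available).
python -m limacharlie.Sync push --dry-run
# Apply the local LCConf to the organization.
python -m limacharlie.Sync push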

Now, our initial fetch produced a single file, but it's unlikely you will want to keep it that way. It's much easier to maintain your configurations as multiple files where each file takes care of a specific concern.

For example, you might keep a copy of all auditing messages LimaCharlie produces somewhere for compliance. If you do, and you have multiple LimaCharlie organizations you manage, it will be easier to keep this auditing Output in its own file and to re-use it for all organizations.

This is where the include: some-config-file.yaml statement comes in. It allows you to have a top-level config file, let's call it "customer-A.yaml", which includes the more generic components like the "auditing-output.yaml" mentioned above:

include: auditing-output.yaml
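A minimal sketch of what such a top-level file could look like, assuming the fetched LCConf uses rules and outputs sections (match the section names to whatever your own fetched file actually contains):

# customer-A.yaml
# Shared, generic components:
include: auditing-output.yaml

# Organization-specific content can live alongside the include,
# using the same sections found in a fetched LCConf:
rules:
  isolate-network:
    detect:
      # ... detection logic ...
    respond:
      # ... response actions ...
outputs:
  customer-a-siem:
    # ... customer-specific output configuration ...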

For simplicity, we will assume you're using one configuration file and will leave it to you to split them according to your needs.

You can generate a set of configuration files that is a good general boilerplate setup for your organization. They will already contain some of the recommended setup described below:

# Create a directory for your configurations.
mkdir myorg
# Generate the default configs.
python -m limacharlie init ./myorg

The fun part

Now for the fun part: let's set up some functionality.

Tags are simple, yet powerful. They give you a uniform mechanism to apply and remove behavior, with the added advantage that tags are always displayed in the LimaCharlie data (so you always know the context around a host from its events).

Host isolation

Host isolation is extremely powerful, but it can be difficult to keep track of which hosts are isolated and why. What we can do here is set up a tag, named "isolated", that we apply to a sensor in order to isolate it. When the tag is removed, so is the isolation.

This approach makes it easy to see which hosts are isolated.

To do this, we will set up two Detection & Response rules (our Swiss Army knife).

isolate-network:

# Detection
# =========================
op: and
rules:
  - op: is tagged
    tag: isolated
    event: CONNECTED
  - op: is
    path: event/IS_SEGREGATED
    value: 0

# Response
# =========================
- action: task
  command: segregate_network

This rule says: if a CONNECTED event comes in from a sensor that is tagged with the "isolated" tag, and the "event/IS_SEGREGATED" value is false (0), it means someone wants the sensor to be isolated (the tag) but the sensor is not currently isolated (the value in the CONNECTED event). So the action to take is to send the "segregate_network" command.

Now we will want another rule to do the inverse:

rejoin-network:

# Detection
# =========================
op: and
rules:
  - op: is tagged
    tag: isolated
    not: true
    event: CONNECTED
  - op: is
    path: event/IS_SEGREGATED
    value: 1

# Response
# =========================
- action: task
  command: rejoin_network

This says: if a sensor comes in indicating it is isolated, but it is NOT tagged with "isolated", make it rejoin the network.

From this point on, you can control host isolation entirely through the use of the "isolated" tag. These rules only fire when a sensor connects, so you might also want to fire a "segregate_network" or "rejoin_network" command at the same time as tagging and untagging if you want the change to occur immediately.

File integrity monitoring

FIM tends to be platform-specific: the monitored files and registry keys on Windows are not the same as on macOS. So we'll use two rules to set up the various monitored files and directories. You can expand the method described here to be more granular; an example of higher granularity would be monitoring specific files on Windows Domain Controllers by using tags associated with those hosts (see the sketch after the two rules below).

windows-fim:

# Detection
# =========================
op: and
rules:
  - op: is windows
    event: CONNECTED

# Response
# =========================
- action: task
  command:
    - fim_add
    - --pattern
    - "C:\\\\*\\\\Programs\\\\Startup\\\\*"
    - --pattern
    - "\\\\REGISTRY\\\\*\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Run*"

mac-fim:

# Detection
# =========================
op: and
rules:
  - op: is mac
    event: CONNECTED

# Response
# =========================
- action: task
  command:
    - fim_add
    - --pattern
    - /Users/*/.ssh/authorized_keys
    - --pattern
    - /Users/*/Library/Services/*
    - --pattern
    - /System/Library/Services/*
    - --pattern
    - /System/Library/Extensions/*
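
As a sketch of the more granular approach mentioned above, a third rule could add extra patterns only on hosts carrying a "domain-controller" tag. The tag name and the monitored path below are purely illustrative; the operators and the fim_add command mirror the rules above:

dc-fim:

# Detection
# =========================
op: and
rules:
  - op: is windows
    event: CONNECTED
  - op: is tagged
    tag: domain-controller

# Response
# =========================
- action: task
  command:
    - fim_add
    - --pattern
    - "C:\\\\Windows\\\\NTDS\\\\*"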

High performance

LimaCharlie is generally extremely performant, but there are some edge cases where performance suffers. For these rare situations, encountered when deployed on high-IO database servers, there is the high performance mode.

To simplify the management of applying this mode to the right sensors, we will use a "high-perf" tag:

# Detection
# =========================
op: and
rules:
  - op: is tagged
    tag: high-perf
    event: CONNECTED

# Response
# =========================
- action: task
  command: set_performance_mode --is-enabled

Final thoughts

There is obviously a lot more that can go into your base configurations. I'd like to leave you with some possible ways you could expand your tagging rules to gain better situational awareness of your network.

Tagging by department

You can create a rule that tags sensors based on seeing a USER_OBSERVED event and doing a lookup against a list of users exported from Active Directory and uploaded to LimaCharlie as a resource. For example, this could allow you to know at a glance that a specific asset belongs to the Finance department.
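A rough sketch of what such a rule could look like, assuming a USER_OBSERVED event exposing event/USER_NAME, a lookup resource named finance-users containing the exported user list, and an add tag response action (all three names are illustrative; adjust them to your environment and check them against the platform documentation):

# Detection
# =========================
op: lookup
event: USER_OBSERVED
path: event/USER_NAME
# The list of Finance users previously uploaded as a resource.
resource: 'lcr://lookup/finance-users'

# Response
# =========================
- action: add tag
  tag: finance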

Tagging by role

Most assets in the company can easily be assigned one or multiple roles based on the processes observed on them. For example, a host running "devenv.exe" (Microsoft Visual Studio) is likely a developer workstation, while one running "nginx" is likely a web server.

Creating tagging rules based on seeing these processes can be a quick and easy way to further enhance your awareness of your network.
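For example, here is a sketch of a rule tagging likely developer workstations when Visual Studio is seen starting. The ends with operator and the add tag action are assumptions to be confirmed against the platform documentation; the tag name is up to you:

# Detection
# =========================
op: and
rules:
  - op: is windows
    event: NEW_PROCESS
  - op: ends with
    path: event/FILE_PATH
    value: \devenv.exe
    case sensitive: false

# Response
# =========================
- action: add tag
  tag: developer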

Tagging for geo-location

It can be useful for a security analyst to see that a specific host has recently traveled overseas. Using the GeoLocation API, you can create a rule that, for example, adds an "EU" or "USA" tag when the asset connects from those locations.

 

Development Update

 

The team at LimaCharlie has been busy building out our web application, leveraging the capabilities of the publicly available API. There have been so many improvements in such a short period of time that we felt it deserved a blog post.

The two main areas we have been working on are:

  1. Live View: a console for interacting with agents in real-time

  2. Insight: an interface which allows users to search and interact with up to a year's worth of stored endpoint telemetry.

Live View

The live view of the LimaCharlie web application allows privileged users to interact with the endpoints in real-time. We achieve true real-time connectivity through the use of a semi-persistent TLS connection.

Through the live view console you gain the following capabilities:

1. Get general information about the endpoint such as hostname, platform, relevant IP addresses and last connection time, and review tags that have been applied to the endpoint. You can also add and remove tags easily, directly from this view.

[Screenshot: endpoint overview]

2. Adjust the telemetry that is sent to the output stream. Choose from amongst 70 different data points which can be monitored on any given endpoint.

[Screenshot: event collection settings]

3. An interactive console from which users can send up to 32 different commands to the endpoint. From this console you can gather data, kill processes, isolate the host from the network and much more. Details on the commands that can be issued can be found here.

[Screenshot: Endpoint Command]

4. Monitor a real-time stream of events being produced by the endpoint. All telemetry being sent to the output stream can be monitored in the browser as it happens.

[Screenshot: real-time event stream]

5. List all of the processes currently running on the given endpoint. From here a privileged user can view process modules, inspect memory strings or maps, and kill any process at the click of a button. From this view you are also able to check a hash against VirusTotal’s publicly available API to look for known malware.

[Screenshot: Endpoint File System]

6. Navigate the file system on the endpoint. From here you can go through directories, hash and download files with one click.

[Screenshot: File System Browser]

Historical Insight

A couple of weeks ago we announced the introduction of long-term telemetry storage with search capability. LimaCharlie still operates as elastic middleware, but we are now able to offer one year of storage and search capability at the low cost of $0.50 per agent per month. This move allows MSSPs and SOCs that do not already have their own EDR infrastructure to gain a completely functional information security centre upon signing up (did we mention it is self-serve and there are no contracts?).

Once enabled, LimaCharlie Insight will automatically send all telemetry data to secure storage on the Google Cloud Platform.

The Insight user interface allows you to select a date and time from which to start your investigation. From this starting point new data is loaded into the browser via an infinite scroll mechanism. A histogram displays the time period for which data is available in cold storage alongside what has been loaded in the browser.

[Screenshot: Historical Data View]

The interface itself provides a simple text filter to limit results based on strings in data fields. This view also provides a cascading text filter and simple query language so that you can create complex filters based on event type.

That wraps up our development update. We are going to continue to build the tools that we want as information security professionals and deliver them in a way that is fair and transparent. If you want to stay up to date with our progress you can follow us on Twitter or LinkedIn.

 

Introducing Historical Insight: Storage and Investigative Tools

 

We continue on our journey making endpoint capability more accessible. Along with the powerful elastic detection and response engine, LimaCharlie now offers low-cost, long-term data storage and tools for investigation.

Storage and historical insight can be enabled at the click of a button. The cost for a year of storage is a simple $0.50 per sensor per month.

Many of our clients will still continue to use the LimaCharlie endpoint and detection capability programmatically with their own storage solutions, but for many of the MSSPs we have spoken with, an easy storage and investigation tool makes a lot of sense.

Insight: EDR Telemetry Storage and Search

The web interface for the historical insight tool allows the user to pick a time that they wish to investigate and loads all events around it. Events are presented as line items in the lower portion of the UI and can be navigated via an infinite-scroll mechanism. Clicking on a line item will load a graph of the process that generated the event and any children it produced. Right-clicking on the root of this graph will present an option to navigate up the graph and load the parent, should any exist.

We are very proud of the technological progress we have made and feel extremely grateful for the tight feedback cycles we have established with our customers. It is from what we have learned through these relationships that we set this next course in the evolution of LimaCharlie.

 

Live Endpoint Visibility and Interaction

 

It is now possible to interact with an organization’s endpoints in real-time by utilizing the LimaCharlie live-view interface. From the list of endpoints accessible through the web application, you can open the live view for any agent reporting as online.

Through the live-view you can accomplish the following:

  • Get general information about the sensor.

  • Apply and remove tags.

  • Select which events get sent to the cloud. There are a total of 52 events to choose from. Documentation can be found here.

  • Send commands directly to the sensor. This includes isolating it from the network while maintaining a command-and-control connection.

  • View a live-stream of events as they are taking place on the endpoint.

  • Retrieve a list of processes, drill down into the details and check file hashes against VirusTotal’s public API.

[Screenshot: live view]

To stay up to date with feature development please be sure to follow us on Twitter and/or LinkedIn.

 

The Insider - Case Study

 

An MSSP running on LimaCharlie.io recently had an incident which they shared with us, and in turn we are sharing it with you.

It is a great example of the importance of having endpoint data and how a good EDR can shorten the Incident Response process by orders of magnitude compared to unidirectional logging (like syslog or Windows Event Logs).

Infrastructure

Three security products were deployed on the customer's network: an anti-virus, a SIEM and LimaCharlie. LimaCharlie was configured with all events going to an AWS S3 bucket for long-term retention and to a Splunk host via SFTP for daily operations.

The customer's network consisted of an Intranet which included workstations, a DNS server and a Wordpress server. The Intranet / Internet boundary had a firewall and was NATing internal IPs.

Workstations ran Windows (7+), macOS and Debian Linux, and all of them (including the Wordpress server) were running LimaCharlie.

An external server maintained by a third party could be accessed via SFTP, and assets it hosted were used by the internal Wordpress server.

All workstations on the Intranet had a page from the Wordpress server as their default web page (it was an internal company portal).

[Diagram: customer network under normal conditions]

Initial Event

The MSSP initially began an investigation because their customer received complaints from workers that the performance of their workstations had changed abruptly and they had started running slowly. No new deployments had occurred, and no alerts were raised by the anti-virus product or the SIEM.

Only hosts on the Intranet seemed affected; the fleet of laptops roaming on other networks (coffee shops, etc.) had no performance issues.

Investigation

MSSP analysts began looking at events from a few of the affected machines before and after the performance issue was reported:

Splunk Query: spath "routing.event_type" | search "routing.event_type"=DNS_REQUEST

Results:

event: {
    DNS_TYPE: 1
    DOMAIN_NAME: <hidden>
    IP_ADDRESS: <hidden>
    MESSAGE_ID: 4488
    PROCESS_ID: 0
}
routing: {
    arch: 2
    event_id: 8f66b3e3-30ea-4d41-a950-32ad27f31afe
    event_time: 1537776210013
    event_type: DNS_REQUEST
    ext_ip: <hidden>
    hostname: WORKSTATION-6
    iid: 88dd2804-6c20-4adb-990c-10474ffb3e02
    int_ip: <hidden>
    moduleid: 2
    oid: <hidden>
    parent: 3fd02810c64aaae2dba8db675a53185a
    plat: 268435456
    sid: <hidden>
    tags: [ ... ]
    this: 392d199861b8ddb0a9ffe892866fa821
}

What they found was that the affected hosts began a pattern of DNS requests to the external SFTP server, as well as to several domains that triggered an alert in LimaCharlie via the coinblocker feed.

The DNS activity was followed by connections from the browser.

Looking in detail at events around the time of the connection using LimaCharlie's visualization tool, Digger, they found the browser was creating a .js file in the browser cache.

The following Splunk query confirmed that all affected workstations also had the same .js file:

Splunk Query: *| spath "routing.event_type" | search "routing.event_type"=NEW_DOCUMENT AND malicious.js

Thankfully, the files had been cached in memory by LimaCharlie. Using the doc_cache_get command, the analysts were able to retrieve a cached copy of the JavaScript file.

As expected (because of the concurrent hits from coinblocker), the JavaScript turned out to be a cryptocurrency miner.

Interestingly, the LimaCharlie activity did not show any other connectivity to the external SFTP server prior to the fetching of the malicious .js file.

What caused the .js to be fetched? Why was the entire internal network affected at once?

Further analysis of the events showed that the malicious .js was always fetched right after a connection to the internal Wordpress server. This indicated that the Wordpress server was likely being used in a watering-hole style attack.

Suspecting that parts of the Wordpress site may have been modified, the MSSP analysts began looking for the malicious .js file name in the PHP code by retrieving the files with LimaCharlie's file_get command. They quickly found that a <script> tag pointing to a file hosted on the external SFTP server had been added to the header of the main portal page. The remote URL even contained a malicious.js?id=<?php echo time(); ?> URL parameter, seemingly to make the accessed URL appear more random and legitimate-looking.

This completed the picture of what happened. The attacker hosted a malicious cryptocurrency miner on the external SFTP site, then gained access to the Intranet Wordpress server and injected the remote inclusion of the malicious JavaScript, which looks like this:

[Screenshot: portal page header with the injected script tag]

Who?

The next logical question any Incident Responder has is "how did they get in?".

This is where an interesting twist gets introduced into the plot. The Wordpress server had LimaCharlie installed, which means a wide variety of events were collected from around the time the malicious injection occurred. Initially expecting to see indications of a Wordpress exploit, the MSSP analysts instead saw a simple command: vim ./header.php. The originator of this process was a user authenticated over SSH.

There was no indication of exploitation or leaked credentials of any kind. Since LimaCharlie was installed on the workstations, the access could be corroborated within seconds from the LimaCharlie logs of that user's workstation at the time of infection.

As it turns out, an employee had decided to make some money on the side using the company's assets.

From there, the customer's internal security and HR departments took the lead...

Lessons

The LimaCharlie EDR helped secure a successful outcome for this incident at several critical points in the investigation.

Manual inspection of the impacted hosts would certainly have been possible without LimaCharlie. But manually inspecting even a few hosts would have required administrators or security personnel to physically go on location, which is expensive and incredibly inefficient.

Retrieving the malicious JavaScript file would have been a potentially complex endeavor. Determining which file is the correct one without seeing the historical timing of the DNS and connection events would have been difficult. Even once the file was identified, its retrieval is usually complicated by the constant turnover of files on disk. The LimaCharlie file caching dramatically streamlined this process by allowing the analysts to get a copy of the file remotely, even after deletion.

Finally, being able to correlate the activity logs of multiple hosts (their processes, command lines and network connections) to paint a full picture is extremely valuable in forming a complete understanding of the attack.

Of course, from our point of view, it doesn't make sense to forgo all of these capabilities when an affordable package like LimaCharlie exists and onboarding literally takes minutes.

 

Detection & Response Wizard

 

One of our primary goals when we started LimaCharlie was to make endpoint capability accessible to as many people as we could. We just took another small step towards that goal by creating a wizard that lets you create simple detection & response rules with just a few clicks.

 
[Screenshot: detection & response wizard]
 

The detection & response wizard allows less technical users, or users new to the platform, to create a wide variety of detection and response rules that can be applied to all endpoints, or a subset thereof, at the click of a button.

The wizard enables you to create rules around the following indicators of compromise (IOCs):

  • Domain name

  • IP address

  • Hashes (SHA256)

  • Executable path suffix (or simply executable name)

The IOCs can then be targeted at specific operating systems: your detection can be enabled to run on any combination of Windows, Mac and Linux.

Finally, once your detection rule has been created, you can select what you want to do for a response. There are three responses to choose from:

  • Kill the process that triggered the detection

  • Isolate the host (you are still able to communicate with the agent but the machine is unable to communicate across the network)

  • Send a report of the incident through the output channels

 
 

The GUI builder for rules does not stop there. Once you have created your detection and response you can switch over to the Advanced tab and edit the YAML directly. By editing the YAML directly you can make complex additions or chain multiple detection and response sequences together.
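For example, a rule edited in the Advanced tab could chain two of the responses listed above, reporting the detection and immediately isolating the host. The sketch below reuses the operators and actions shown in our other posts; the domain name is a placeholder:

# Detection
# =========================
op: is
event: DNS_REQUEST
path: event/DOMAIN_NAME
value: evil-domain.example
case sensitive: false

# Response
# =========================
# Send a report through the output channels...
- action: report
  name: known-bad-domain
# ...and isolate the host from the network.
- action: task
  command: segregate_network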

We think this new detection & response wizard is great and we hope you do as well. We are going to continue to work at making endpoint capability more accessible and welcome any feedback.

 

Detection & Response Case Study: AppleJeus

 

This document is based on the analysis from our friends, the GReAT team at Kaspersky Lab, here: https://securelist.com/operation-applejeus/87553/

We will step you through a sample approach to reading a report like this and turning an interesting read into automated detection using LimaCharlie.io.

The IOCs

The first thing we always try to do is optimize our ROI. In this case, using the basic IoCs provided by the authors takes no time, so let's do that. The information we are looking for is usually listed at the very end of the report.

File path

  • C:\Recovery\msn.exe
  • C:\Recovery\msndll.log
  • C:\Windows\msn.exe
  • C:\WINDOWS\system32\uploadmgrsvc.dll
  • C:\WINDOWS\system32\uploadmgr.dat

Domains and IPs

  • www.celasllc[.]com/checkupdate.php
  • 196.38.48.121
  • 185.142.236.226
  • 80.82.64.91
  • 185.142.239.173

These look fairly unique. It is important to at least give them a sanity check, as some less detail-oriented reports list legitimate and common files and domains as IoCs.

Let's move ahead and turn those into simple D&R rules.

To do this, we break down the files and map them into observables. For example, a .exe is likely observable as a NEW_PROCESS, while a .dll is likely going to be observed by default as a CODE_IDENTITY.

The other files however are not likely going to be reported by default since they are neither executable code (like .exe or .dll) nor documents like a .pdf (which would be reported as a NEW_DOCUMENT). We could use them as part of an active spot-check on our organization, but we'll keep that in our back pocket for later.

The IPs should be observable in NEW_TCP4_CONNECTION and NETWORK_SUMMARY events, and the domain in DNS_REQUEST events.

D&R Rules Theory

Creating D&R rules allows us to detect and report these observables in real-time as they occur on the agents. This has the advantage of accelerating detection and mitigation, but the disadvantage of not being able to see things that are not producing events, like a process that executed in the past.

There are three main elements to simple rules:

  1. op: is windows will tell the rule to only apply to Windows systems.

  2. events: XXXX will tell the rule it should only apply to certain event types.

  3. path: XXXX, value: XXXX, op: XXXX will describe a simple comparison between the element at path in the event and the specific value, using the operator op. For example, if op is is, the value is compared for equality.


To understand the path listed in the rules, compare them to the JSON event in question. The path is simply the name of each level, starting at the root, that must be followed to the appropriate value. The special character * means zero-or-any-number-of-levels while the ? means one-level-of-any-name.

For example given this event (shortened for demonstration purposes):

{
  "routing": {
    "hostname": "MacBook-Pro.local",
    "event_type": "NEW_PROCESS",
    "event_id": "26bbee42-ac21-475b-b652-ad603f6aa7ad",
    "oid": "c82e5d18-d519-4ef5-a4ac-c454a95d31ca",
    "int_ip": "192.168.1.72",
    "ext_ip": "186.147.235.76",
    "sid": "09530d35-2df9-4fd2-845e-80a0c06efaa3",
    "event_time": 1536260514146
  },
  "event": {
    "USER_ID": 501,
    "PARENT": {
      "USER_ID": 0,
      "COMMAND_LINE": "/sbin/launchd",
      "PROCESS_ID": 1,
      "USER_NAME": "root",
      "FILE_PATH": "/sbin/launchd",
      "PARENT_PROCESS_ID": 0
    },
    "COMMAND_LINE": "/System/Library/Frameworks/QuickLook.framework/Resources/quicklookd.app/Contents/MacOS/quicklookd",
    "PROCESS_ID": 30113,
    "USER_NAME": "some_user",
    "FILE_PATH": "/System/Library/Frameworks/QuickLook.framework/Versions/A/Resources/quicklookd.app/Contents/MacOS/quicklookd",
    "PARENT_PROCESS_ID": 1
  }
}

The path event/?/USER_NAME would result in root.

Resulting D&R Rules for the IOCs

NEW_PROCESS

# If both sub-expression 1 AND 2 match.
op: and
rules:
  # Sub-expression 1
  # ==================
  # The event is coming from a Windows host and is either a
  # NEW_PROCESS or EXISTING_PROCESS.
  - op: is windows
    events:
      - NEW_PROCESS
      - EXISTING_PROCESS
  # Sub-expression 2
  # ==================
  # If either sub-expression 2.1 OR 2.2 match.
  - op: or
    rules:
      # Sub-expression 2.1
      # ==================
      # The file path is ***, case insensitive
      - op: is
        path: event/FILE_PATH
        value: C:\Recovery\msn.exe
        case sensitive: false
      # Sub-expression 2.2
      # ==================
      # The file path is ***, case insensitive
      - op: is
        path: event/FILE_PATH
        value: C:\Windows\msn.exe
        case sensitive: false

CODE_IDENTITY

# If both sub-expression 1 AND 2 match.
op: and
rules:
  # Sub-expression 1
  # ==================
  # The event is coming from a Windows host and is a CODE_IDENTITY.
  - op: is windows
    event: CODE_IDENTITY
  # Sub-expression 2
  # ==================
  # If either sub-expression 2.1 OR 2.2 OR 2.3 match.
  - op: or
    rules:
      # Sub-expression 2.1
      # ==================
      # The file path is ***, case insensitive
      - op: is
        path: event/FILE_PATH
        value: C:\Recovery\msn.exe
        case sensitive: false
      # Sub-expression 2.2
      # ==================
      # The file path is ***, case insensitive
      - op: is
        path: event/FILE_PATH
        value: C:\Windows\msn.exe
        case sensitive: false
      # Sub-expression 2.3
      # ==================
      # The file path is ***, case insensitive
      - op: is
        path: event/FILE_PATH
        value: C:\WINDOWS\system32\uploadmgrsvc.dll
        case sensitive: false

DNS_REQUEST

# If both sub-expression 1 AND 2 match.
op: and
rules:
  # Sub-expression 1
  # ==================
  # The event is coming from a Windows host and is a DNS_REQUEST.
  - op: is windows
    event: DNS_REQUEST
  # Sub-expression 2
  # ==================
  # The domain name is ***.
  - op: is
    path: event/DOMAIN_NAME
    value: www.celasllc.com
    case sensitive: false

NEW_TCP4_CONNECTION and NETWORK_SUMMARY

# If both sub-expression 1 AND 2 match.
op: and
rules:
  # Sub-expression 1
  # ==================
  # The event is coming from a Windows host and is either a
  # NEW_TCP4_CONNECTION or a NETWORK_SUMMARY.
  - op: is windows
    events:
      - NEW_TCP4_CONNECTION
      - NETWORK_SUMMARY
  # Sub-expression 2
  # ==================
  # If either sub-expression 2.1 OR 2.2 match.
  - op: or
    rules:
      # Sub-expression 2.1
      # ==================
      # The source or destination (NEW_TCP4_CONNECTION) is one of
      # those IPs.
      - op: matches
        path: event/?/IP_ADDRESS
        re: (196\.38\.48\.121|185\.142\.236\.226|80\.82\.64\.91|185\.142\.239\.173)
      # Sub-expression 2.2
      # ==================
      # The source or destination of one of the connections (NETWORK_SUMMARY) 
      # is one of those IPs.
      - op: matches
        path: event/PROCESS/NETWORK_ACTIVITY/?/IP_ADDRESS
        re: ^(196\.38\.48\.121|185\.142\.236\.226|80\.82\.64\.91|185\.142\.239\.173)$

Responding with D&R Rules

These rules were for the Detection part of the D&R rules (the part that indicates what a rule matches); now we need to specify a Response (the part that specifies what to do when the Detection matches) for them.

In this case, until we build confidence that there will truly not be any false positives, we will simply Report the detections:

# This is just a list of actions to take.
# Report will generate a detection (that will be forwarded
# wherever you specified) that we name simply "jeus"
- action: report
  name: jeus

With these in place, we can be confident that if the implant, as described in the report, executes on any of our machines, we'll be notified right away.

Hashes and VirusTotal

You may notice we have not mentioned hashes. The report does not contain any SHA256 hashes (the hash type primarily used by LimaCharlie.io), but beyond that, we rely on VirusTotal to provide this secondary signal.

As a user of LimaCharlie.io, if you configure your VirusTotal API Key, LimaCharlie.io will give you access to it as a Detection component (you can refer to the API as an operator). LimaCharlie.io will even perform caching of results for you.

This means that if we have a general D&R rule for VirusTotal, we expect to be notified anyway if any executable on our machines matches something in VT. This is an example general rule for VT that reports any executable flagged as malicious by two or more anti-virus engines:

# The "lookup" operator simply says to compare the value in "path" 
# with *something*.
op: lookup
# We specify we want to compare the CODE_IDENTITY event's HASH.
event: CODE_IDENTITY
path: event/HASH
# And we want to compare the hash with a LimaCharlie Resource 
# (these can be APIs or threat feeds) called "api/vt" (the VirusTotal
# API resource).
resource: 'lcr://api/vt'
# This VT API returns metadata about which AntiVirus flagged the hash. We
# use the "metadata_rules" to apply further logic to the VT metadata.
metadata_rules:
  # If the length of the list containing AV products claiming the hash
  # to be malicious is greater than 1, we will return a positive match.
  # So if that list is of two or more items, we'll match.
  op: is greater than
  value: 1
  path: /
  # This indicates we do not want to compare the path "/" of the metadata
  # itself, but rather the length of the JSON element at that path.
  length of: true

The Spot Checks

Now that we have covered the data that is available by default, we can think about doing spot checks on our machines. This may or may not be worth it for you. If your organization is not usually the type of organization targeted by this APT, chances are that only doing "passive" checks as above is enough.

For the sake of this discussion however we will assume you do want to go the extra mile.

Here is what we can check for:

Yara Signatures

See jeus.yara at the end of this doc.

  • RC4 Key
  • PDB
  • Domain Name
  • User Agent
  • Registry Key
  • File Prefix

File Presence

  • C:\Windows\system32\uploadmgr*
  • msndll.dat
  • msndll.tmp
  • msncf.dat
  • C:\Recovery\*.exe

Running SpotChecks

The following is a single SpotCheck for all the IOCs above. Note that it requires the limacharlie Python API version 2.0.0 or above; pip install limacharlie --upgrade should do it.

python -m limacharlie.SpotCheck --no-linux --no-macos --n-concurrent 3 --yara ./jeus.yara --file-pattern c:\\windows\\system32\\ "uploadmgr*" 0 --file-pattern c:\\ "msndll*" 2 --file-pattern c:\\recovery\\ "*.exe" 0

Let's analyze this command line a bit:

  • python -m limacharlie.SpotCheck simply instantiates the SpotCheck CLI tool within the Python API.
  • --no-linux --no-macos specifies that we only intend to scan our Windows hosts.
  • --n-concurrent 3 says the SpotCheck tool should scan 3 hosts at a time.
  • --yara ./jeus.yara specifies we want to do a system-wide scan (all files AND memory) using the Yara signature in this file (provided as attachment below).
  • --file-pattern c:\\windows\\system32\\ "uploadmgr*" 0 says we want to look for files in c:\windows\system32\ (NOT subdirectories; the 0 is the maximum depth of subdirectories) whose name begins with uploadmgr.
  • --file-pattern c:\\ "msndll*" 2 says we want to look for files whose name begins with msndll anywhere starting at the c:\ directory and up to two levels of subdirectories down.
  • --file-pattern c:\\recovery\\ "*.exe" 0 finally says we want to look for any .exe files in c:\recovery (no subdirectories).

As soon as we begin running this tool, it will begin scanning the hosts in our organization (via the REST interface) for those IOCs.

All matches and activity will be reported to STDOUT line by line:

  • A line starting with . (UUID): indicates the host with this agent ID (represented here as UUID) is done being scanned.
  • A line starting with ? (UUID): indicates that the host in question matches the type of hosts to scan but is currently offline; the SpotCheck tool will keep trying to reach it.
  • A line starting with X (UUID): some-error-information indicates that the check of this host did not finish because of an error.
  • A line starting with ! (UUID): {some-metadata} indicates that one of our IOCs has been found on the host. The exact metadata will depend on the type of IOC found.

Here is an example of a YARA signature match:

! (09530d37-2df9-4fd2-845e-80c0c06afaa3): {"yara": {"PROCESS": {"USER_ID": 0, "COMMAND_LINE": "/Library/Handsoff/HandsOffDaemon", "PROCESS_ID": 110, "USER_NAME": "root", "FILE_PATH": "/Library/Handsoff/HandsOffDaemon", "PARENT_PROCESS_ID": 1}, "RULE_NAME": "APT_FallChill_RC4_Keys", "PROCESS_ID": 110}}

Once the scan is finished, the SpotCheck tool will exit.

Yara Signature

rule APT_FallChill_RC4_Keys_and_files {
    meta:
        author = "Florian Roth"
        description = "Detect FallChill RC4 Keys, modified by Maxime Lamothe-Brassard"
        reference = "https://securelist.com/operation-applejeus/87553/"
        date = "2018-08-21"
    strings:
        $rc4_code = { C7 ?? ?? DA E1 61 FF
                      C7 ?? ?? 0C 27 95 87
                      C7 ?? ?? 17 57 A4 D6
                      C7 ?? ?? EA E3 82 2B }
        $pdb1 = "Z:\\jeus\\" nocase
        $pdb2 = "H:\\DEV\\TManager\\" nocase
        $domain1 = "www.celasllc.com" nocase
        $useragent1 = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)" nocase
        $reg1 = "HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\TaskConfigs\\Description" nocase
        $fileprefix1 = "\\uploadmgr" nocase
    condition:
        uint16(0) == 0x5a4d and 1 of them
}

 
 