LC Agent 4.3.3 Changes

 

The LimaCharlie Agent is now available in version 4.3.3.

To upgrade your Organization, all you have to do is head to the "Sensor Downloads" section, and click the "Update to New Version" button. This will bring all of your agents up to the latest Stable version. If you notice any issues - although none are expected - you can always click the "Restore Previous Version" button to downgrade to the previous Stable version.

Now that the housekeeping is done, what's new?

  1. Quality-of-life bug fixes around directory listings and system-wide Yara scans.

    1. Investigation IDs are now propagated in the results of Yara scans.

    2. Directory listings now report full absolute file paths for every item, making recursive listing easier to interpret.

    3. Directory listings now interpret the file name pattern case insensitively on Windows.

  2. We've added a "reg_list" command to list the keys and values of a Windows registry key on demand.

  3. We've added a "dir_find_hash" command to look for specific hashes in files given a starting directory, a file name pattern and a recursion depth.
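As a rough sketch, a command like this could be issued from a D&R Response clause in the same way "history_dump" is issued elsewhere in this post. The argument order and syntax below are illustrative only, not confirmed; consult the command documentation for the exact form.

```yaml
# Hypothetical sketch: dir_find_hash takes a starting directory, a file
# name pattern, a recursion depth and the hash(es) to look for. The
# placeholder argument syntax here is illustrative, not confirmed.
- action: task
  command: dir_find_hash <starting-directory> <file-pattern> <recursion-depth> <hash>
```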

These changes are mainly in support of the new LC Python API (2.0.0) and its new Spot Check capability to make organization-wide hunts for IOCs easier (see our other blog post dedicated to this topic).

 

Advanced Windows Events

 

LimaCharlie offers great cross-platform events. We strive to have events fire with the same meaning whether they come from a macOS, Windows or Linux host. There are times, however, when we focus on very platform-specific events in order to facilitate in-depth detections. The following are specialized Windows events.

Remote Thread

The NEW_REMOTE_THREAD event indicates that a process created a thread remotely into another process. This is often used by malware to inject code into another process to make the malicious activity look like it's coming from a different process. In fact, we've written another blog article about it here.
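As a minimal sketch of how this event could be surfaced in a D&R rule, the following reports every NEW_REMOTE_THREAD seen. The "exists" operator and the PROCESS_ID field name are assumptions for illustration, not confirmed syntax.

```yaml
# Minimal sketch; the operator and field name are assumptions, and the
# report name is arbitrary.
detect:
  event: NEW_REMOTE_THREAD
  op: exists
  path: event/PROCESS_ID
respond:
  - action: report
    name: remote-thread-created
```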

Registry Operations

Windows has a unique system called the Registry. It is responsible for storing most of the configuration for the Operating System and installed applications. This makes it a system of great interest for malware authors, who use it to extract sensitive information or to set their malware to start covertly during Windows startup.

LimaCharlie supports three registry events: REGISTRY_CREATE, REGISTRY_DELETE and REGISTRY_WRITE. The CREATE event is generated whenever a process creates a new registry key, while the DELETE event is generated whenever a process deletes a registry key or value. The WRITE event is generated whenever a process writes a new value to an existing key. Put together, these events give you great insight into any registry usage by a process. Each of these events includes the unique identifier of the process that performed the action as well as the path to the relevant registry key.
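A classic use of these events is watching for persistence via the Run keys. The sketch below assumes a substring operator like "contains", a "case sensitive" option, and a REGISTRY_PATH field on the event; all three are illustrative assumptions rather than confirmed syntax.

```yaml
# Sketch only: the "contains" operator, "case sensitive" option and the
# REGISTRY_PATH field name are assumptions for illustration.
detect:
  event: REGISTRY_WRITE
  op: contains
  path: event/REGISTRY_PATH
  value: \CurrentVersion\Run
  case sensitive: false
respond:
  - action: report
    name: registry-run-key-write
```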

Remote Process Handles

In Windows, a process (with the appropriate privileges) can open a Handle to another process. A Handle allows its owner to perform the actions that were requested at Handle creation time. Many actions are possible, but the core ones of interest are reading memory, writing memory and creating threads.

Whenever a process creates a Handle with one of those access rights to another process, a REMOTE_PROCESS_HANDLE event is created. This event contains the unique process identifier of the creating process as well as the target process. Although this event is not an indicator of bad behavior in and of itself, it is a core part of better understanding malware behavior like lateral movement or credentials theft on Windows.
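Since this event is not inherently bad, one low-noise approach is to record it without alerting, excluding agents where a legitimate scanner is known to run. This sketch only uses operators shown elsewhere in this post; the "trusted-scanner" tag is hypothetical.

```yaml
# Sketch: record remote process handles without alerting, skipping agents
# tagged with the hypothetical "trusted-scanner" tag.
detect:
  event: REMOTE_PROCESS_HANDLE
  op: is tagged
  tag: trusted-scanner
  not: true
respond:
  - action: report
    name: remote-process-handle
    publish: false
```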

Example Usage

Great articles have been written on the subject and are linked below. Most use Sysmon as a reference event nomenclature. If you'd like to see the mapping of Sysmon events to LimaCharlie events, we have you covered here.


Asset Management in LimaCharlie

 

Although LimaCharlie was not designed specifically for Asset Management, it is a great platform for acquiring ground-truth information about assets. This information can be extremely valuable when evaluating whether a new vulnerability in a piece of software affects you, or when determining which assets a specific user accessed.

Available information includes:

Host name
Ex: database-server-1
From: contained in all events

OS Version
Ex: Windows 7 64bit
From: os_version command

Patch Level
Ex: Update for Microsoft Office 2010 (KB2597087) 64-Bit Edition
From: os_packages command

Installed Packages
Ex: Microsoft SQL Server 2008 R2 Management Objects (x64) @ 10.50.1750.9
From: os_packages command

Users
Ex: stevejobs @ database-server-1
From: USER_OBSERVED events


LimaCharlie can be thought of as the "primary colors" of security which can be combined and utilized in a limitless number of ways to solve specific organizational challenges. Here are some possible scenarios:

Weekly Software Inventory
A weekly cron job can use the LimaCharlie Python API to gather a list of installed packages and versions using the os_packages command (this script as a starting point). The results get stored in a small database. Using this approach, anyone in the organization can query the database for vulnerable versions of software, licensing audits, or any number of other scenarios.

Incident Response with Compromised User
At some point during an Incident Response, responders can become aware that the credentials of a specific user are compromised. The likely approach is to reset the credentials of the user, but using LimaCharlie we can also:

  • know if the user logs into any other assets in case the credentials are still valid somewhere, and

  • isolate any asset the user credentials were used to log into. This is easy, can be done in about 30 seconds, and is effective immediately across your organization using D&R rules:

    • Detection (if user is observed and the agent is not tagged):

op: and
rules:
  - op: is
    path: event/USER_NAME
    value: company\alice
    event: USER_OBSERVED
  - op: is tagged
    tag: compromised-isolated
    not: true

Response (isolate the computer on the network, tag the agent and alert):
- action: task
  command: segregate_network
- action: add tag
  tag: compromised-isolated
- action: report
  name: compromised-credentials-observed

All of this is just scratching the surface of what you can do with LimaCharlie using Asset Management information. Have an idea? Want help to figure out the best way to get going? Drop us a line, we'll be happy to help.

Happy hunting!

 

Striking a Balance Between Data & Cost

 

The LimaCharlie agents can generate a lot of data, and by default not all of it gets sent back. This article will cover the different mechanisms available to select the type of data you want, where it can be filtered down, and how to best build detections with this in mind.

Where is the data?

There are several control points for data in LimaCharlie.

  1. The agent is where all events are generated. A subset of the events are sent up to the cloud, the others are stored locally for a short period of time.

  2. The cloud receives the events from the agents and processes them with various analytic systems (like D&R rules).

  3. The outputs are simply the various forwarding locations you've selected for your data.

How does the agent deal with events?

The agent has a list of events that need to be sent to the cloud. This list is dynamic and controlled by the various "exfil_" commands. By issuing an "exfil_add X", you tell the agent to start sending events of type X to the cloud. You can optionally set a time expiration per event type. Issuing an "exfil_get" gives you the list of event types currently being sent back. A default "exfil list" is automatically sent to your agents.

When an event is generated in the sensor, if it's in the "exfil" list, it's sent to the cloud. If it is NOT in that list, it is kept in memory. For how long? That depends: the capacity is set to a maximum of about 5000 events or 10 MB of memory, so a busier host will result in a shorter buffer.

These events in memory are not lost. You can access them mainly via the "history_dump" command. When the agent receives it, it sends back to the cloud either the entire content of this memory or, if specified, all events in memory of a specific type.

Filter Events After Analysis

Now that the cloud has received events, it can use them to perform analytics. This is where each event (and stream of events) goes through all the Detection & Response rules you've setup. The next step is to forward those events to the outputs you've configured (like a Syslog endpoint for example).

Many users of LimaCharlie want to apply D&R rules onto data, but not necessarily have that data forwarded to their outputs. Storage can be expensive and although data from the agent to the cloud is very efficiently transferred, the data from the cloud to your output is much more verbose (and friendly) JSON. This can result in large amounts of data. This JSON compresses EXTREMELY well, but it can still be cumbersome to receive it all.

The solution to this problem is to use the blacklist/whitelist parameters on each output. Adding an event type to the blacklist tells the output NOT to forward any events of that type, while having an event type in the whitelist means that ONLY events of that type should be forwarded.
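A hypothetical output definition using a blacklist might look like the sketch below. The module and parameter names shown are illustrative only; check the outputs documentation for the exact keys your output type accepts.

```yaml
# Hypothetical output definition; module and parameter names are
# illustrative, not confirmed - consult the outputs documentation.
siem-syslog:
  module: syslog
  dest_host: siem.example.com:514
  blacklist: FILE_CREATE NEW_TCP4_CONNECTION
```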

How Should I Select the Relevant Data?

The following is a general rule of thumb:

If you have no interest in a particular event type, both for storage as well as analytics, make sure it's not in your "exfil list" by using a D&R rule on event CONNECTED that sends the relevant "exfil_del" command to the agent. This stops it at the source.
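Such a CONNECTED rule might look like the following sketch. The always-true "exists" predicate and the FILE_CREATE event type used here are assumptions for illustration.

```yaml
# Sketch: stop FILE_CREATE at the source for every connecting agent.
# The "exists" predicate is assumed as an always-true match.
detect:
  event: CONNECTED
  op: exists
  path: event/
respond:
  - action: task
    command: exfil_del FILE_CREATE
```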

If you want to use the event type in detections, similarly use a D&R rule to send an "exfil_add" (if the event type is not already present by default) to your agent.

Next, use the blacklist in the outputs for the events which you don't want to store yourself. This will make sure they don't get forwarded to you.

Of course most SOCs have somewhat more complex setups. For example many will send high-value events to highly-available storage like Splunk or ELK, and will send all other events to colder storage like Amazon S3. This helps to get the relevant data for operational purposes while keeping the cost down.

How do Detections Interact with Events?

The final point of discussion is around the use of events by D&R rules. Some of the events generated by the agent are extremely verbose and it's unlikely you want to send them to the cloud at all times because of bandwidth usage. This is where the power of the D&R rules comes through.

It's often advantageous to write D&R rules in two stages. The first stage is a rule that uses the default events that are always sent to the cloud. Its role is not to determine whether something is "bad" beyond all doubt, but rather to operate as a filter.

This first stage usually uses a "report" Response clause with "publish" set to false, indicating the alert is not meant for human consumption. The other Response elements of these first-stage rules issue commands to the agent to gather the extra data necessary. For example, a rule may issue a "history_dump FILE_CREATE" to retrieve all recent file creation events from the agent. If state needs to be carried clearly between the first and second stage, it is common to apply a Tag to the agent, like "suspicious-exec", which is used by the second stage to determine which agents the rule should be evaluated against.

The second stage rule's job is to look for those file creation events (for example) occurring on agents with the "suspicious-exec" tag and matching whatever specifically bad behavior is targeted. If found, the Response component uses the "report" action to trigger an alert.
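The two-stage pattern described above could be sketched as a pair of rules. Everything here is illustrative: the operators, field names, the example of a suspicious execution followed by a dropped DLL, and the placement of the "publish" flag are assumptions rather than confirmed syntax.

```yaml
# Sketch of the two-stage pattern; operators, field names and the
# "publish" flag placement are illustrative, not confirmed syntax.
rules:
  stage1-suspicious-exec:
    detect:
      event: CODE_IDENTITY
      op: ends with
      path: event/FILE_PATH
      value: powershell.exe
    respond:
      - action: report
        name: stage1-suspicious-exec
        publish: false
      - action: add tag
        tag: suspicious-exec
      - action: task
        command: history_dump FILE_CREATE
  stage2-dropped-file:
    detect:
      op: and
      rules:
        - op: is tagged
          tag: suspicious-exec
        - op: ends with
          event: FILE_CREATE
          path: event/FILE_PATH
          value: .dll
    respond:
      - action: report
        name: suspicious-exec-dropped-dll
```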

Conclusion

There are many strategies around data in LimaCharlie. It's easy to get started without having to worry about it, but there is also a lot of power available to help optimize an MSSP's infrastructure.

We're always happy to discuss these strategies with you so drop us a line!

Happy hunting.

 

A New Look

 

Coming up with a symbol to represent all of the knowledge, passion and hope that is poured into a company is no easy task.

LimaCharlie started out as an open source project. Along the way we made the decision to commercialize it so we could focus our efforts and change the landscape of capabilities available to managed security providers. Our technology company was formed from a big idea and lifetimes spent in the pursuit of innovation. 

Cyber space has become a modern battlefront and we feel the weight of the responsibility that comes from making tools that are used to protect people.

With all of that... I am very happy to present to you the symbol we have constructed to represent all that we are and hope to be.

logo_white.png
 

IP GeoLocation Rules

 

The LimaCharlie geolocation API (api/ip-geo) enables you to use geolocation information about an IP address as part of your real-time Detection & Response rules.

What does this mean? Your D&R rules can query geo information about an IP address as a lookup rule and then act based on its content.

This information can be used to generate important context for analysts. Global organizations often have employees traveling around the globe and it is no secret that bringing assets into certain countries can open them up to compromise. Being able to determine if an asset has been in country X during the last 3 months can be a useful piece of information when doing threat hunting or incident response.

Geolocation

An example of this kind of detection is as follows.

Detection:
op: and
rules:
  - op: is tagged
    tag: recently-in-china
    not: true
    event: CONNECTED
  - op: lookup
    resource: lcr://api/ip-geo
    path: event/ext_ip
    metadata_rules:
      op: is
      path: country/iso_code
      value: CN
      not: true

Response:
- action: add tag
  tag: recently-in-china

This is of course just one type of usage. You could geo-fence certain users on sensitive assets using the network-isolation feature, or you could redirect alerts to the relevant SOC based on location - the beauty of D&R rules is that you can adapt them and make them relevant to your organization in ways we could never predict.
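Building on the tag applied above, a follow-up rule could alert whenever a tagged asset that also carries a hypothetical "sensitive" tag connects. Both the scenario and the "sensitive" tag are illustrative; only operators already shown in this post are used.

```yaml
# Sketch: alert when a sensitive asset recently seen in China connects.
# The "sensitive" tag is hypothetical - apply it however fits your fleet.
detect:
  event: CONNECTED
  op: and
  rules:
    - op: is tagged
      tag: recently-in-china
    - op: is tagged
      tag: sensitive
respond:
  - action: report
    name: sensitive-asset-recently-in-china
```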

The full documentation on the geolocation and format of the data is available here.

Happy hunting!

 

Scanning with VirusTotal at Scale

 

VirusTotal is a great tool for security analysts and incident responders. It allows them to quickly scan a specific file using a plethora of different AntiVirus scanners and get a result immediately. 

VirusTotal has a free API key but this tier restricts the user to a maximum number of queries per minute. As a paying customer of VirusTotal you are given a much greater limit. On both tiers it is possible to query VirusTotal programmatically.

VirusTotal can be made incredibly effective as a first pass of detection in combination with further validations. For this approach to work you need to examine the hash of files and code in use within your organization using the VirusTotal API. This can be challenging for organizations as it requires a complex pipeline of analysis and reporting.

Enter LimaCharlie - the easiest way to get the job done.

Using LimaCharlie.io, you can deploy agents to your hosts (Mac, Linux and Windows) in minutes and get data flowing to your own systems right away. You can also input a VirusTotal API key and have LimaCharlie query VirusTotal automatically for you. Along with this ease of deployment we also take care of caching results so if you pay for an API key you get more mileage out of your quota.

If we find a match we can report it through many integrations like Slack or webhooks, and you can even automate more advanced responses like isolating the host from the network or fetching additional information. And of course, all of this happens in real-time.

Virus Total

Getting up and running is simple: create a LimaCharlie account, install some sensors by following the instructions, subscribe to the VirusTotal integration (it's free), input your VirusTotal API key and create your detection flow.

This is what a simple detection flow for VirusTotal looks like: query all unique hashes against VT and report any hash that at least one AV product says is "bad":

Detect:

op: lookup
event: CODE_IDENTITY
path: event/HASH
resource: 'lcr://api/vt'
metadata_rules:
  path: /
  length of: true
  value: 0
  op: is greater than

Respond:

- action: report
  name: virustotal


Organizing Detection & Response Rules

 

Serverless Detection & Response rules are a game changer for most of our customers. Being able to deploy, within seconds, a rule that immediately takes effect and can interact, investigate or mitigate using the LimaCharlie agent will do that to you.

An aspect that is often overlooked initially, though, is the organization of the rules themselves. This is important for most organizations and it is critical for Managed Security Service Providers that manage multiple other organizations.

Having a clear reference of which rules are running where - and which version of those rules - is critical to smooth operations.

If only you had a solution for this that didn't force you into some sub-par vendor-mandated interface. If only you could use Git (or your favorite source code repository), since after all it's the perfect system for tracking configuration through time and versions.

Organizing Detections

Enter the LimaCharlie Python API. This API provides you the ability to get specific feeds of live data from your agents, query your LimaCharlie configuration, change it, send tasks to agents and more.

New to the Python API is a Sync functionality. It's available as a pure API but we'll discuss the command line portion here since it is easier to explain.

This tool allows you to download your current LimaCharlie Detection & Response rules configuration to a config file, and to do the reverse by pushing the rules into your organization.

What makes this particularly useful is the ability to structure your configuration files using an "include" statement, which allows you to create a hierarchy of rules, combined in whatever way you see fit.

Let's see a quick example:

LCConf (the default config file name)

version: 1
include:
  - subsets/secondary.yml
rules:
  VirusTotal:
    detect:
      event: CODE_IDENTITY
      metadata_rules:
        length of: true
        op: is greater than
        path: /
        value: 0
      op: lookup
      path: event/HASH
      resource: lcr://api/vt
    respond:
      - action: report
        name: virustotal

subsets/secondary.yml

version: 1
rules:
  win-suspicious-exec-name:
    detect:
      op: external
      resource: lcr://detection/win-suspicious-exec-name
      name: win-suspicious-exec-name
    respond:
      - action: report
        name: win-suspicious-exec-name
      - action: task
        command: history_dump

The top file defines a single D&R rule named "VirusTotal", but it also includes a file in the "subsets" directory called "secondary.yml". This secondary file contains a detection called "win-suspicious-exec-name". So if you do a "push" using Sync, those files will get combined and put into effect in your organization.

The configuration files are YAML. Since the files get combined at push time, you can maintain them in a repository independently and tweak them as a team.

How exactly do you fetch the config and push the rules? It doesn't get any easier:

Download the configs locally:

python -m limacharlie.Sync ORGANIZATION-ID fetch

Push the local config to the cloud:

python -m limacharlie.Sync ORGANIZATION-ID push

The sync tool also supports arguments like --dry-run and --force. For a full description see the documentation here.