How to Score an Analytic
These are the steps to take to score an analytic with the Summiting the Pyramid methodology. This walkthrough is based on an analytic for suspicious named pipe creation by Cobalt Strike, authored by Florian Roth (Nextron Systems) and Christian Burkard (Nextron Systems).
title: CobaltStrike Named Pipe Patterns
id: 85adeb13-4fc9-4e68-8a4a-c7cb2c336eb7
status: test
description: Detects the creation of a named pipe with a pattern found in CobaltStrike malleable C2 profiles
references:
    - https://svch0st.medium.com/guide-to-named-pipes-and-hunting-for-cobalt-strike-pipes-dc46b2c5f575
    - https://gist.github.com/MHaggis/6c600e524045a6d49c35291a21e10752
author: Florian Roth (Nextron Systems), Christian Burkard (Nextron Systems)
date: 2021/07/30
modified: 2022/10/09
tags:
    - attack.defense_evasion
    - attack.privilege_escalation
    - attack.t1055
logsource:
    product: windows
    category: pipe_created
    definition: 'Note that you have to configure logging for Named Pipe Events in Sysmon config (Event ID 17 and Event ID 18). The basic configuration is in popular sysmon configuration (https://github.com/SwiftOnSecurity/sysmon-config), but it is worth verifying. You can also use other repo, e.g. https://github.com/Neo23x0/sysmon-config, https://github.com/olafhartong/sysmon-modular. How to test detection? You can always use Cobalt Strike, but also you can check powershell script from this site https://svch0st.medium.com/guide-to-named-pipes-and-hunting-for-cobalt-strike-pipes-dc46b2c5f575'
detection:
    selection_malleable_profiles:
        - PipeName|startswith:
              - '\mojo.5688.8052.183894939787088877'
              - '\mojo.5688.8052.35780273329370473'
              - '\mypipe-f'
              - '\mypipe-h'
              - '\ntsvcs'
              - '\scerpc'
              - '\win_svc'
              - '\spoolss'
              - '\msrpc_'
              - '\win\msrpc_'
              - '\wkssvc'
              - '\f53f'
              - '\windows.update.manager'
              - '\SearchTextHarvester'
              - '\DserNamePipe'
              - '\PGMessagePipe'
              - '\MsFteWds'
              - '\f4c3'
              - '\fullduplex_'
              - '\rpc_'
        - PipeName:
              - '\demoagent_11'
              - '\demoagent_22'
    selection_malleable_profile_CatalogChangeListener:
        PipeName|startswith: '\Winsock2\CatalogChangeListener-'
        PipeName|endswith: '-0,'
    filter:
        PipeName:
            - '\wkssvc'
            - '\spoolss'
            - '\scerpc'
            - '\ntsvcs'
            - '\SearchTextHarvester'
            - '\PGMessagePipe'
            - '\MsFteWds'
    condition: 1 of selection_malleable_profile* and not filter
falsepositives:
    - Chrome instances using the exact same pipe name "mojo.something"
level: high
Step 1: Scoring the analytic’s sensor data
Just as not all analytics are created equal, not all sensors are created equal. Our event robustness categories identify the different layers within the OS at which observables can be collected, and each event within each column provides different insight into the OS.
In the pipe creation example, the sensor data identified is Windows, and the category is pipe_created. Based on the types of Event IDs Windows provides and the field names which belong to those Event IDs, we know that the analytic is written for Sysmon logs. Based on past research, emulation, and Microsoft documentation, we understand that Event ID 17 fires after ImpersonateNamedPipeClient is called, which is a Column U: User-Mode function [2]. However, additional research found that certain Sysmon events are triggered by a minifilter [3]. Minifilters are executed by the Filter Manager, which operates in kernel-mode [4]. Because of this, the sensor data placement of this analytic is Column K: Kernel-Mode.
Important
Some analytics might be vendor-tool agnostic. If the field names can be applied to various tools, be sure to score the analytic against the sensor you will be using in your environment, and keep this consistent as you look at each individual analytic.
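As a minimal sketch of Step 1 (this is not official Summiting the Pyramid tooling, and the lookup table entries are assumptions for illustration), a Sigma logsource can be mapped to an event-robustness column once the underlying event-generation mechanism is known:

```python
# Hypothetical helper: map a Sigma logsource to a Summiting the Pyramid
# sensor-data column. Entries must be researched per sensor, as in the
# walkthrough above; only pipe_created is filled in here.
SENSOR_COLUMNS = {
    # (product, category) -> column: "U" = user-mode, "K" = kernel-mode
    ("windows", "pipe_created"): "K",  # Sysmon EID 17/18 driven by a kernel-mode minifilter
}

def sensor_column(product: str, category: str) -> str:
    """Return the sensor-data column for a logsource, or raise if unscored."""
    key = (product.lower(), category.lower())
    if key not in SENSOR_COLUMNS:
        raise ValueError(f"logsource not yet scored: {product}/{category}")
    return SENSOR_COLUMNS[key]

print(sensor_column("windows", "pipe_created"))  # -> K
```

In practice such a table would be extended per environment, since the same Sigma logsource can be backed by different sensors with different placements.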
Step 2: Break down each of the observables
Each of the observables, or the individual components of an analytic, will have its own score. Scoring each individually will help us define a composite score for the whole analytic. Some observable scores will be more strongly determined by the fields searched, while others will be more influenced by the values specified.
The selection_malleable_profiles and selection_malleable_profile_CatalogChangeListener selections look for pipe names used by CobaltStrike or certain Windows tools. Since the pipe names specified appear to be used by CobaltStrike, this initially looks like Level 2: Core to Adversary-Brought Tool, since the values are specific to a tool the adversary brings. Upon closer inspection, however, these are not actually the names utilized by CobaltStrike tooling. For example, the pipe name ntsvcs is meant to be mistaken for the ntsvc pipe used by Windows Task Manager. In addition to only somewhat resembling legitimate pipe names, these values can be changed by the adversary with little effort and without reconfiguring the tool. Because of this, the group of observables is scored at Level 1: Ephemeral Values.
The last component of the analytic is a filter, which improves the precision of the analytic by reducing the number of false positives it generates. At present, scoring with the Summiting the Pyramid methodology focuses solely on the robustness of an analytic, not its precision or recall. Because of this, filter sections will not be scored.
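The bookkeeping for Step 2 can be sketched as follows (the naming scheme is an assumption for illustration, not prescribed by the methodology): each observable is recorded with its own robustness level, and the filter is deliberately left out.

```python
# Per-observable levels for this rule, following the walkthrough's conclusion
# that every pipe-name value is adversary-changeable (Level 1: Ephemeral
# Values). The "filter" section is omitted: it affects precision, not
# robustness, and is therefore not scored.
observable_levels = {
    "selection_malleable_profiles.PipeName|startswith": 1,
    "selection_malleable_profiles.PipeName": 1,
    "selection_malleable_profile_CatalogChangeListener.PipeName|startswith": 1,
    "selection_malleable_profile_CatalogChangeListener.PipeName|endswith": 1,
}

print(sorted(set(observable_levels.values())))  # -> [1]
```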
Step 3: Analyze the selection or condition of the analytic
Before scoring the analytic, the final step is to consider how the separate components of an analytic are related. Understanding the logic of an analytic will help determine how robust it is.
In Sigma specifically, two things must be examined to understand the robustness of the full analytic with all components combined: the structure of each selection, which determines whether its components are joined with an AND or an OR, and the condition at the bottom of the rule, which combines the selections and may apply a filter to the analytic.
For our example, the condition states that this analytic will fire if any one of the selection_malleable_profile* selections matches, unless the filter also matches. There are four field components across the two selections: PipeName|startswith and PipeName in selection_malleable_profiles, and PipeName|startswith and PipeName|endswith in selection_malleable_profile_CatalogChangeListener. In selection_malleable_profile_CatalogChangeListener, the two fields form a map, so they are connected with an AND; in selection_malleable_profiles, the two field blocks are list items, so they are connected with an OR. The condition's "1 of selection_malleable_profile*" then connects the two selections with an OR. So, the final analytic logic looks like this:
((selection_malleable_profiles: PipeName|startswith OR PipeName) OR (selection_malleable_profile_CatalogChangeListener: PipeName|startswith AND PipeName|endswith)) AND NOT filter
The "not filter" indicates that anything matching the filter is excluded from the results. Based on the Summiting the Pyramid methodology, analytic components that are AND'ed together fall to the score of the lowest observable.
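The combining logic above can be sketched in a few lines (the helper names are hypothetical, and the OR rule, taking the highest level because an adversary must evade every OR'ed branch, is my reading of the Combining Observables release rather than something stated in this walkthrough):

```python
# AND'ed components fall to the lowest observable level; OR'ed branches take
# the highest level among them. Filters are ignored for robustness scoring.
def and_level(levels):
    return min(levels)

def or_level(levels):
    return max(levels)

# Every observable in this rule is Level 1 (ephemeral), so the choice of
# operator inside each selection does not change the outcome here.
profiles = or_level([1, 1])    # selection_malleable_profiles
catalog = and_level([1, 1])    # selection_malleable_profile_CatalogChangeListener
analytic_level = or_level([profiles, catalog])  # 1 of selection_malleable_profile*
print(analytic_level)  # -> 1
```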
Important
To read more about AND and OR condition scoring, check out the release on Combining Observables.
Step 4: Give the analytic a final score
Now that we understand the individual components of this analytic, we can score the overall analytic with a robustness level.
The sensor data was placed at the kernel-mode level, putting the final analytic in Column K: Kernel-Mode. The individual observables were all scored as ephemeral values, placing them at Level 1: Ephemeral Values, and the filter used to increase precision was not scored. Since every observable sits at Level 1, combining them, whether with AND (which falls to the lowest observable) or OR, still yields Level 1. Therefore, the robustness score of this analytic is 1K.
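Putting Steps 1 through 3 together, the final score is simply the composite level joined with the sensor column (a trivial, hypothetical helper, shown only to make the notation concrete):

```python
# Compose the Summiting the Pyramid score string: observable level (row)
# followed by sensor-data column, e.g. Level 1 + Column K -> "1K".
def robustness_score(level: int, column: str) -> str:
    return f"{level}{column}"

print(robustness_score(1, "K"))  # -> 1K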
The score can then be placed on the 2D model diagram.
And that’s it! You have officially scored an analytic based on the Summiting the Pyramid methodology. Knowing the steps to score an analytic, you can apply this to your environment, see where your analytics fall, and determine if there are any ways your analytics can be improved.
Remember, not all analytics can be scored utilizing this methodology. For example, some analytics might be tuned specifically for your environment or for collecting contextual data rather than detection. We are documenting different use cases where some analytics would not be scored, and will continue to update the Summiting methodology to reflect this.
Do you have analytics that should be documented in the analytic repository? Do you have new fields or observables which can be added to the analytics observables table? Submit a request to our team.
References