Basic indicators such as IP addresses and domain names can be valuable for defending against a specific version of malware, but their value can be short-lived, since attackers are adept at quickly moving to different addresses or domains. Indicators based on content, on the other hand, tend to be more valuable and longer lasting, since they identify malware using more fundamental characteristics.
Signature-based IDSs are the oldest and most commonly deployed systems for detecting malicious activity via network traffic. IDS detection depends on knowledge about what malicious activity looks like. If you know what it looks like, you can create a signature for it and detect it when it happens again. An ideal signature can send an alert every time something malicious happens (true positive), but will not create an alert for anything that looks like malware but is actually legitimate (false positive).
One of the most popular IDSs is Snort. A Snort signature, or rule, links together a series of elements (called rule options) that must all be true before the rule fires. The primary rule options are divided into those that identify content elements (called payload rule options in Snort lingo) and those that identify elements that are not content related (called nonpayload rule options).
Examples of nonpayload rule options include certain flags, specific values of TCP or IP headers, and the size of the packet payload. For example, the rule option flow:established,to_client selects packets that are part of an established TCP session, originate at the server, and are destined for the client. Another example is dsize:200, which selects packets that have 200 bytes of payload.
Let’s create a basic Snort rule to detect the initial malware sample we looked at
earlier in this chapter (and summarized in Table 14-1). This malware generates network traffic consisting of an HTTP GET
request.
When browsers and other HTTP applications make requests, they populate a User-Agent header field to identify the application making the request. A typical browser User-Agent starts with the string Mozilla (due to historical convention) and may look something like Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1). This User-Agent provides information about the version of the browser and the OS.
The User-Agent used by the malware we discussed earlier is Wefa7e, which is distinctive and can be used to identify malware-generated traffic. The following signature targets the unusual User-Agent string used by the sample run of our malware:
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"TROJAN Malicious User-Agent"; content:"|0d 0a|User-Agent: Wefa7e"; classtype:trojan-activity; sid:2000001; rev:1;)
Snort rules are composed of two parts: a rule header and rule options. The rule header contains the rule action (typically alert), the protocol, the source and destination IP addresses, and the source and destination ports.
By convention, Snort rules use variables to allow customization for each environment: the $HOME_NET and $EXTERNAL_NET variables specify internal and external network IP address ranges, and $HTTP_PORTS defines the ports that should be interpreted as HTTP traffic.
In this case, since the -> in the header indicates that the rule applies to traffic going in only one direction, the $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS header matches outbound traffic destined for HTTP ports.
The rule option section contains elements that determine whether the rule should fire. The inspected elements are generally evaluated in order, and all must be true for the rule to take action. Table 14-2 describes the keywords used in the preceding rule.
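The header/options split can be illustrated with a short parsing sketch. This is a simplified illustration only, not a full Snort rule grammar; real rules would need proper handling of escaped semicolons inside option values.

```python
# Simplified sketch: split a Snort rule into header fields and rule options.
# Not a full parser; content values containing escaped ";" would break this.
rule = ('alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS '
        '(msg:"TROJAN Malicious User-Agent"; '
        'content:"|0d 0a|User-Agent: Wefa7e"; '
        'classtype:trojan-activity; sid:2000001; rev:1;)')

# The header is everything before the opening parenthesis.
header, _, options = rule.partition('(')
action, proto, src_ip, src_port, direction, dst_ip, dst_port = header.split()

# Options are semicolon-separated keyword:value pairs inside the parentheses.
opts = [o.strip() for o in options.rstrip(');').split(';') if o.strip()]

print(action, direction, dst_port)  # alert -> $HTTP_PORTS
print(opts[0])                      # msg:"TROJAN Malicious User-Agent"
```

The seven positional header fields (action, protocol, source address/port, direction, destination address/port) fall out of a simple whitespace split, which reflects how rigid the header format is compared to the free-form option list.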
Within the content term, the pipe symbol (|) is used to indicate the start and end of hexadecimal notation. Anything enclosed between two pipe symbols is interpreted as hex values instead of literal characters. Thus, |0d 0a| represents the break between HTTP headers. In the sample signature, the content rule option will match the HTTP header field User-Agent: Wefa7e, since HTTP headers are separated by the two bytes 0d and 0a (a carriage return followed by a line feed).
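To see why the |0d 0a| prefix matters, consider a sketch of the raw bytes of a request like the one the malware sends. The host and path below are placeholders, not values taken from the sample.

```python
# Sketch: why content:"|0d 0a|User-Agent: Wefa7e" matches the malware request.
# HTTP headers are separated by CRLF (hex 0d 0a), so prefixing the pattern
# with \x0d\x0a anchors the match to the start of a header line.
request = (b"GET /index.html HTTP/1.1\x0d\x0a"   # placeholder request line
           b"Host: example.com\x0d\x0a"          # placeholder host
           b"User-Agent: Wefa7e\x0d\x0a"
           b"\x0d\x0a")

pattern = b"\x0d\x0aUser-Agent: Wefa7e"
print(pattern in request)  # True
```

Without the CRLF prefix, a hypothetical header such as X-Old-User-Agent: Wefa7e would also match; the |0d 0a| ensures the match begins on a fresh header line.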
We now have the original indicators and the Snort signature. Often, especially with automated analysis techniques such as sandboxes, analysis of network-based indicators would be considered complete at this point. We have IP addresses to block at firewalls, a domain name to block at the proxy, and a network signature to load into the IDS. Stopping here, however, would be a mistake, since the current measures provide only a false sense of security.
A malware analyst must always strike a balance between expediency and accuracy. For network-based malware analysis, the expedient route is to run malware in a sandbox and assume the results are sufficient. The accurate route is to fully analyze malware function by function.
The example in the previous section is real malware for which a Snort signature was created
and submitted to the Emerging Threats list of signatures. Emerging Threats is a set of
community-developed and freely available rules. The creator of the signature stated, in his original submission of the proposed rule, that he had seen two values for the User-Agent string in real traffic: Wefa7e and Wee6a3. He submitted the following rule based on this observation:
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"ET TROJAN WindowsEnterpriseSuite FakeAV Dynamic User-Agent"; flow:established,to_server; content:"|0d 0a|User-Agent: We"; isdataat:6,relative; content:"|0d 0a|"; distance:0; pcre:"/User-Agent: We[a-z0-9]{4}\x0d\x0a/"; classtype:trojan-activity; reference:url,www.threatexpert.com/report.aspx?md5=d9bcb4e4d650a6ed4402fab8f9ef1387; sid:2010262; rev:1;)
This rule has a couple of additional keywords, as described in Table 14-3.
Table 14-3. Additional Snort Rule Keyword Descriptions
Keyword | Description
---|---
flow | Specifies characteristics of the TCP flow being inspected, such as whether a flow has been established and whether packets are from the client or the server
isdataat | Verifies that data exists at a given location (optionally relative to the last match)
distance | Modifies the content match so that the new search begins the specified number of bytes past the end of the previous match
pcre | A Perl Compatible Regular Expression that indicates the pattern of bytes to match
reference | A reference to an external system
While the rule is rather long, the core of the rule is simply the User-Agent string in which We is followed by exactly four alphanumeric characters (We[a-z0-9]{4}). In the Perl Compatible Regular Expressions (PCRE) notation used by Snort, the following conventions apply:

Square brackets ([ and ]) indicate a set of possible characters.

Curly brackets ({ and }) indicate the number of characters.

Hexadecimal notation for bytes is of the form \xHH.
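The PCRE behavior can be sketched with Python's re module, which accepts this simple pattern unchanged:

```python
import re

# The rule's PCRE, expressed as a Python regex (compatible for this pattern).
# \x0d\x0a is the CRLF that terminates an HTTP header line.
pattern = re.compile(rb"User-Agent: We[a-z0-9]{4}\x0d\x0a")

print(bool(pattern.search(b"User-Agent: Wefa7e\x0d\x0a")))  # True
print(bool(pattern.search(b"User-Agent: Wee6a3\x0d\x0a")))  # True
print(bool(pattern.search(b"User-Agent: WeABCD\x0d\x0a")))  # False: uppercase
print(bool(pattern.search(b"User-Agent: Wefa7\x0d\x0a")))   # False: only 3 chars after We
```

Both observed strings (Wefa7e and Wee6a3) match, while strings outside the [a-z0-9] set or of the wrong length do not.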
As noted previously, the rule headers provide some basic information, such as IP address (both
source and destination), port, and protocol. Snort keeps track of TCP sessions, and in doing so
allows you to write rules specific to either client or server traffic based on the TCP handshake. In
this rule, the flow
keyword ensures that the rule fires only for
client-generated traffic within an established TCP session.
After some use, this rule was modified slightly to remove the false positives associated with the use of the popular Webmin software, which happens to have a User-Agent string that matches the pattern created by the malware. The following is the most recent rule as of this writing:
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"ET TROJAN WindowsEnterpriseSuite FakeAV Dynamic User-Agent"; flow:established,to_server; content:"|0d 0a|User-Agent|3a| We"; isdataat:6,relative; content:"|0d 0a|"; distance:0; content:!"User-Agent|3a| Webmin|0d 0a|"; pcre:"/User-Agent: We[a-z0-9]{4}\x0d\x0a/"; classtype:trojan-activity; reference:url,www.threatexpert.com/report.aspx?md5=d9bcb4e4d650a6ed4402fab8f9ef1387; reference:url,doc.emergingthreats.net/2010262; reference:url,www.emergingthreats.net/cgi-bin/cvsweb.cgi/sigs/VIRUS/TROJAN_WindowsEnterpriseFakeAV; sid:2010262; rev:4;)
The bang symbol (!) before the content expression (content:!"User-Agent|3a| Webmin|0d 0a|") indicates a logically inverted selection (that is, NOT), so the rule will trigger only if the described content is not present.
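The combined effect of the PCRE and the negated content can be sketched in Python. This illustrates the rule's logic only, not how Snort evaluates options internally.

```python
import re

# The rule's core pattern: We followed by four lowercase alphanumerics.
ua_re = re.compile(rb"User-Agent: We[a-z0-9]{4}\x0d\x0a")

def rule_fires(payload: bytes) -> bool:
    # The negated content acts as an exclusion: Webmin traffic never alerts,
    # even though "Webmin" happens to match the We[a-z0-9]{4} pattern.
    if b"User-Agent: Webmin\x0d\x0a" in payload:
        return False
    return ua_re.search(payload) is not None

print(rule_fires(b"User-Agent: Wefa7e\x0d\x0a"))  # True: malware-style string
print(rule_fires(b"User-Agent: Webmin\x0d\x0a"))  # False: excluded by content:!
```

Note that Webmin is exactly why the exception was needed: "bmin" is four characters from [a-z0-9], so without the negated content the PCRE alone would fire on legitimate Webmin traffic.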
This example illustrates several attributes typical of the signature-development process.
First, most signatures are created based on analysis of the network traffic, rather than on analysis
of the malware that generates the traffic. In this example, the submitter identified two strings
generated by the malware, and speculated that the malware uses the We
prefix plus four additional random alphanumeric characters.
Second, the uniqueness of the pattern specified by the signature is tested to ensure
that the signature is free of false positives. This is done by running the signature across real
traffic and identifying instances when false positives occur. In this case, when the original
signature was run across real traffic, legitimate traffic with a User-Agent of Webmin
produced false positives. As a result, the signature was refined by
adding an exception for the valid traffic.
As previously mentioned, traffic captured when malware is live may provide details that are difficult to replicate in a laboratory environment, since an analyst can typically see only one side of the conversation. On the other hand, the number of available samples of live traffic may be small. One way to ensure that you have a more robust sample is to repeat the dynamic analysis of the malware many times. Let’s imagine we ran the example malware multiple times and generated the following list of User-Agent strings:
[Table of User-Agent strings observed across the repeated runs; the listing is not recoverable here, but each value was We followed by four characters drawn from a-f and 0-9.]
This is an easy way to identify random elements of malware-generated traffic. These results appear to confirm that the assumptions made by the official Emerging Threats signature are correct: the character set of the four characters is alphanumeric, and the characters are randomly distributed. However, there is another issue with the current signature (assuming that the results were real): the results appear to use a smaller character set than the one specified in the signature. The PCRE is listed as /User-Agent: We[a-z0-9]{4}\x0d\x0a/, but the results suggest that the characters are limited to a-f rather than a-z. This character distribution is often seen when binary values are converted directly to their hex representation.
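One plausible generation scheme consistent with these results is converting random bytes directly to hex. This is an assumption for illustration, not the malware's confirmed algorithm.

```python
import os

# Assumed scheme: two random bytes rendered as hex yield exactly four
# characters, each limited to 0-9 and a-f (never g-z).
token = os.urandom(2).hex()   # e.g. a string like 'fa7e'
user_agent = "We" + token     # e.g. a string like 'Wefa7e'

print(len(token))                                    # 4
print(all(c in "0123456789abcdef" for c in token))   # True
```

If the malware does work this way, a tighter PCRE of We[a-f0-9]{4} would match the traffic while excluding more legitimate strings, though tightening a deployed signature on an assumption carries its own false-negative risk.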
As an additional thought experiment, imagine that the results from multiple runs of the malware resulted in the following User-Agent strings instead:
[Table of User-Agent strings from this thought experiment; the listing is not recoverable here, but the prefixes included Wf and W1 as well as We, and the strings were six or sometimes seven characters long.]
While the signature may catch some instances, it obviously is not ideal, given that whatever is generating the traffic can produce Wf and W1 (at least) in addition to We. It is also clear from this sample that although the User-Agent is often six characters, it can be seven characters as well.
Because the original sample size was two, the assumptions made about the underlying code may have been overly aggressive. While we don’t know exactly what the code is doing to produce the listed results, we can now make a better guess. Dynamically generating additional samples allows an analyst to make more informed assumptions about the underlying code.
Recall that malware can use system information as an input to what it sends out. Thus, it’s helpful to have at least two systems generating sample traffic to prevent false assumptions about whether some part of a beacon is static. The content may be static for a particular host, but may vary from host to host.
For example, let’s assume that we run the malware multiple times on a single host and get the following results:
[Table not recoverable here; every run on this single host produced the same User-Agent string.]
Assuming that we didn’t have any live traffic to cross-check with, we might mistakenly write a rule to detect this single User-Agent. However, the next host to run the malware might produce this:
[Table not recoverable here; the second host produced a different User-Agent string, again identical across all of its runs.]
When writing signatures, it is important to identify variable elements of the targeted content so that they are not mistakenly included in the signature. Content that is different on every trial run typically indicates that the source of the data has some random seed. Content that is static for a particular host but varies with different hosts suggests that the content is derived from some host attribute. In some lucky cases, content derived from a host attribute may be sufficiently predictable to justify inclusion in a network signature.
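This last point can be sketched in Python. The hash-of-hostname derivation below is hypothetical, chosen only to demonstrate content that is static for one host but varies between hosts; the actual malware's derivation, if any, is unknown.

```python
import hashlib

def host_token(hostname: str) -> str:
    # Hypothetical host-derived value: hash the hostname and truncate.
    # Deterministic, so every run on one host yields the same token.
    return hashlib.md5(hostname.encode()).hexdigest()[:6]

# Repeated runs on one host: identical output every time.
print(host_token("host-a") == host_token("host-a"))  # True

# A different host yields a different (but equally stable) token.
print(host_token("host-a"), host_token("host-b"))
```

A signature written from a single host's traffic would hard-code that host's token and miss every other infected machine, which is exactly the trap the two-system approach is meant to avoid.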