6
Engage Attackers with Active Defense

Companies can protect their important assets and make sure that enterprise-wide policies and structures are in place to minimize risk, but attackers are still out there. They are increasingly well funded and sophisticated, supported by a well-developed marketplace for malware and other tools for infiltrating networks, and using innovative tactics such as multistep attacks, misdirection, and stealthier malware, all designed to defeat corporate defenses.1 As a result, companies’ cybersecurity approach has to move from passive to active defense.

Passive defense means putting in place protections to keep attackers away from sensitive information assets. In a passive defense model, companies use security operations centers (SOCs) to monitor and manage their defense mechanisms. In military terms, the Maginot Line—the fortifications France built along its border with Germany before the Second World War—was a passive defense strategy.

Active defense means engaging attackers long before they might succeed in causing a breach. The Royal Air Force embraced active defense when it used the new technology of radar to identify Luftwaffe raids while the German planes were still over the English Channel. The alerts enabled the RAF to dispatch aircraft to disrupt these attacks before they reached British cities.

The basic passive defense capabilities that a traditional SOC offers are utterly essential, but companies also have to turn on their radar; they have to create active defenses to engage attackers, gather intelligence, divert attention from the valuable assets, and tune defenses in real time. Attackers will not wait, so companies need to simultaneously establish basic SOC capabilities and learn how to engage in active defense.

THE LIMITATIONS OF PASSIVE DEFENSE

Companies started creating SOCs to deal with the deluge of security data generated by disparate systems, platforms, and applications—from business applications and identity and access management (I&AM) platforms to antivirus tools, intrusion detection system (IDS) devices, and firewalls.2 They integrated all this information, at least to an extent, into security incident and event management (SIEM) tools that provide aggregation, correlation, alerting, and reporting capabilities.3

SOC activity is triggered when a sensor detects the signature of an action that is known to be bad; this might be an access request from an Internet Protocol (IP) address associated with cyber-criminals or a piece of code with a known malware signature embedded within it. When the alert sounds, an analyst determines whether it signifies a legitimate threat or a false alarm. This triage function can account for up to 70 percent of an analyst’s time. If the analyst believes the threat is genuine, it is escalated to and addressed by SOC analysts with the appropriate level of expertise.

In short, SOC analysts review alerts, filter out false positives, determine the level of severity, and request action to remediate the issue—reimaging a server that has been infected with malware, for example. In some cases, the SOC team may recommend that the company launch its incident response process in order to address a significant breach in the environment that is compromising sensitive data.4

SOCs have provided tremendous value for companies that have implemented them. The very process of building an SOC often reveals gaps in a company’s perimeter defenses and highlights where it needs to enhance its antimalware, web filtering, intrusion detection, and firewall infrastructure.5 Once up and running, SOCs bring together security data and expertise in one place, which means companies miss fewer security events and catch them earlier. The basic concept has now reached a stage where a robust market for managed SOC services exists, and many companies choose to outsource the capability rather than build it themselves.

This SOC approach certainly helps reduce risk, but it is severely limited, especially in the face of determined and creative attackers. Analysts engage only when prompted by an alert from the network sensors and then respond with the same level of energy to every alert, which means wasting considerable time filtering out false alarms when they could be addressing real issues. Organizations do not have enough staff for that luxury. Most do not even have enough analysts to review all of the incidents that occur, which means the network sensors end up being tuned to generate only the number of alerts that analysts can manage. As a result, a host of potential incidents are never scrutinized at all. Even with today’s advanced analytic approaches, it is not possible to act in real time to block all the potentially serious threats that are detected.

SOC operations rely on tools, but if the tools are poorly configured, they will not see everything they should. Add in the fact that the most sophisticated attackers already know how to circumvent the most popular security applications, and it is clear that companies’ safety nets have some big holes.

Perhaps the most important limitation, however, is that SOCs work on the principle of looking for the fingerprints of malware that is already known. This signature-based approach provides no protection against zero-day exploits, for which no known signature exists. Even if a firm consistently has up-to-date virus definitions and good cyber-hygiene practices across its networks, adversaries can use new zero-day exploits and techniques such as spear-phishing to defeat or circumvent such signature-based defenses. Some criminal hackers already have advanced tradecraft, and the black market for advanced tool kits means that companies have to defend themselves against an ever wider range of adversaries. In 2013, for example, security firm Secunia reported that the 25 most popular software programs contained nine zero-day vulnerabilities.6 Assuming a monthly vendor patch release schedule, companies relying on a signature-based approach are exposed to previously unknown threats on 270 days out of 365.
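The 270-day figure follows from simple worst-case arithmetic, sketched below. The monthly patch cycle is the assumption stated in the text; the calculation assumes the nine exposure windows do not overlap.

```python
# Worst-case exposure-window arithmetic behind the 270-day figure above.
# Assumption (from the text): a monthly vendor patch release schedule,
# so each zero-day can remain unpatched for up to ~30 days.
ZERO_DAYS = 9          # zero-day vulnerabilities in the 25 most popular programs (Secunia, 2013)
PATCH_CYCLE_DAYS = 30  # assumed days between vendor patch releases

exposed_days = ZERO_DAYS * PATCH_CYCLE_DAYS  # worst case: windows do not overlap
print(f"Exposed to unknown threats on up to {exposed_days} of 365 days")  # → 270
```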

The final limitation of the signature-based approach is that it cannot tackle the growing threat from insiders. Insider threats range from operator error (e.g., failure to update virus definitions and sustain firewalls) to user error (e.g., being spear-phished by clicking on the wrong link in an e-mail), to malevolent actions spurred by a range of motivations. CISOs must adopt an active approach to mitigating such insider threats, just as they must do for many external threats. In short, SOCs are woefully inadequate at coping with the multitude of cybersecurity threats faced by companies today, and companies need to deploy active defenses to engage attackers.

KNOW THE ENEMY AND ACT ACCORDINGLY

Active defense uses both manual and automated processes not only to detect but also to deceive, deter, and manage attackers. In some cases this means stopping and expelling them when they are discovered in the network; in others, it means actively engaging them on the network to monitor their actions, developing additional intelligence on their methods and keeping them occupied so they cannot inflict harm. It also means having the ability to tune defenses in real time to thwart emerging attacks.

As tempting as it might be, active defense should not include hacking back against adversaries. Cyber-vigilantism may seem an appealing way to retaliate, but it is illegal in most jurisdictions. In addition, given the challenges in attributing attacks (driven in part by attackers using others’ infrastructure), a reverse hack could easily damage an unwitting third party, with all the legal and reputation exposure that entails.

Adopting an active approach to defense has three appreciable benefits. First, it makes better use of existing people. Second, it allows organizations to focus on the specific hackers who present the greatest threat. Finally, it enables a hypothesis-based approach that lets the security team reach a better answer faster than the signature-based approach does. It is a more sophisticated defense strategy at a time when any company with valuable data or with accounting or financial systems linked to the web must assume that it will be targeted and, in many cases, will be penetrated.

Taking an active defense stance means taking four actions:

  1. Maintain up-to-date intelligence.
  2. Mitigate insider threats.
  3. Engage the adversary on the organization’s own networks.
  4. Partner to mitigate external threats.

Each element requires investments in both technology and capabilities, and the four must be pursued aggressively and integrated into a comprehensive active defense program even as companies work on improving their basic cyber-hygiene and incident response capabilities.

Maintain Up-to-Date Intelligence

The only thing proliferating faster than cybersecurity threats is the number of companies providing information about those threats. There is a trend for companies to turn to third parties—both state bodies and commercial operators—for more information on cybersecurity threats. However, most companies lack the internal resources to then separate the wheat from the chaff or, more importantly, to use this data to make effective operational decisions.

Companies that have a strong active defense program need to develop an internal intelligence function that is integrated with the cybersecurity operations team (Figure 6.1). The intelligence function has five elements (commonly referred to as the intelligence cycle).


FIGURE 6.1 Integrate a Proactive Cyber-Intelligence Function with the Security Operations Team

The first step in the intelligence cycle is to define requirements. In the cybersecurity world, this means developing educated hypotheses about what threats your organization is most likely to face, who the attackers are that have the capability and intent to target your organization, and what techniques they typically employ. These questions help the threat intelligence function identify its intelligence gaps—the specific information it needs but does not yet have in order to identify and counter adversaries effectively.

Once it knows what information it needs, the threat intelligence team needs to find the right internal and external sources for that information. Internally, this requires installing network sensors to detect the specific indicators of potential suspicious activity. This may involve actions such as setting an IDS to look for indicators that are specific to a known adversary’s typical tactics, techniques, and procedures (TTPs—the cybersecurity term for what would be known as an “MO” or modus operandi for other criminals), and monitoring internal user behavior to identify potential patterns of compromise. Priority should be given to alerts that most closely resemble the TTPs of the groups the organization is most concerned about in order to focus on the most valuable data from what can be a multitude of network sensors. Advanced organizations also employ hunting teams and create internal deceptive sandboxes (more on these later) to identify attackers and lure them into using additional tools, thereby increasing the information about their TTPs.

The quantity of external cybersecurity intelligence is rapidly expanding with maturing third-party vendors, government agencies, and industry associations such as the FS-ISAC in financial services that are starting to serve as clearing houses and centralized repositories for threat information. The information all these bodies can provide may include specific threat profiles, attack “vectors” (specific paths hackers take to get inside a network) or contextual information on threats. Broader, real-world information also helps organizations make sense of the threat, and understand attackers, their motivations and the political/legal environment in which they operate. Intelligence teams should make sure they collect both technical and this contextual information. Forward-looking organizations are even collecting threat information directly from the hacker “dark net”—those underground networks that sell the attack tools, share vulnerability information and attacker tool kits, and seek to profit from compromised information. Getting inside these hacker forums allows sophisticated organizations to collect threat intelligence directly from some of their most determined adversaries.

The information is only one part of the intelligence; analyzing it is the critical step. Dedicated analysts must pore over the data in order to draw out insights, context, and actions. This type of analytic activity attempts to correlate actions and produce perspectives on real and anticipated threats. It provides internal decision makers with perspectives on the risks facing the organization; it provides defenders with cueing information that allows them to tackle threats in real time; and finally, it allows the organization to prioritize its actions in order to position its defenses most effectively and make informed business decisions. In short, intelligence drives operations.

The final step in the intelligence cycle is the dissemination of the intelligence. This may include strategic threat bulletins to help business leaders make effective business or investment decisions, alerts to help the cybersecurity team orient its decisions, or tactical critical notifications that result in direct actions on the network (e.g., closing a port, blocking an IP address, or taking direct action against an entity on the network).

A cyber-intelligence function should not only aggregate threat information and develop strategic threat assessments; it also needs to develop hypotheses about the most likely intrusion tools and targets. These hypotheses, updated over time, guide cybersecurity specialists to where they need to focus their attention. As the cyber-intelligence function matures, organizations can refine their hypotheses into much more detailed definitions of the threats targeting their network. The intelligence gleaned may not be sufficient to attribute malevolent activities to a specific actor, but the TTPs of different adversaries will increasingly differentiate them and allow cybersecurity operators to tailor their defenses to each identified attacker.

The most sophisticated organizations gather enough information to devise playbooks for each adversary that can become an entire defensive campaign. Sustained success depends on consistently updating intelligence and tweaking the playbook accordingly. Analysts review the outcomes of each playbook action to refine the campaign still further, incorporating new insights to develop actions that are particularly effective against each adversary.

Making threat intelligence actionable in this way requires a different way of doing business. The traditional approach of feeding intelligence into a managed SOC will not get the job done because it overwhelms the organization and triggers alerts based on every piece of intelligence regardless of its applicability to the organization. More importantly, it does not focus the actions of the cybersecurity operators, and there is no feedback loop to determine the quality of the intelligence and supplement it with additional findings. Instead, organizations need to embed the intelligence analysts with the cybersecurity team, accelerating the deployment of knowledge and enabling faster decision making.

The increase in internal sensors and external sources of threat intelligence means organizations have to dramatically increase their use of automation and advanced analytics to avoid being swamped by this tidal wave of information. Given the scarcity of human talent to separate the signals from the noise, many organizations are turning to complex algorithms to correlate known and unknown events, search for anomalies, and filter out false alarms. Following the Target breach, for example, there was a rash of reports describing how the initial indications of the compromise were identified but dismissed by the security operations team because they were buried by reams of similar alerts that posed no real threat.7
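One simple de-noising step alluded to above is to collapse repeated alert signatures and surface the rare ones, so a novel indicator is not buried under reams of similar alerts. The sketch below is a deliberately naive illustration of the idea; the signature names and the rarity threshold are invented, and production systems use far more sophisticated correlation.

```python
# Hypothetical sketch of one de-noising step: suppress high-volume repeated alert
# signatures and surface the rare ones that deserve an analyst's attention.
from collections import Counter

def surface_rare(alert_signatures: list[str], max_count: int = 2) -> list[str]:
    """Return signatures seen no more than max_count times: the unusual ones."""
    counts = Counter(alert_signatures)
    return [sig for sig, n in counts.items() if n <= max_count]

# A single novel beacon alert amid 500 routine adware alerts: the rare one stands out.
stream = ["pos-malware-beacon"] + ["generic-adware"] * 500
print(surface_rare(stream))
```

The obvious weakness, of course, is that an attacker who deliberately mimics a noisy signature slips through, which is why rarity filtering is only one input among many.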

Mitigate Insider Threats

Most companies have been focused on external threats, and as a result, most defenses are oriented toward external actors. By default, outsiders are not trusted. Conversely, insiders are trusted and must be trusted. From system administrators who can access raw databases, to cybersecurity personnel who have access to usage logs and credentials, to customer services staff with access to client files, insiders need access to sensitive information to do their jobs properly. Insiders not only have access, but also have the contextual information to locate and exploit especially sensitive and valuable information. As we said earlier, the easiest way to get hold of valuable information assets is to badge into the building in the morning and use valid credentials to log in to a secure system.

Many companies are realizing that they have underinvested in addressing insider threats for a number of reasons. The media has often focused on external, overseas attackers; addressing insider threats is harder and requires more insight into processes and collaboration with business partners; and addressing insider threats raises uncomfortable HR issues, especially in companies that emphasize a trust culture.

Organizations need to worry about three types of insiders, all of whom have access to critical data or systems. The errant insider is unaware of security protocols, ignores cybersecurity policies, or simply commits errors resulting in sensitive data being compromised or networks becoming vulnerable. The hijacked insider has their credentials compromised by someone externally, giving the outsider the same level of access as the insider. Finally, the malevolent insider is willing to steal or compromise data for personal gain. Companies must use active defense strategies to address each type of insider.

Errant Insiders

Users can accidentally (and surprisingly easily) send files with privileged information to customers, vendors, and other third parties. They can forward sensitive e-mails to dozens or hundreds of people who have no need to read them. They download files to USB drives and upload documents to consumer-grade web services that have—at best—vague security policies.

Naturally, all the mechanisms for changing the mind-sets and behaviors described in Chapter 4 are important, but they must be backed up by an operational model that can identify an errant act or errant behavior.

Data loss protection (DLP) tools can stop sensitive data from being e-mailed externally, uploaded to websites, downloaded to external drives, or even printed. This all helps stop careless employees from sharing information inappropriately. For highly structured information, this is relatively straightforward—it is not difficult to set rules that tell a DLP tool to look for customer account data, for example.
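A structured-data DLP rule of the kind described above can be as simple as a pattern match on outbound content. The sketch below is illustrative only: it matches text that looks like a U.S. social security number, and the pattern and function names are assumptions, not any vendor’s actual rule syntax.

```python
# Minimal sketch of a structured-data DLP rule: flag outbound text containing
# patterns that look like U.S. social security numbers. Illustrative only; real
# DLP tools combine many such rules with context and exceptions.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def violates_dlp(outbound_text: str) -> bool:
    """True if the text appears to contain structured sensitive data."""
    return bool(SSN_PATTERN.search(outbound_text))

violates_dlp("Customer SSN: 123-45-6789")   # flagged: matches the structured pattern
violates_dlp("Quarterly report attached")   # allowed: no structured match
```

This is exactly why structured data is the easy case: a business plan or a proprietary algorithm has no such regular shape to match against.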

Stopping unstructured data from being lost is much more complicated and can require sophisticated cybersecurity analytics. Business plans, pricing strategies, and many types of intellectual property cannot be identified by their structure in the way a social security number can. For example, when a developer uploads Java code to a website, how can a tool determine whether he is contributing to an important open-source project or putting a highly proprietary algorithm on an insecure site so he can work on it at home?

For these reasons, 90 percent of alerts from newly installed DLP tools can be false positives. Obviously, that is unsustainable. Cybersecurity analysts need to engage in a constant process of understanding business processes, analyzing alerts, and tuning business rules, so they can identify truly errant behavior and support management teams as they put consequences in place for employees who consistently choose to ignore their responsibilities in helping protect corporate information.

Hijacked Insiders

External attackers will often use an employee’s credentials without the person even realizing it. In fact, about 80 percent of breaches involve stolen credentials at some point in the attack. As the database administrator mentioned in Chapter 4 found out, clicking on the wrong link can install a keystroke logger that captures an employee’s account information and passwords for any number of important systems.

Changing employee mind-sets and behaviors is again critical, but not sufficient, in addressing the risk of hijacked credentials. More than 30 percent of employees commonly fail spear-phishing tests, so companies have to accept that users will sometimes create opportunities for malware to install itself on their devices. Companies cannot count on even the latest antimalware tool to keep creative and innovative attackers off the corporate network. They must use active defense techniques to identify, investigate, and respond to anomalous activity that may indicate hijacked credentials.

Close analysis of identity and access management (I&AM) data can be especially powerful here. For example, a key employee’s account may begin accessing sensitive information from a new IP range, suggesting a potential hijack of her credentials. However, that could also mean she is traveling on business or has gone on vacation. If the employee’s credentials are used hundreds of miles apart or if her account stays active for 10 hours on a day that HR systems say she is on vacation, that may all but confirm that attackers have hijacked her credentials.
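The “credentials used hundreds of miles apart” check is often called impossible-travel detection, and its core logic is easy to sketch: if two logins imply a speed faster than an airliner, the credentials are almost certainly hijacked. The function names, login format, and 900 km/h threshold below are assumptions for illustration.

```python
# Hypothetical sketch of an "impossible travel" check on I&AM login data:
# two logins far enough apart in a short enough window imply hijacked credentials.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Each login is (lat, lon, hours). Flag if implied travel speed beats an airliner."""
    dist = km_between(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) or 1e-9  # guard against simultaneous logins
    return dist / hours > max_kmh

# New York, then London one hour later: thousands of km/h — flag for investigation.
impossible_travel((40.7, -74.0, 0), (51.5, -0.1, 1))
```

As the text notes, this signal is strongest when correlated with other sources, such as HR records showing the employee is on vacation while her account stays active.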

Malevolent Insiders

An employee might be compromised by external attackers through either bribery or coercion. He might find his employer’s business practices repugnant and believe they should be exposed. Most commonly, he might be contemplating an offer from a competitor and want to bring customer lists or pricing information with him to get off to a fast start in the new job.

Traditional mechanisms that influence mind-sets and behaviors will be less effective with a malevolent insider, making active defense especially important. Companies need to develop the analytics to identify current and even future malevolent insider behavior. For example, if an employee has been accessing documents that are “out of profile” for him and has generated two hits from the DLP tool in the previous month, that could be an indication of a potential cybersecurity risk. Depending on the precise nature of the analysis, the cybersecurity team could recommend increased surveillance, a friendly visit from his manager about the importance of protecting sensitive information, reduced access rights, or, in extreme circumstances, revoking all privileges and considering his position.
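A graduated response of this kind amounts to a simple risk score over the observed signals. The sketch below is hypothetical: the weights, thresholds, and response labels are invented to illustrate the escalation logic, not a recommended scoring model.

```python
# Hypothetical insider-risk signal combining out-of-profile document access with
# recent DLP hits. Weights, thresholds, and labels are invented for illustration.
def insider_risk(out_of_profile_accesses: int, dlp_hits_last_month: int) -> str:
    score = 2 * out_of_profile_accesses + 3 * dlp_hits_last_month
    if score >= 8:
        return "escalate"   # e.g., increased surveillance or reduced access rights
    if score >= 4:
        return "review"     # e.g., a friendly visit from the manager
    return "normal"

# The scenario from the text: out-of-profile access plus two DLP hits last month.
insider_risk(out_of_profile_accesses=1, dlp_hits_last_month=2)
```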

Deep insight into the employee base is important here. Cybersecurity analysts’ task will be more manageable if they can identify segments of employees who have access to sensitive information (e.g., researchers working on critical R&D projects) and focus their analysis on those groups. Likewise, if cybersecurity analysts can partner with HR to identify which employees with access to sensitive data are most likely to leave, they can further focus their surveillance, reducing the risks that valuable IP follows an employee to a competitor.

Engage the Adversary on the Organization’s Network

Cyber-attackers have the advantage of the first move, and a wide array of tools and techniques at their disposal to hide their entry and activities. This makes it extremely challenging to identify malicious actions within the network. To meet this challenge, organizations need to adopt two parallel approaches. The first is to develop hunting teams to identify attackers’ actions wherever they occur on the network; the second is to actively manage those attackers rather than just kick them out.

Hunting for Adversaries on the Network

The concept of internal hunting teams is emerging as best practice and is trickling down to midsized and small organizations that recognize the benefits of taking a more active approach to cybersecurity.

Dedicated and experienced cybersecurity operators constantly scour the network to try to spot the attackers they have identified in the intelligence phase of the campaign. The operators need to have a “seek and pursue” mind-set and unearth attacker footholds wherever they may appear on the network.

To hunt successfully, operators have to identify and analyze internal use patterns, access logs, employee behavior, and suspicious code. Cybersecurity operators are grappling with big data, just as their counterparts in the business are, but theirs is a world in which every network access request generates a data point, every user keystroke is on record, and every system operation leaves a log. This is big data on steroids. The operators must find ways to make sense of the millions of digital interactions that occur on a network every day, and spot the anomalies from which the team can start hunting down the attackers.

Too few organizations regularly scan applications to detect these anomalies, which can include malicious code or unauthorized access requests. Application development teams may scan code at the development stage—although, as we saw earlier in the book, even this is far from universal—but ongoing scans of live systems are seldom conducted on a regular basis. Cybersecurity teams should also carry out vulnerability assessments (and, ideally, penetration testing to simulate external and internal threats).

To avoid being overwhelmed by data and to make effective decisions, hunting teams need to be able to provide context for all this data. This context comes from three main sources:

  1. Integrated threat intelligence and vulnerability analysis. This allows hunting teams to develop hypotheses on what the most dangerous intrusion strategies could be. These strategies are often called the cyber “kill chain,” a term coined by Lockheed Martin.8 Each attacker has a standard approach for entering a network, which hunting teams can identify as they begin to see TTPs repeat themselves on the network.
  2. An understanding of what constitutes abnormal. To develop a picture of “normal,” companies can create profiles for various job categories that detail what these employees’ computer use should look like, and then refine those profiles for certain individuals using machine learning. Compiling a list of “abnormal” (and generally unacceptable) activities, such as massive downloads or printing of sensitive information, will be a continuous process.
  3. An enhanced SOC. Hunting teams benefit enormously when the SOC is advanced enough to filter out as many false alarms as possible. This allows the teams to focus on actions that are likely to be decisive in meeting and stopping an attacker.

With this knowledge in hand, teams can move rapidly from “seek” to “pursue.”
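The second context source above, an understanding of what constitutes abnormal, can be sketched as a statistical check against a job-category baseline. This is a toy illustration: the download volumes and the three-standard-deviation threshold are invented, and real profiles would cover many more behaviors and use machine learning to refine them per individual.

```python
# Hypothetical sketch of an "abnormal" check against a job-category profile:
# flag users whose daily download volume is far outside their category's norm.
from statistics import mean, stdev

def is_abnormal(user_mb: float, category_history_mb: list[float],
                z_threshold: float = 3.0) -> bool:
    """True if the user's volume sits more than z_threshold std devs above the mean."""
    mu, sigma = mean(category_history_mb), stdev(category_history_mb)
    return sigma > 0 and (user_mb - mu) / sigma > z_threshold

# Illustrative daily download volumes (MB) for one job category.
analyst_history = [40, 55, 48, 60, 52, 45, 50]
is_abnormal(5000, analyst_history)   # massive download: flagged for the hunting team
is_abnormal(52, analyst_history)     # within the category norm: ignored
```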

Manage Adversaries on the Network

Once hunting teams identify an attacker on the network, they must make a decision: do they evict her and block her route of access, or do they actively manage her within the network? The kneejerk response when an attacker is found is to expel her as fast as possible. However, while this may be a satisfying reaction, and one that is easier to report to management, it is often not the smartest approach.

Removing an attacker has two outcomes. First, the attacker knows that this attempt has failed and therefore will stop wasting time on it. Second, the attacker knows her TTPs have not worked and will adapt them to try again. Neither of these adds to the organization’s intelligence about the attacker. In addition, the attacker may have used only one or two tools to gain access and may have been saving the most effective tools for later in the kill chain. Abruptly kicking the attacker out means that the defender does not know what those tools might be and therefore cannot defend against them in the future.

Companies with the right active defense processes in place should therefore resist the urge to expel the attacker. It can, counterintuitively, be more valuable to keep her inside the network. At the most basic level, organizations can deploy honeypots and tar pits with the hope of occupying adversaries and buying time. These static techniques are increasingly prevalent and help at least until the adversary realizes she has been caught.

A honeypot is a collection of computers configured to attract hackers and their software (e.g., malware, including botnets), thereby enabling the company to learn their techniques without putting any critical assets at risk. A well-designed honeypot contains nothing of value but looks as if it does. Once it is infected or attacked, the honeypot administrator can log and monitor the actions of the hacker from within a safe, controlled environment. This includes assessing the attacker’s methods for scanning, gaining access, retaining access, exploiting the target, and, most importantly, covering their tracks as they depart the machines and networks when they have—or think they have—what they want. Such information can then be used to develop countermeasures to protect the real networks. The benefits of honeypots can be enormous relative to their cost, as long as they are used properly (e.g., not appearing fake or operationally irrelevant).9 Some organizations go so far as to deploy multiple networks of honeypots to continuously monitor and gather intelligence on active threats to their most critical assets.

A tar pit is used to slow down network scans or known malicious actors or code. Tar pits are a collection of network services or servers designed to reduce or deny known or identified malicious network traffic. Normal network traffic has a minor, but expected, delay, and tar pits increase this delay beyond expected thresholds to slow down the traffic for analysis, deterrence, or denial. These are less common and require an understanding of the network-associated information of known attacks—requiring significant input from the threat intelligence team.
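The delay policy at the heart of a tar pit can be sketched as a function of the traffic source. This is a hypothetical illustration: the network prefixes (drawn from reserved documentation ranges) and delay values are invented, and a real tar pit would apply the delay at the socket or firewall layer.

```python
# Hypothetical sketch of tar-pit delay logic: traffic from networks flagged by the
# threat intelligence team is slowed to a crawl; everything else sees normal latency.
# Prefixes below are reserved documentation ranges, used here as stand-ins.
KNOWN_BAD_PREFIXES = ("203.0.113.", "198.51.100.")

def response_delay(src_ip: str, base_delay: float = 0.05,
                   tarpit_delay: float = 30.0) -> float:
    """Return the artificial delay (seconds) to apply before answering src_ip."""
    if src_ip.startswith(KNOWN_BAD_PREFIXES):
        return tarpit_delay   # slow suspected scanners far beyond expected thresholds
    return base_delay         # normal traffic sees only a minor, expected delay

response_delay("203.0.113.7")  # suspected scanner: tar-pitted
response_delay("10.0.0.1")     # ordinary internal traffic: normal delay
```

The dependency on the threat intelligence team is visible in the sketch: the tar pit is only as good as the list of known-bad networks it is fed.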

Tar pits are an example of a static approach to engaging adversaries, but the next generation of cybersecurity defenders is dynamically engaging adversaries by using deceptive sandboxes. A deceptive sandbox is a parallel network environment that looks to an attacker exactly like the organization’s real network except that it is completely walled off from the real network and there is nothing of value that can be stolen or compromised. A deceptive sandbox is an evolution of the honeypot concept; however, where honeypots are designed as a passive measure to fool attackers into attacking a fake target as a distraction, deceptive sandboxes are designed for the cybersecurity team to actively engage attackers to both occupy them and develop additional intelligence.

Once an attacker is discovered on the actual network, operators guide her into the sandbox. She is then free to deploy the rest of her arsenal of tools, thinking she has not been detected. At this point, the malicious code begins “calling home” for instructions, identifying remote servers that are associated with advanced malware—again, with the assumption that the exploit has not been detected. However, in reality, the sandbox management team is observing the additional tool kits she is using and increasing the intelligence they have on her TTPs and improving their ability to attribute future attacks to her.

One company that employs this technique was able to manage more than 30 attacker campaigns simultaneously, with each hacker thinking he or she was operating undetected and stealing highly sensitive data. In truth, they were being continuously monitored and were stealing nothing but gibberish that the company had planted. The company managed one hacker for five months—five months during which he was not able to do any damage to the company. It also identified an additional 19 tools beyond the one that gave him his initial entry, tools it can now defend against on its real network. Had it evicted the attacker when it discovered his first tool, the company would have lost all of this advantage—a significant opportunity cost!

Partner to Mitigate External Threats

Although retaliating against attackers by hacking back is risky and often illegal, some companies are partnering with third parties, including national security agencies and law enforcement, and turning to the civil courts to increase their defensive capabilities.

Many private-sector companies linked to critical national infrastructure (e.g., aerospace and defense firms, utilities, R&D organizations) are working to improve their relationships with national security organizations that may be able to take action against adversaries. For example, a voluntary effort allows the federal government to provide sensitive and often classified information regarding threats to U.S. Department of Defense (DoD) data that resides on or passes through eligible company networks. The program also allows these companies to take advantage of many elements of the DoD’s perimeter defense system, which includes classified threat signatures. Various government departments and agencies have, or are developing, similar efforts to share threat intelligence and protective systems.

Other large companies are actively partnering with national security organizations not just to improve information sharing, but also to deny attackers safe havens. Most large banks, for example, have employees who are solely responsible for engaging with national security agencies, in the expectation that the bank can provide information allowing government agencies to take the kind of offensive cybersecurity measures that are not available to private-sector organizations.

Facebook worked with local law enforcement agencies to dismantle a malware distribution organization called Lecpetex. Having identified malware that was taking over user accounts, Facebook provided the police with specific information on the adversaries, which led to the servers that propagated the malware being taken offline.10

Other organizations have used civil litigation to extend their active cybersecurity defenses. In one of the best-known examples, Microsoft won a court order in 2012 granting it control over an Internet domain used by the Nitol botnet, a strain of malware that shipped preloaded on PCs running pirated versions of Windows and let attackers control the systems from afar in order to engage in theft, fraud, and other malevolent activities. Microsoft was then able to neutralize the botnet’s activities at the source.11

● ● ●

All the investments companies make in controls have to be backed up by a robust cybersecurity operational model. Although a well-functioning SOC is critical, the passive defense that traditional SOCs provide cannot rise to the challenge posed by increasingly determined and innovative attackers.

Protecting information assets requires not only basic SOC capabilities but also active defenses that detect, deceive, deter, and manage external and internal attackers. Many companies are still building or refining their SOCs—making sure the basic tool kit of firewalls, IDS, antimalware systems, SIEM platforms, and the cybersecurity analysts to interact with them are all in place. Active defense requires tighter integration of these basic tools, rich analytic platforms, and sophisticated cybersecurity analysts to derive insights and recommend actions.

Implementing an active defense strategy will require significant changes to companies’ security management processes and organizational setup. At a minimum, it needs the traditional SOC to integrate with the threat intelligence team to shorten the security decision-making cycle and allow real-time operations against attackers. This may require restructuring of outsourced SOC contracts and process flows. In a more integrated approach, the cybersecurity function may choose to align itself around specific threat campaigns with a supporting operational structure to make decisions and implement changes rapidly.

Given the scarcity and expense of experienced cybersecurity talent, it is essential to automate as many processes as possible so that these people can apply their expertise to the highest-priority issues. Automating the basic triage function, for example, allows cybersecurity professionals to spend their time in the much higher value tasks of hunting and engaging adversaries.
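Automated triage can be as simple as scoring each alert by its severity and the criticality of the asset involved, auto-handling routine noise, and escalating only what crosses a threshold. The sketch below is a hypothetical illustration; the weights, threshold, and field names are invented examples, not a recommended scoring model.

```python
# Hypothetical triage sketch: the weights and threshold are invented
# examples, not a recommended scoring model.
SEVERITY = {"low": 1, "medium": 3, "high": 5}
ASSET_CRITICALITY = {"workstation": 1, "server": 2, "crown-jewel": 4}

def triage(alerts, threshold=8):
    """Auto-handle routine noise; escalate alerts that cross the threshold."""
    escalated, auto_handled = [], []
    for alert in alerts:
        score = SEVERITY[alert["severity"]] * ASSET_CRITICALITY[alert["asset"]]
        (escalated if score >= threshold else auto_handled).append(alert["id"])
    return escalated, auto_handled

alerts = [
    {"id": "A1", "severity": "low", "asset": "workstation"},   # score 1
    {"id": "A2", "severity": "high", "asset": "crown-jewel"},  # score 20
    {"id": "A3", "severity": "medium", "asset": "server"},     # score 6
]
print(triage(alerts))
```

The point is the division of labor: the machine clears A1 and A3 automatically, so the scarce analyst sees only A2 and spends the rest of the day hunting.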

A move to active defense also requires a change of mind-set—one that moves away from detecting and blocking attackers to hunting and managing them. This approach also requires modest changes to the IT infrastructure to allow hunting teams to search for adversaries across network nodes and permit operations teams to manage honeypots, tar pits, and deceptive sandboxes within the network.

While much of the hardest work in building active defenses occurs within cybersecurity, other parts of the organization will have to contribute as well. In particular, IT managers will need to generate the data sets required for sophisticated analytics, and HR managers have to help strike the right balance between protecting employee privacy and identifying activity that indicates an insider threat.

Given all that has to be done, some companies will be tempted to walk before they can run—tempted to get the basic SOC capabilities in place before they think about active defense. Unfortunately, the attackers are already running and will not wait for the defenders to catch up.
