Chapter 4

Determining and Reducing the PCI Scope

Information in this chapter:

• The Basics of PCI DSS Scoping

• The “Gotchas” of PCI Scope

• Scope Reduction Tips

• Planning Your PCI Project

• Case Study

Scoping your PCI environment is one of the most critical things you must get right in your quest to comply with this daunting standard. Many companies have cost themselves thousands and even millions of dollars by over- or under-scoping their environments and applying controls to the wrong subset. It also seems like the easiest way to get into a heated debate around PCI DSS is to find something wrong with a peer’s scoping process or end result. A Special Interest Group (SIG) was put together on this topic, and while it ultimately didn’t produce one special report like other SIGs did, four different documents related to scoping came out of that group’s body of work. If you have been watching the flurry of documents released this year, you might remember the EMV, Tokenization, Roadmap for Encryption, and Point to Point Encryption guidance documents—all of which contain content produced by the scoping SIG. Throughout this chapter we will cover the basics, work through some of the “gotchas,” give you some tools for planning your project, discuss scope reduction, and provide a couple of case studies.

The Basics of PCI DSS Scoping

If you look at scoping on the surface, it simply can’t be as hard as people make it out to be. If your environment contains Primary Account Numbers (PANs) either in storage or flowing through it, some part of your network must comply with PCI DSS.

Simple, right? On the surface, everything looks simple.

The majority of discussions around scope end up argumentative because one person interprets the standard more leniently than another. Most of the scoping discussions the authors have witnessed, or been thrust into the middle of, started because one party didn’t want to comply with the standard in part or at all. We’ve learned through the years that denial is a very powerful human defense mechanism. It’s easy to ignore the requirements or come up with arguments for why the rules shouldn’t apply to you. It’s not easy to do things the right way. Reducing the scope and making business decisions about PCI DSS becomes easier when you define your scope properly from the start. If you automatically include everything in scope at the beginning, the conversation becomes one of exclusion, and it then comes down to making a strong case for why certain components should be excluded from scope.

Per the PCI DSS the scope is defined as: any network component, server, or application that is included in or connected to the cardholder data environment, including any virtualization components such as virtual machines, virtual switches/routers, virtual appliances, virtual applications/desktops, and hypervisors. The definition isn’t only about technology, so don’t stop there. You also have to consider the people, processes and technology that store, process or transmit cardholder data or sensitive authentication data. The Council further explains: network components include but are not limited to firewalls, switches, routers, wireless access points, network appliances, and other security appliances. Server types include, but are not limited to the following: web, application, database, authentication, mail, proxy, network time protocol (NTP), and domain name server (DNS). Applications include all purchased and custom applications, including internal and external (e.g. Internet) applications.

Scoping guidance like what you see above starts on page 10 of the PCI DSS, and it is not meant to be an exhaustive list. You can expect other strange scenarios to present themselves while going through this exercise. The authors have gone through this process with many customers, and it’s amazing how many times we’ve both said, “Wow… never saw that one before.” You will run into many scenarios like that while involved with PCI DSS, and you are better off assuming something is in scope first and then looking for ways you could possibly exclude it.

Hopefully you aren’t terrified yet, but the wheels are turning. Your scope includes any place where cardholder information is present at any given time. The duration of time in which it is present is irrelevant, as is the complexity of an attack required to capture that information during a compromise. Don’t even start with those discussions yet as it will take valuable time away from defining the size of the problem.

Smaller businesses have some advantages over larger businesses in that their processes around cardholder data tend to be simpler, and if it is a relatively young small business, more electronic and potentially already outsourced. Small businesses struggle with advanced IT concepts because they typically have neither the budget nor the staff to tackle them. On the other side of the fence, larger businesses typically have many of the IT processes in place to protect cardholder data, but they have no idea where it is. Sure, there are areas where they can say it exists with some assurance they are correct, but for the most part they cannot exclude large portions of their environment because they have no idea if these “non-critical” areas have cardholder data.

Cardholder data is more mobile than people are willing to accept. For example, in an environment where IT people are diagnosing and fixing equipment, sensitive data routinely travels beyond its intended borders. Point Of Sale (POS) technicians debugging faulty terminals will end up with cardholder data on their laptops. Is it securely erased? What happens when that laptop re-joins the corporate network? Is there an automatic backup process that will further proliferate this cardholder data into systems in which it is not supposed to exist?

We’re dragging you through this field of broken glass to hopefully help you understand that determining the scope the right way is painful and will probably require some kind of tool to rescue you. You won’t be able to do this entirely on your own with sheer manpower. You need a way to proactively discover and map cardholder data in your environment. You may not need to go the full Data Loss Prevention (DLP) route for your environment, but you will definitely need some amalgamation of tools to help you wrangle this problem.

Tools

There are both free and commercially available solutions for finding certain types of data and controlling their destiny. Here are a few examples:

Spider from Cornell Labs (http://www2.cit.cornell.edu/security/tools/). This particular tool is available, with source code, for Windows, Unix, and OS X environments and is well suited to one-off scans. It may not scale well if you have hundreds or thousands of servers to check, but for small IT environments it is quite effective. An alternative to this is OpenDLP (https://code.google.com/p/opendlp/).

GNU Grep (http://www.gnu.org/s/grep/)—the original data discovery tool. Any technophile who has administered servers professionally knows this tool and has used it to track down wayward data in various forms. If you wanted to use it to triage systems and files that may contain cardholder data, you might combine it with an extended regular expression (note the -E flag) like this: grep -rEl "((4[[:digit:]]{3})|(5[1-5][[:digit:]]{2})|(6011))[-,[:space:]]?[[:digit:]]{4}[-,[:space:]]?[[:digit:]]{4}[-,[:space:]]?[[:digit:]]{4}|3[47][[:digit:]]{13}|3[068][[:digit:]]{12}" /

The issue with this approach is the large volume of false positives you must sift through. You could pipe the output from the above to a script that reduces the false positives by running each hit through a Luhn check (http://j.mp/IveOgk), and then pipe that output to a mail script that dumps the contents into a mailbox for individual follow-up.
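To make that pipeline concrete, here is a minimal sketch of a Luhn check in Python; the candidates list is hypothetical input, standing in for numbers pulled out of the grep hits above:

```python
def luhn_check(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    checksum = 0
    # Walk the digits from right to left, doubling every second one
    # and subtracting 9 when the doubled digit exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# Hypothetical hits from the grep sweep above.
candidates = ["4111111111111111", "4111111111111112", "60110202"]
likely_pans = [c for c in candidates if luhn_check(c)]
# Note: "60110202" passes the checksum too; Luhn alone cannot
# rule out every false positive.
```

A real script would read the matched files, extract each candidate with the regular expression, and mail only the survivors for individual follow-up.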

Commercial solutions tend to come in two forms: system-based and network-based detection. Companies like Websense, Cisco, and RSA have solutions built into their portfolios. For system-based detection, you will find agent or agentless solutions provided by companies like Symantec, RSA, and Trustwave. The benefit of the commercial solutions tends to come in the form of automatic false positive reduction techniques, scalability, and native file format searching. When determining whether you want to roll your own solution or go with a commercial provider, be sure to include the salary cost of someone maintaining the home-grown version, and run this through the “What happens if Sally wins the lottery?” scenario.

Finally, some systems now have powerful searching capabilities built into their software. In order for them to be effective for your purposes, they must include the ability to do pattern-based searches. Because you want to search for anything that would meet the pattern of a valid credit card number, you have to be able to simplify it into something like the regular expression above.
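As a sketch of what such a pattern-based search looks like in practice, the following Python fragment compiles a simplified version of the expression above (illustrative only, not an exhaustive set of card prefixes):

```python
import re

# Visa (4xxx), MasterCard (51-55), and Discover (6011) as four groups of
# four digits with optional dash/space separators, plus 15-digit Amex.
CARD_PATTERN = re.compile(
    r"\b(?:(?:4\d{3}|5[1-5]\d{2}|6011)(?:[- ]?\d{4}){3}|3[47]\d{13})\b"
)

def find_candidates(text: str) -> list:
    """Return every substring shaped like a card number."""
    return CARD_PATTERN.findall(text)

hits = find_candidates("order 4111-1111-1111-1111 shipped to loc 60110202")
```

Anything this pattern flags would still go through the Luhn filter and a manual review before being treated as real cardholder data.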

You will most likely end up with a combination of the above tools to accomplish your goals as none of the above are silver bullets. Each has benefits and limitations.

Once you put your tools in place, you will have to go through the process of determining what is real and what is a false positive. One of the authors had a customer that used a 16-digit routing number to track certain kinds of packages as they moved from location to location. Of course, not every tracking number showed up as a card number to track down, only the ones that started with 36, 37, 4, 5, and 6011, and only one in every 10 or so of those. But when one of their locations was known internally as 60110202, many packages destined for that location set off credit card data alarms. In this case, those 16-digit numbers that passed a Luhn check would not be considered cardholder data, but it is certainly a discussion you will have with your assessor.

After your first tool run you may feel overwhelmed by the amount of cleanup work you have to do. Don’t be. Think of it as a huge opportunity to shore up your environment and greatly reduce your liability and risk by remediating those areas.

Now, after validating all of your false positives and looking at your final pile of work to handle, you will see your true scope. Yep, it really is that dire. This is the reason why you want to go through this process early, so you can spend the majority of your time over the next few months learning how the business operates and reducing the scope of your assessment by destroying data where it doesn’t need to exist.

For the areas where the data absolutely must exist, you still have a few options to help make this process easier. First, you can choose to outsource your processes to a third party, thus effectively transferring that liability to them. There are exceptions to that liability transfer. Ultimately you want the third party to have responsibility over the merchant account and ID, and only send you wire transfers after the transactions settle. You definitely don’t want to resort to trying to reclaim losses from a compromise by reviewing the damages clauses of your contract with that provider. Is it more expensive per transaction to have someone process payments for you? Yep, but how much of those dollars are you spending on information security related to PCI DSS? Check with your CFO, but we see a significant trend where companies are pushing processes like this into their operating expenses and calling it a cost of doing business (which it is). Unless you are going to invest in a payment processing mechanism to generate revenue by processing other companies’ payments, outsourcing should be a serious consideration for your business.

If outsourcing is not an option (though we argue that in most cases it is in fact the best option), you will need to spend some time building security controls around the areas where you do have cardholder data. Don’t try to bring your entire network into compliance with PCI DSS. You most likely won’t succeed, and it will unnecessarily cost your company thousands or millions. Instead, focus on segmentation, data centralization, and strong controls over access to the raw card data.

The “Gotchas” of PCI Scope

As we discussed earlier, most of the contentious discussions around PCI DSS come from people who are trying to find ways around the requirements, mostly so they don’t have to make any changes to the environment for which they are responsible. We find it ironic that if people put the same effort into complying with PCI DSS as they put into fighting it, we would see much higher compliance rates and more examples of companies really doing it right. Unfortunately, neither is nearly as prevalent as it needs to be, regardless of what you might hear from a payment brand, acquirer, or even a merchant. If you are a techie person reading this book, you will probably walk away realizing there are a few things your company does that skirt the line of compliance or maybe even blatantly cross it. Remember, denial is a powerful human defense mechanism. Don’t let it be yours.

With that in mind, let’s walk through a few examples of mistakes people have made in determining their PCI DSS scope, in both directions. There are very few examples of over-scoping a PCI DSS environment, but we’ll review one now.

One of the authors was brought into a situation by the finance group of a merchant to help them determine the next steps in their PCI compliance process. The internal audit team was taking the first crack at building a case for PCI compliance, and had grossly over scoped the environment. They wanted every electronic device in the company to be included in the scope of PCI DSS and put a massive remediation budget in place, including the creation of a new department and a team of 20 heads to whip the company into shape.

The audit team’s rationale for including everything in scope was not too far off the mark, but they missed some simple scope reduction techniques that were much cheaper with a smaller impact to the end systems. The company used a mainframe system to process their cardholder data. Once the data entered the mainframe for processing, it only left to go to the bank over a direct IP connection on a private telco-provided data line. Because the mainframe was not segmented from the network, it was assumed everything should be in scope. Data was entered into the mainframe from a specially crafted payment terminal that encrypted the traffic over the network. Once inside the mainframe, it was encrypted and tokenized, such that the only data used by the company after settlement was token data (dummy information that is tied to a card number, but meaningless without the association to the real card number). What we suggested was to audit the access controls to that data and remove human access entirely, then create alerts when the access control tables changed, to be followed up on by the audit team. The terminals were firewalled off at the store location such that the only way to access them remotely was to use strong authentication and a VPN connection. With a few other controls, we were able to remove the need for the massive capital expenditure, and instead helped that company boost their security and comply with the standard in a matter of months.

The “gotcha” with that example was a very loose interpretation of the scoping statement from PCI DSS. Yes, it does say “any network component, server, or application that is included in or connected to the cardholder data environment,” but that tends to break down a bit when you ignore the capabilities of the underlying technology driving the environment. We’ve also learned that mainframe environments tend to throw a monkey wrench into the works because few people really understand how they work, or the security implications of running one. Ultimately, there are a few ways to meet the scoping statement above and still reduce the impact that PCI DSS has on your environment.

Note

Mixed Mode is the concept whereby a virtual infrastructure hosts both guests that must comply with PCI DSS and others that are considered out of scope of PCI DSS. If you are considering using this method, be sure the underlying virtualization fabric complies with PCI DSS and the out of scope hosts are sufficiently isolated from the ones that must comply with PCI DSS through access controls and virtual segmentation. Your mileage may vary here, and you should understand both the complexities of the hypervisor and its controls as well as the application or function that is virtual.

Another over-scoped example comes to us from interpreting the standard as it relates to virtualized environments. Virtualization as a technology is only going to become more present in our environments, even down to our desktops and mobile devices. The PCI Council released a guidance document from the Virtualization SIG in 2011, and it contained great information and guidance as well as some terrible interpretations of parts of the standard, with editorial comments left in the document for assessors and assessees to argue over for the next couple of years. One author helped a company educate their Qualified Security Assessor (QSA) on what virtualization can do to the scope of an environment, and how tackling the “Mixed Mode” problem correctly can help IT departments meet their virtualization targets while keeping data safe and secure. The QSA argued that because the virtual host held both in-scope and out-of-scope guests, all guests must comply with PCI DSS regardless of their scope determination. The QSA incorrectly made the leap that a virtual host could not be locked down in the same manner a physical data center could. In fact, virtualized infrastructure can be deployed in a way that makes it more secure than traditional physical deployments, but that’s a topic for another book. What ultimately came out of the discussion was a scope that matched the intent of PCI DSS, and focused on making the hosts and in-scope guests fully compliant with PCI DSS. The QSA was able to complete the assessment to their satisfaction, and even learned a thing or two during the assessment process that will ultimately provide their customers with a better assessment experience.

Now, let’s discuss some under-scoped examples. One of the requirements we will discuss in this book is Requirement 2.2.1, “Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server.” This can be interpreted in a number of ways, as you might imagine, and it tends to be a key area where scope can be over- or underdone. In this case, a company came to one of the authors stating that the in-store server should be considered out of scope because it didn’t store any cardholder information. That particular machine performed a tremendous amount of back-of-house reporting for each store, allowed managers to check their company email and do some basic web surfing, performed local anti-virus distributions to the POS systems, participated in the corporate Active Directory system as a tree in the forest, and contacted Internet systems for local DNS resolution and NTP syncing. There was no segmentation in the store, and the machine did in fact contain one day’s worth of credit card data that was pulled from the point of sale controller to assist in that back-of-house reporting.

The first step was to convince the internal groups that the machine was, in fact, in scope for PCI DSS. Not only was it on the same network as the point of sale systems, but it contained a day of cardholder data for reporting. The company agreed that they had made a mistake and that the machine should be in scope. The next argument made was that it didn’t violate the “one function per server” concept because its function was to support the store, and that was the only function it had. “Supporting the store” can be a broad view of a business function, but it is certainly not a single function as intended by this requirement. Once they understood the intent of the requirement, they changed a few things to keep that server functioning as intended but without the scope issues associated with leaving it connected to the same network. It was segmented off in the store, and the daily reporting information was cleaned and pushed from the POS controllers (as opposed to pulled by the in-store server) such that no PCI data was included in the dump.

Another classic under-scoping problem is calling a service provider out of scope because you have put contractual language in place between the companies to address compliance from a legal perspective. PCI DSS version 2.0 clarified the need to visit some service providers on a periodic basis to ensure they support your PCI DSS compliance. In the old days, QSAs would often see certain companies listed as service providers for certain functions and just assume they were acting in a compliant manner. Iron Mountain is a classic example of this. Rarely would you see a QSA asking to visit the Iron Mountain facility where cardholder data might be stored offsite. In fact, some companies would simply call the relationship out of scope of PCI because the provider had earned such tremendous industry trust and they didn’t want to re-open contract negotiations. This is simply not acceptable if cardholder data is being stored, transmitted, or otherwise processed by that service provider. They are absolutely in scope and should be evaluated just like any internal group that stores, transmits, or processes PAN data.

Now, one way to potentially remove such a service provider from scope is to send them only encrypted data with no access to any keys (meaning you can’t include the keys on any media you send along). According to PCI DSS FAQ 10359 (http://selfservice.kb.net/article.aspx?article=10359&p=81), “encrypted data may be deemed out of scope if, and only if, it has been validated that the entity that possesses encrypted cardholder data does not have the means to decrypt it.” There are a couple of key concepts to consider here. The delineation is means, not knowledge. So if you are using some kind of obscure crypto technique and include the means to reverse it, you can’t count on the lack of the entity’s knowledge to exclude them from this requirement.

To avoid all the “gotchas” associated with scoping just keep in mind that you must consider anything in scope that is included in, or connected to, the cardholder data environment. The best way to remove systems from the “connected to” clause is to deploy firewalls around the cardholder data environment’s perimeter, separating it from the rest of the enterprise, and eliminating data interchange over the border as much as possible. The more you can do here, the easier your assessment process will go, and the less you will have to rely on QSA interpretation to dictate your fate.

Scope Reduction Tips

Now that you have built your scope and know how serious the problem is, you are no doubt taking a serious look at ways to reduce this scope such that the impact to your organization is minimized. The authors have had many of these discussions over the years as companies facing PCI Compliance for the first (or even second and third) time tend to have sloppy IT environments that focus on availability and data processing over segmentation, data privacy, and security. We love working with retailers because they tend to be some of the most innovative thinkers on running their businesses, and they don’t hesitate to use every single tool in their arsenal to either solve problems or get ahead of the competition. What they lack is the understanding of how their actions impact the larger company’s security and compliance postures. That’s where user education comes in, and why companies facing their annual assessment will typically have new problems that pop up year over year.

The first scope reduction technique has already been briefly discussed: complete outsourcing of your payment environment. One of the authors frequently has a more confrontational discussion with CIOs that starts something like: “What business do you have running a payment processor? You are a merchant. Your core competencies are marketing and supply chain management, not payment processing. So why on earth would you put company resources towards doing it in a half-baked way?”

These conversations aren’t as contentious as they sound as they tend to be delivered with a smile, but the point is valid and it forces executives to have a hard discussion about how their business operates. PCI DSS isn’t going away any time soon, and legislation around personal information is only growing. Companies must focus on what is important to their bottom line, and an investment in building and maintaining a payment processing arm just isn’t as good as it was in the 1980s and 1990s. Back then, we didn’t have PCI DSS, and the interconnectedness of our enterprises was virtually nonexistent when you compare it to today’s IT infrastructures. When CIOs work with CFOs to truly determine the amount of money spent towards maintaining these environments versus paying a point more on each transaction to outsource, they can get a better handle on what complete outsourcing means for their company. Each enterprise is a bit different as is each processing agreement, but you can bet that more than one executive has built additional fees into their business model for the long term as opposed to continually living with compliance costs around PCI DSS.

If you are a small business, there is absolutely no reason to build your own gateway. Small businesses really get the core competency concept: “What do we do? We make the best pizzas around. So why would we invest any money on anything not related to making the best pizzas around?” Cashless payments are a way of life, and one way we conduct cashless business is via credit cards. There are many other methods that are cheaper per transaction, and as these new technologies incubate, every CIO and CFO should look to see if incorporating them makes a significant impact on the bottom line. Until then, outsourcing payment processing is easily one of the best decisions you can make around PCI DSS.

Now, what if you choose not to outsource? There are many options for you to reduce your scope. The first is to investigate tokenization. In the purest sense of the word, a token is a replacement value for another piece of data. That is, instead of using 4111 1111 1111 1111 for a Visa card number, you would use some other value to represent that card number, and have a way to look up the original number should you need it. The token could be alpha-numeric, numeric only, or even a binary value. Based on the amount of existing data and the design of the applications using it, most tokens tend to take the form of a 16-digit numeric value.

Regardless of the makeup of the token, there should not be any mathematical (or otherwise) relationship between the token value and the original value. The only relationship that should exist between a token and the original value is the index table of numbers you would use to associate a worthless token with a potentially valuable PAN.

Note

A schema for such an index might look like this:

  CREATE TABLE Tokens (
    original_value CHAR(16) PRIMARY KEY,
    token CHAR(16)
  );

This is a rather simplistic view, but tokens don’t need to be complex to be effective. Generating tokens would happen outside of the database layer in this case, but you could build that generation process into the database layer. Making the original value the primary key will prevent two tokens from representing the same original value. It’s been a long time since either author has done database design, so we wouldn’t suggest implementing this directly. This is simply a way to illustrate a point.

Original values should not be reversible or derivable from token values. If they are, the tokens should be treated like ciphertext instead of tokens. When tokens and PANs are cryptographically related, it opens the door for cryptanalysis and the potential to reverse the crypto operations.
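To illustrate the point, a token generator can simply draw random digits and record the pairing in the index table, with no derivation from the PAN at all. Below is a minimal sketch in Python; the in-memory dict stands in for the index table, and the function names are our own invention:

```python
import secrets

# Stand-in for the index table; in production this would be a
# tightly controlled database, not application memory.
_token_index = {}  # token -> original PAN

def tokenize(pan: str) -> str:
    """Return a 16-digit token with no mathematical relation to the PAN."""
    for token, original in _token_index.items():
        if original == pan:
            return token  # reuse the existing token for this PAN
    while True:
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        # Retry on the (unlikely) collision with an existing token.
        if token not in _token_index:
            _token_index[token] = pan
            return token

def detokenize(token: str) -> str:
    """Look up the original PAN; raises KeyError for unknown tokens."""
    return _token_index[token]
```

Because the token comes from a cryptographically secure random source, nothing about the PAN can be recovered without the index table itself, which is exactly the property described above.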

Another concept is to look at how you process information, and choose a highly centralized and protected model for doing so. One method for doing this would be to centralize all of your data into a single enclave and only provide access to the applications and data through a “window-pane” powered by a virtualization technology like VMware View or Citrix. Several companies have taken this approach to keep their sensitive data centralized, and put a virtual air-gap between the user and the data.

In these instances, data is tightly controlled in a small environment and all interactions are done through a virtual desktop that acts like an abstraction layer. In most instances, companies will treat this as the true PCI DSS perimeter, so any user accessing the environment will typically use a 2-factor token (or some other form of 2-factor authentication) to meet Requirement 8.3. Any traffic to and from this environment is tightly controlled via firewalls, access control choke points, other access and authorization management tools, and network monitoring including technologies like DLP. PCI DSS does not require DLP directly; however, companies use the tools to ensure their scope stays where they expect it, and to create an early warning system that alerts administrators to potential problems before a breach occurs.

Reducing the scope of your PCI environment is a business decision that should be included in every journey to compliance. More often than not, simply removing the data from the environment and making it someone else’s problem can go a long way to minimizing the issues most companies face when complying with PCI DSS. The main goal for minimizing the impact is to use people, process, and technology to contain the spread of cardholder data. The above methods are a few examples, but the general methodology you would go through is:

1. Understand how your business uses credit card information (useful for the Executive Summary section of the Report on Compliance [ROC]).

2. Understand the business and legal requirements for retention of data (Requirement 3.1).

3. Completely map the flow of cardholder data throughout the entire enterprise (this is much more than what is required for Requirement 1.1.2 and should include the business process flows that can map to the technology endpoints).

4. Now that you have the scope of the problem, look for ways to reduce or remove cardholder data by changing business processes and isolating technology segments.

5. Create remediation plans for areas where you cannot remove cardholder data, including budgetary requirements for maintaining the data, costs for removing the data, soft costs associated with long-term management, and a 3–5 year total cost projection.

6. Approach finance teams and business leaders to explain the available options and get buy-in from the C-level to execute a plan.

7. Execute the plan.

The authors have used this methodology many times to help companies reduce their compliance costs and effect change in the business to ensure a workable long-term solution.

Planning Your PCI Project

If you are reading and working your project at the same time, you now have a really good idea of how good or bad the problem is, you have a solid list of projects for your company to complete during the journey, and you have executive buy-in to proceed. But what order should you go in? And how do you take a loose grouping of projects and demonstrate measured progress toward compliance? Luckily, the Council has something ready for you to use.

Note

The PCI Council created a tool called the Prioritized Approach for PCI DSS which can be downloaded from the PCI Standards & Documents section of the PCI Security Standards website. There is both a PDF version as well as a spreadsheet version that includes graphs and completion estimates for customization to your organization.

The Prioritized Approach for PCI DSS details a Council-endorsed roadmap to compliance built around six key milestones:

1. Remove sensitive authentication data and limit data retention. This milestone targets a key area of risk for entities that have been compromised. Remember—if sensitive authentication data and other cardholder data are not stored, the effects of a compromise will be greatly reduced. If you don’t need it, don’t store it.

2. Protect the perimeter, internal, and wireless networks. This milestone targets controls for the points of access used in most compromises—the network or a wireless access point.

3. Secure payment card applications. This milestone targets controls for applications, application processes, and application servers. Weaknesses in these areas offer easy prey for compromising systems and obtaining access to cardholder data.

4. Monitor and control access to your systems. Controls for this milestone allow you to detect the who, what, when, and how of access to your network and cardholder data environment.

5. Protect stored cardholder data. For those organizations that have analyzed their business processes and determined that they must store Primary Account Numbers, Milestone Five targets key protection mechanisms for that stored data.

6. Finalize remaining compliance efforts, and ensure all controls are in place. The intent of Milestone Six is to complete PCI DSS requirements and finalize all remaining related policies, procedures, and processes needed to protect the cardholder data environment.

Each requirement is broken into its various subrequirements, and each subrequirement is assigned to one of the six milestones. For the large number of you reading this who validate compliance via a Self-Assessment Questionnaire (SAQ), you will need to do some editing, as the Council's tools are not broken out by the various versions of the SAQ. Regardless of your level and validation requirements, you should run through the entire standard at least once to see if you are missing any of the 250+ tests. It's better to identify and remediate issues now so that as your business grows or the standard changes, you are not caught with a massive remediation bill down the road. Once you complete your initial gap analysis for the requirements you must validate against, see how your project list lines up with the six milestones. This is where the spreadsheet tool can be a huge help, as you can adjust the requirements to fit your projects even if they don't exactly line up with the milestones as defined by the Council. The closer you stay to the pre-defined milestones, the better, when talking to your acquirer or processor about your progress toward compliance. They will most likely be familiar with this document, and in some geographies, such as Europe, you may be expected to report compliance to a certain milestone to receive certain exceptions. More on this in Chapter 15.
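If you prefer to build your own tracker rather than use the Council's spreadsheet, the idea reduces to grouping gap-analysis items by milestone and reporting completion per milestone. This is a minimal sketch of that bookkeeping; the requirement numbers and statuses shown are illustrative examples, not a real gap analysis.

```python
# Minimal sketch of tracking remediation items against the six milestones of
# the Prioritized Approach, similar in spirit to the Council's spreadsheet
# tool. The items below are hypothetical gap-analysis results.
from collections import defaultdict

# (milestone, requirement, done?) — illustrative data only
items = [
    (1, "3.2 Do not store sensitive authentication data", True),
    (1, "3.1 Limit data retention",                       True),
    (2, "1.1.2 Current network diagram",                  True),
    (2, "11.1 Test for rogue wireless access points",     False),
    (3, "6.5 Address common coding vulnerabilities",      False),
    (5, "3.4 Render PAN unreadable wherever stored",      False),
]

progress = defaultdict(lambda: [0, 0])   # milestone -> [done, total]
for milestone, _req, done in items:
    progress[milestone][1] += 1
    if done:
        progress[milestone][0] += 1

for milestone in sorted(progress):
    done, total = progress[milestone]
    pct = 100 * done // total
    print(f"Milestone {milestone}: {done}/{total} complete ({pct}%)")
```

Reporting progress in this per-milestone form keeps your status updates in the vocabulary your acquirer or processor already understands.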

The biggest challenge you will face with PCI DSS remediation comes during the execution of your project. You will invariably have one or two teams that cannot execute the original remediation plan due to some unforeseen issue. This is where flexibility, knowledge, and experience really pay off in your organization. If you don't have on-hand resources with deep knowledge and experience in PCI DSS who can quickly assess and adjust the plans, you should consider augmenting your staff with a contract resource. Not all contract resources are alike, and each one should be interviewed as if you were going to hire them. It helps if you have been through some formal training on PCI DSS, such as the Internal Security Assessor (ISA) program offered through the Council. Not only will you be able to handle most of the minor compliance issues yourself, but you will also know what kinds of questions to ask a prospective contractor to see if they are a good fit and worth their price tag. Unfortunately, there is no basic formula for solving the issues that pop up. This is where the real magic happens during your PCI journey.

Case Study

The case studies presented in this section build upon what we have learned so far in this chapter. The first will take you through a company’s quest to fully understand their data sprawl problem and the second through a company looking to reduce PCI DSS scope with business leaders that are fighting change.

The Case of the Leaky Data

Tracey’s Tin Trimmings is just beginning its PCI project with Jeremy at the helm. Jeremy works for a regional retailer with 11 locations specializing in the sale and service of high-quality aftermarket automotive products. In order to compete with the larger big-box retailers, Tracey’s created a highly customized shopping and delivery experience that values customer service and satisfaction above all else. They store extensive information on their customers in their corporate data center, as well as with several third-party providers that enable customers to watch repairs and modifications through a browser, deliver customized information to customers about their vehicles, and offer interactive applications that allow users to scan product codes and chat with a live expert about integration with their vehicle.

Since Tracey’s was founded on a shoestring budget four years ago, the majority of the innovative customer interaction systems are cloud-based or delivered by third parties. Part of Tracey’s secret sauce is the ability to run analytics over all of the disparate sources of information to ensure their customers receive timely updates enticing them to spend money. Jeremy is a relatively new employee, with the company for 18 months, and is in charge of starting the PCI DSS compliance process as credit card volume skyrockets. He first talks to the business owners of the various divisions inside Tracey’s. Jeremy knows that if he doesn’t have a good working knowledge of all of the service providers and data interchange points, he won’t be able to properly scope the PCI environment. As he learns how the business operates, he realizes that, like most small companies, the early architects at Tracey’s favored utility over security and privacy; while customer information is fairly well protected, there are few boundaries on where it moves.

Jeremy maps out the business processes describing how things should work inside of Tracey’s systems. He then validates the business process documentation by putting network sniffing technology at key choke points, and by working with some of the third-party providers to discover the kinds of data that Tracey’s uses and has access to. Once Jeremy has that information, he updates the business process documentation to reconcile the architects’ vision with what actually happens in the real world.

Now that Jeremy has a true picture of what is happening inside Tracey’s, he realizes that major adjustments need to be made to the way information is processed to keep the scope manageable. As it stands today, everything is in scope because there is no real separation among the various systems and functions, and a full-scale remediation is neither affordable nor achievable on any reasonable timetable. He also opens discussions with his service providers to understand their compliance status, and revisits their contracts to ensure compliance with PCI Requirement 12.8. Through this process, he limits the scope to two sets of systems and two third parties. He is able to set up encrypted tunnels between the providers and the systems, and segment those systems from the rest of the network with firewalls. With the smaller scope defined, Jeremy now looks to perform his gap analysis, plan his compliance project over the next 12 months, and investigate tools to ensure he can automatically enforce the scope definition.

The Case of the Entrenched Enterprise

Jason’s Jump-Up, a large fitness chain targeting family health and nutrition, has grown by acquisition, merging with several regional gyms with similar cultures and customers. Jason started his business 15 years ago in Atlanta, and is now a major shareholder in the larger enterprise that spans from Texas to Virginia. The board hired a new CEO, COO, and CFO to handle the larger enterprise as it plans to IPO in 24 months to raise capital for a westward expansion. For the most part, the management staff from each of the acquired companies stayed in place, and each is run as a separate division with its own Profit & Loss (P&L) accountability. As the new executive management comes together, they realize that a massive overhaul in the corporate structure is necessary to sustain a larger organization, and Jason is charged with streamlining business processes across the enterprise and getting buy-in from all the divisional managers.

Jason knows how his original 18 locations operated, but is unfamiliar with all the inner workings of the other various companies that merged into the fold. As he visits with each manager, he learns that not only are things very different from division to division, but the managers are quite set in their ways and have an aversion to change.

After Jason meets with everyone, he puts together several strategies for moving forward, one of which includes partnering with a third party payment processor to manage all of the monthly membership dues and daily incidental fees that members incur by using certain amenities inside the clubs. The vast majority of the divisional managers fight the outsourcing proposal because they realize that the added fees will ultimately be charged to their divisional P&L, and they will receive lower revenues.

In order for Jason to sell his plan, he needed to get creative. He knew that processing payments was not something he wanted the company to focus on, as it took away from the core competency of health and fitness. On his initial run through the divisions, he brought a few consultants with him to analyze business processes and financials and to review the payment systems for PCI compliance. Since he had detailed information on the gaps in each environment, he was able to work with a consultant to estimate the remediation cost and the ongoing maintenance costs once the gaps were closed. Each division’s cost projections over the next five years were well into seven figures, with nearly 60% incurred in the first two years. Jason went to each of the divisional managers and showed them the cost projections. He informed each of the divisions that these costs would be hitting their P&L, or they could opt for the much lower operating costs of outsourcing per his original plan. Jason knew that the outsourcing plan made the most financial sense for the larger company in the long term, but he left the decision to each manager. Overwhelmingly, the managers opted to go with the outsourced payments model, ultimately removing some IT and operating expense from their P&L as they spun down systems responsible for processing payments.

The key to Jason’s success was not only doing all of the diligence required to paint the picture accurately, but also providing complete alternatives with future cost projections while involving the managers in the decision. Each manager knew that Jason would be taking the overall analysis to the board, and that unprofitable divisions would not be looked upon favorably.

Summary

Determining the correct scope for your PCI environment is the single most critical thing you must get right while planning anything related to PCI DSS in your company. While the officially chartered SIG didn’t produce a panacea for all scoping ills, it did produce quite a bit of content that is useful both in determining scope and in providing guidance when using more advanced technologies like EMV. The Council has provided us with the Prioritized Approach for PCI DSS, which can be tremendously useful in planning and executing our PCI-related projects. But no amount of tooling will compensate for a lack of support from the C-suite. If they don’t believe that compliance is important, and that reducing their compliance exposure is equally important, you will learn what it’s like to push a large rock up a very steep hill.
