Chapter 4. Infrastructure Security and Controls

Terms you need to understand

• Antivirus

• Antispam

• Pop-up blockers

• Virtualization technology

• Security groups

• Access control lists

• Group policies

• Logical tokens

• Probability

• Risk

Techniques you need to master

• Differentiate between the different types of security applications that can be applied on the internal network.

• Apply the appropriate network tools to facilitate network security.

• Implement the appropriate security groups, roles, rights, and permissions.

• Define logical internal access control methods.

• Explain how to calculate risk and return on investment.

In the preceding chapter, you learned about the basic components of the network infrastructure, its vulnerabilities, and some methods to mitigate exploitation. Network security goes beyond just knowing the risks and vulnerabilities. To mitigate threats and risks, you must also know how to assess your environment and protect it. This chapter discusses how to implement security applications to help mitigate risk and how to use security groups, roles, rights, and permissions in accordance with industry best practices. In addition, this chapter covers how you can use physical security as a tool to mitigate threats and protect computers and network infrastructure.

Implementing Security Applications

When dealing with security issues, two areas need to be covered. The first one addresses the physical components such as hardware, network components, and physical security designs. The second one deals with using protocols and software to protect data. The latter covers software that can help protect the internal network components, such as personal firewalls and antivirus software.

Personal Software Firewalls

Desktops and laptops need to have layered security just like servers. However, many organizations stop this protection at antivirus software, which in today’s environment may not be enough to ward off malware, phishing, and rootkits. One of the most common ways to protect desktops and laptops is to use a personal firewall. Firewalls can consist of hardware, software, or a combination of both. This discussion focuses on software firewalls that you can implement into the user environment.

The potential for hackers to access data through a user’s machine has grown substantially as hacking tools have become more sophisticated and difficult to detect. This is especially true for the telecommuter’s machine. Always-connected computers, typical with cable modems, give attackers plenty of time to discover and exploit system vulnerabilities. Many software firewalls are available, and most operating systems now come with them readily available. You can choose to use the OS vendor firewall or to install a separate one.

Like most other solutions, firewalls have strengths and weaknesses. By design, firewalls close off systems to scanning and entry by blocking ports or nontrusted services and applications. However, they require proper configuration. Typically, the first time a program tries to access the Internet, a software firewall asks whether it should permit the communication. Some users might find this annoying and disable the firewall or not understand what the software is asking and allow all communications. Another caveat is that some firewalls monitor only for incoming connections and not outgoing. Remember that even a good firewall cannot protect you if you do not exercise a proper level of caution and think before you download. No system is foolproof, but software firewalls installed on user systems can help make the computing environment safer.

Exam Alert

Monitoring outbound connections is important, so that you protect against malware that “phones home.” Without this type of protection, the environment is not properly protected.

Antivirus

Another necessary software program for protecting the user environment is antivirus software. Antivirus software is used to scan for malicious code in email and downloaded files. Antivirus software actually works backward. Virus writers release a virus, it is reported, and then antivirus vendors reverse-engineer the code to find a solution. After the virus has been analyzed, the antivirus software can look for specific characteristics of the virus. Remember that for a virus to be successful, it must replicate its code.

The most common method used in an antivirus program is scanning. Scanning searches memory, the boot sector, and files on the hard disk for identifiable virus code. Scanning identifies virus code based on a unique string of characters known as a signature. When the antivirus software detects the signature, it isolates the file. Then, depending on the software settings, the antivirus software quarantines it or permanently deletes it. Interception software detects viruslike behavior and then pops up a warning to the user. However, because interception software looks only for file changes, it might also flag legitimate changes.
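To illustrate the idea, here is a minimal Python sketch of signature-based scanning. The signature names and byte patterns are invented for illustration; real antivirus engines use far larger signature databases and scan memory and boot sectors as well as files on disk.

# Minimal sketch of signature-based scanning (illustrative only).
# The "signature database" below is hypothetical; real engines use far larger
# databases and also scan memory and boot sectors, not just files on disk.

SIGNATURES = {
    "Example.Worm.A": b"\xde\xad\xbe\xef\x13\x37",   # made-up byte pattern
    "Example.Trojan.B": b"EVIL_PAYLOAD_MARKER",       # made-up byte pattern
}

def scan_bytes(data):
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# Example: a "file" that happens to contain one of the signatures.
suspect = b"some harmless header" + b"EVIL_PAYLOAD_MARKER" + b"trailer"
for match in scan_bytes(suspect):
    print(f"Signature detected: {match} -- quarantine or delete per policy")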

In the past, antivirus engines used a heuristic engine to detect virus-like structures, or integrity checking to compare files against known-good versions. Both approaches are prone to false positives. A false positive occurs when the software classifies an action as a possible intrusion when it is actually a nonthreatening action. Chapter 7, “Intrusion Detection and Security Baselines,” explains this concept in more detail.

Exam Alert

Heuristic scanning looks for instructions or commands that are not typically found in application programs. The issue with these methods is that they are susceptible to false positives and cannot identify new viruses until the database is updated.

Antivirus software vendors update their virus signatures on a regular basis. Most antivirus software connects to the vendor website to check the software database for updates and then automatically downloads and installs them as they become available. Besides setting your antivirus software for automatic updates, you should set the machine to automatically scan at least once a week.

In the event a machine does become infected, the first step is to remove it from the network so that it cannot damage other machines. The best defense against virus infection is user education. Most antivirus software used today is fairly effective, but only if it’s kept updated and the user practices safe computing habits such as not opening unfamiliar documents or programs. Despite all this, antivirus software cannot protect against brand new viruses, and often users do not take the necessary precautions. Users sometimes disable antivirus software because it may interfere with programs that are currently installed on the machine. Be sure to guard against this type of incident.

Antispam

Sophos Research reports that 92.3 percent of all email was spam during the first quarter of 2008. Spam is defined several ways, the most common being unwanted commercial email. Although spam may merely seem to be an annoyance, it uses bandwidth, takes up storage space, and reduces productivity. Antispam software can add another layer of defense to the infrastructure.

You can install antispam software in various ways. The most common methods are at the email server or the email client. When the software and updates are installed on a central server and pushed out to the client machines, this is called a centralized solution. When the updates are left up to the individual users, you have a decentralized environment. As with the previous discussions in this section, this discussion focuses on the client-side implementation. The main component of antispam software is heuristic filtering. Heuristic filtering has a predefined rule set that compares incoming email information against the rule set. The software reads the contents of each message and compares the words in that message against the words in typical spam messages. Each rule assigns a numeric score to the probability of the message being spam. This score is then used to determine whether the message meets the acceptable level set. If many of the same words from the rule set are in the message being examined, it’s marked as spam. Specific spam filtering levels can be set on the user’s email account. If the setting is high, more spam will be filtered, but it may also filter legitimate email as spam, thus causing false positives.

Additional settings can be used in the rule set. In general, an email address added to the approved list is never considered spam. This is also known as a white list. Using white lists allows more flexibility in the type of email you receive. For example, putting the addresses of your relatives or friends in your white list allows you to receive any type of content from them. An email address added to the blocked list is always considered spam. This is also known as a black list. Other factors may affect the ability to receive email on white lists. For example, if attachments are not allowed and the email has an attachment, the message may get filtered even if the address is on the approved list.
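The following minimal Python sketch illustrates the scoring and white-list ideas just described. The rule phrases, weights, threshold, and addresses are invented for illustration; commercial filters use far larger rule sets and often combine scoring with other techniques.

# Minimal sketch of heuristic spam scoring (illustrative only).
# The rules, weights, threshold, and addresses are invented.

RULES = {
    "free money": 3.0,
    "act now":    2.5,
    "click here": 1.5,
}
SPAM_THRESHOLD = 5.0   # messages scoring at or above this are marked as spam

def spam_score(message):
    """Sum the weights of every rule phrase found in the message."""
    text = message.lower()
    return sum(weight for phrase, weight in RULES.items() if phrase in text)

def is_spam(message, whitelist=(), sender=""):
    """Whitelisted senders are never treated as spam."""
    if sender in whitelist:
        return False
    return spam_score(message) >= SPAM_THRESHOLD

print(is_spam("Act now for free money! Click here."))            # True
print(is_spam("Lunch at noon?", whitelist={"mom@example.com"},
              sender="mom@example.com"))                          # False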

Pop-Up Blockers

A common method for Internet advertising is using a window that pops up in the middle of your screen to display a message when you click a link or button on a Website. Although some pop-ups are helpful, many are an annoyance, and others can contain inappropriate content or entice the user to download malware.

There are several variations of pop-up windows. A pop-under ad opens a new browser window under the active window. These types of ads often are not seen until the current window is closed. Hover ads are Dynamic Hypertext Markup Language (DHTML) pop-ups. They are essentially “floating pop-ups” in a web page.

Most online toolbars come with pop-up blockers, various downloadable pop-up blocking programs are available, and the browsers included with some operating systems, such as Windows XP, can block pop-ups. Pop-up blockers, just like much of the other defensive software discussed so far, have settings that you can adjust. You might want to try setting the software to medium so that it will block most automatic pop-ups but still allow functionality. Keep in mind that you can adjust the settings on pop-up blockers to meet the organizational policy or to best protect the user environment.

Several caveats apply to using pop-up blockers. Some pop-ups are helpful. Some web-based application installers use a pop-up to install software, so if all pop-ups are blocked, the user may not be able to install applications or programs. Field help for fill-in forms is often presented in a pop-up, and some pop-up blockers reload the page and delete information the user has already entered, causing unnecessary grief. Pop-up blockers can also be circumvented in various ways. Most block only JavaScript-generated pop-ups; therefore, pop-ups generated by technologies such as Flash bypass the blocker. On many Internet browsers, holding down the Ctrl key while clicking a link allows the link to bypass the pop-up filter.

Virtualization Technology

With more emphasis being placed on going green and power becoming more expensive, virtualization offers cost benefits by decreasing the number of physical machines required within an environment. This applies to both servers and desktops. On the client side, the ability to run multiple operating environments allows a machine to support applications and services for an operating environment other than the primary environment. Currently, many implementations of virtual environments are available to run on just about everything from servers and routers to USB thumb drives.

For virtualization to occur, a hypervisor is used. A hypervisor, or virtual machine monitor (VMM), is a virtualization platform that allows more than one operating system to run on a host computer at the same time and controls how access to the computer’s processors and memory is shared. A Type 1 (native or bare-metal) hypervisor is software that runs directly on the hardware platform; the guest operating system runs at the second level above the hardware. This technique allows full guest systems to be run in a relatively efficient manner, and the guest OS is not aware it is being virtualized and requires no modification. A Type 2 (hosted) hypervisor runs as an application or shell within an already running operating system, and the guest operating system runs at the third level above the hardware.

Hardware vendors are rapidly embracing virtualization and developing new features to simplify virtualization techniques. Virtual environments can be used to improve security by allowing unstable applications to be used in an isolated environment and providing better disaster recovery solutions. Virtual environments are used for cost-cutting measures as well. One well-equipped server can host several virtual servers. This reduces the need for power and equipment. Forensic analysts often use virtual environments to examine environments that may contain malware or as a method of viewing the environment the same way the criminal did. Preconfigured virtual appliances are available for operating systems, networking components, and applications.

The use of virtualization is growing in the individual-use market and in the corporate environment. Users can now load a virtualized environment using a portable USB storage device or network-attached storage, leaving the original system intact. These advances give the organization more control over the environment because virtual machines can be pushed out to the desktops or given to mobile workers. However, the security of the host machine and the virtual machine must be considered, as must the investigative issues in using such environments.

The security concerns of virtual environments begin with the guest operating system. If the host machine is compromised, an intruder can gain control of all the guest operating systems. In addition, because hardware is shared, most virtual machines run with very high privileges. This can allow an intruder who compromises a virtual machine to compromise the host machine, too. Vulnerabilities also come into play. For example, a few years ago, VMware’s NAT service had a buffer-overflow vulnerability that allowed remote attackers to execute malicious code by exploiting the virtual machine itself. Virtual machine environments need to be patched just like host environments and are susceptible to the same issues as a host operating system. You should also be cognizant of files shared among guest and host operating systems.

Exam Alert

Virtualized environments, if compromised, can provide access to not only the network, but also any virtualization infrastructure. This puts a lot of data at risk.

Security policy should address virtual environments. Any technology or software without a defined business need should not be allowed on systems. This applies to all systems, including virtual environments. To secure a virtualized environment, machines should be segmented by the sensitivity of the information they contain. A policy should be in place that specifies that hardware is not shared between test environments and sensitive data. Another way to secure a virtualized environment is to use standard locked-down images. Other areas that present issues for a virtualized environment and need special consideration are deploying financial applications on virtualized shared hosting and secure storage on storage-area network (SAN) technologies.

Applying Network Tools to Facilitate Security

Chapter 3, “Infrastructure Basics,” described the design elements and components such as firewalls, VLANs, and perimeter network boundaries that distinguish between private networks, intranets, and the Internet. Network compromises now carry an increased threat with the spread of botnets, which were discussed in Chapter 1, “System Threats and Risks.” This means an entire corporate network can be used for spam relay, hosting phishing sites, and launching distributed denial-of-service (DDoS) attacks. It is important not only to know how to use the proper elements in a design but also to know how to position and apply these tools to facilitate security. This section discusses just that.

Firewalls

In any environment, threats to network integrity come from both external and internal sources. The primary function of a firewall is to mitigate threats by monitoring all traffic entering or leaving a network. As you learned in Chapter 3, there are three basic types of firewalls:

Packet filtering: Best suited for simple networks or for protecting a network used mainly for Internet access; a minimal rule-matching sketch follows this list. The placement of a packet-filtering firewall is between the Internet and the protected network. It filters all traffic entering or leaving the network.

Proxy service: Allows organizations to offer services securely to Internet users. All servers hosting public services are placed in the demilitarized zone (DMZ), with the proxy firewall between the DMZ and the internal network.

Stateful inspection: Suited for main perimeter security. Stateful inspection firewalls can thwart port scanning by closing off ports until a connection to the specific port is requested.
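Here is the rule-matching sketch referenced above: a minimal Python illustration of how a packet filter applies an ordered rule set with a default-deny fallback. The rules and ports are invented for illustration and omit state tracking, address matching, and logging.

# Minimal sketch of packet-filter rule matching (illustrative only).
# Rules are invented; a real firewall also tracks connection state,
# matches addresses, handles fragmentation, and logs decisions.

RULES = [
    # (action, protocol, destination port) -- first match wins
    ("allow", "tcp", 80),    # HTTP to the web server
    ("allow", "tcp", 443),   # HTTPS to the web server
    ("deny",  "tcp", None),  # all other TCP
    ("deny",  "udp", None),  # all UDP
]

def filter_packet(protocol, dst_port):
    """Return 'allow' or 'deny' based on the first matching rule."""
    for action, proto, port in RULES:
        if proto == protocol and (port is None or port == dst_port):
            return action
    return "deny"   # default deny if nothing matched

print(filter_packet("tcp", 443))   # allow
print(filter_packet("tcp", 23))    # deny (Telnet blocked)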

Although Chapter 3 discussed the types and uses of various firewall technologies, it did not discuss the placement. Knowing the difference between these types of firewalls and the proper placement of each is important to securing the infrastructure. As you read through this section, you might need to review the descriptions of each firewall type in the preceding chapter.

The main objective for the placement of firewalls is to allow only traffic that the organization deems necessary and to provide notification of suspicious behavior. Most organizations deploy, at a minimum, two firewalls. The first firewall is placed in front of the DMZ to allow requests destined for servers in the DMZ or to route requests to an authentication proxy. The second firewall is placed between the DMZ and the internal network to allow outbound requests. All necessary initial connections from the outside are handled by machines in the DMZ. For example, a RADIUS server may be running in the DMZ for improved performance and enhanced security, even though its database resides inside the company intranet. Most organizations have many firewalls, with the level of protection strongest nearest the outside edge of the environment. Figure 4.1 shows an example.

Figure 4.1. A network with two firewalls.


Exam Alert

Watch for scenarios that ask you to select the proper firewall placement based on organizational need.

When deploying multiple firewalls, you might experience network latency. If you do, check the placement of the firewalls and possibly reconsider the topology to be sure you get the most out of them. Another factor to think about is the use of a storage-area network (SAN) or network-attached storage (NAS) behind a firewall. Because most storage environments span multiple networks, this creates a virtual bridge that can counteract a firewall, providing a channel into the storage environment if a system in the DMZ is compromised.

Proxy Servers

Proxy servers are used for a variety of reasons, so their placement depends on the usage. Proxy servers can be placed between the private network and the Internet for Internet connectivity or internally for Web content caching. If the organization is using the proxy server for both Internet connectivity and Web content caching, the proxy server should be placed between the internal network and the Internet, with access for users who are requesting the Web content. In some proxy server designs, the proxy server is placed in parallel with IP routers. This design allows for network load balancing by forwarding all HTTP and FTP traffic through the proxy server and all other IP traffic through the router.

Every proxy server in your network must have at least one network interface. Proxy servers with a single network interface can provide Web content caching and IP gateway services. To provide Internet connectivity, you must specify two or more network interfaces for the proxy server.

Internet Content Filters

Network Internet content filters can be hardware or software. Many network solutions combine both. Hardware appliances are usually connected to the same network segment as the users they will monitor. Other configurations include being deployed behind a firewall or in a DMZ, with public addresses behind a packet-filtering router. These appliances use access control filtering software on the dedicated filtering appliance. The device monitors every packet of traffic that passes over a network.

Protocol Analyzers

Protocol analyzers can be placed inline, between the devices whose traffic you want to capture. If you are analyzing SAN traffic, the analyzer can be placed outside the direct link with the use of an optical splitter. In either case, the analyzer is positioned to capture traffic between the host and the monitored device.

Logical Access Control Methods

In this section, we focus on the logical methods of access control. Logical controls are important to infrastructure security because these controls are part of assessing your environment and protecting it to mitigate threats and risks. Insider threats are very real, and the more access someone has, the bigger the threat he or she can become. Logical access controls are used in addition to physical security controls to limit access to data. This design helps ensure the integrity of information, preserve the confidentiality of data, and maintain the availability of information. In addition, it helps the organization conform to laws, regulations, and standards. This section covers the most common methods used for logical access control. Chapter 5, “Access Control and Authentication Basics,” focuses on access control mechanisms and methods for secure network authentication.

The access level that users are given directly affects the level of network protection you have. Even though it might sound strange that the network should be protected from its own users, the internal user has the greatest access to data and the opportunity to either deliberately sabotage it or accidentally delete it.

Security Groups and Roles with Appropriate Rights and Privileges

When dealing with user access, a fine line often exists between enough access and too much access. In this section, we look at how to manage user access by using groups and group policies.

A user account holds information about the specific user. It can contain basic information such as name, password, and the level of permission the user has. It can also contain more specific information, such as the department the user works in, a home phone number, and the days and hours the user is allowed to log on to specific workstations. Groups are created to make the sharing of resources more manageable. A group contains users who share a common need for access to a particular resource. Even though the terminology and connotations may differ with each operating system, terms such as rights, permissions, and privileges all refer to the access that a user or group account is granted.

When working with logical controls, there are two models for assignment of permissions and rights: user-based and group-based. Within a user-based model, permissions are uniquely assigned to each account. One example of this is a peer-to-peer network or a workgroup where access is granted based on individual needs. This access type is also found in government and military situations and in private companies where patented processes and trademark products require protection. User-based privilege management is usually used for specific parts of the network or specific resources. This type of policy is time-consuming and difficult for administrators to handle, plus it does not work well in large environments.

Access control over large numbers of user accounts can be more easily accomplished by managing the access permissions on each group, which are then inherited by the group’s members. This is called group-based access control. In this type of access, permissions are assigned to groups, and user accounts become members of the groups. Each user account has access based on the combined permissions inherited from its group memberships. These groups often reflect divisions or departments of the company, such as human resources, sales, development, and management. Users can be placed in universal, global, or local groups. The last item that warrants mentioning is that in enterprise networks, groups may be nested. Group nesting can simplify permission assignment if you know how to use it, or it can complicate troubleshooting when you don’t know what was set up or why.

Exam Alert

By using groups, access control can be accomplished more efficiently and effectively by fewer administrators and with less overhead.

You will find that creating groups and assigning users to them makes the administration process much easier. In Windows Server 2003, Active Directory provides flexibility by allowing two types of groups: security groups and distribution groups. Security groups are used to assign rights and permissions to groups for resource access. Distribution groups are assigned to a user list for applications or non-security-related functions. For example, a distribution group can be used by Microsoft Exchange to distribute mail.

Certain groups are installed by default. As an administrator, you should know what these groups are and which accounts are installed by default. In dealing with individual accounts, the administrative account should be used only for the purpose of administering the server. Granting users this type of access is a disaster waiting to happen; an individual using the administrative account can put a company’s entire business in jeopardy. By knowing which accounts are installed by default, you can determine which are really needed and which can be disabled, thereby making the system more secure. You should also know which accounts, if any, are installed with blank passwords. The security settings in many of the newer operating systems do not allow blank passwords, but there might still be accounts in older operating systems that have one.

User rights are applied to security groups to determine what members of those groups can do within the scope of a Windows domain or forest. User rights are assigned through security options that apply to user accounts, and the assignment is twofold: a right can grant specific privileges, or it can grant logon rights, to users and groups in your computing environment. Logon rights control how and where users can log on, such as the right to log on to a system locally, whereas privileges allow users to perform system tasks, such as the right to back up files and directories. Although user rights can apply to individual user accounts, they are best administered by using group accounts.

When working with groups, remember a few key items. No matter what OS you are working with, if you are giving a user full access in one group and no access in another group, the result will be no access. However, group permissions are cumulative, so if a user belongs to two groups and one has more liberal access, the user will have the more liberal access, except where the no access permission is involved.

Exam Alert

When assigning user permissions, if the groups the user is assigned to have liberal access and another group has no access, the result is no access.

There are no exceptions. If a user has difficulty accessing information after he or she has been added to a new group, the first item you may want to check for is conflicting permissions.
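The following minimal Python sketch illustrates how cumulative group permissions combine, with an explicit no-access entry overriding everything else. The group names and permission sets are invented for illustration.

# Minimal sketch of combining group permissions (illustrative only).
# Permissions are cumulative across groups, except that an explicit
# "no access" always wins.

GROUP_PERMS = {
    "Sales":      {"read"},
    "Managers":   {"read", "write"},
    "Terminated": {"no_access"},     # explicit deny
}

def effective_permissions(user_groups):
    """Union of all group permissions, unless any group denies access."""
    perms = set()
    for group in user_groups:
        group_perms = GROUP_PERMS.get(group, set())
        if "no_access" in group_perms:
            return set()             # no access overrides everything
        perms |= group_perms
    return perms

print(effective_permissions(["Sales", "Managers"]))       # {'read', 'write'} (order may vary)
print(effective_permissions(["Managers", "Terminated"]))  # set() -- no access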

Security Controls for File and Print Resources

Print and file sharing increases the risk of intruders being able to access any of the files on a computer’s hard drive. Locking down these shares is imperative because unprotected network shares are always easy targets and rank high in the list of top security exploits. Depending on the operating systems in use, there are two areas to look at: the Server Message Block (SMB) file-sharing protocol and the Common Internet File System (CIFS).

Determine whether file and print sharing is really needed. If it isn’t, unbind NetBIOS from TCP/IP. By doing so, you effectively disable Windows SMB file and print sharing. CIFS is a newer implementation of SMB that allows file and print sharing. Here are some recommendations for securing file and print sharing:

• Use an antivirus product that searches for CIFS worms.

• Run intrusion testing tools.

• Filter traffic on UDP/TCP ports 137, 138, 139, and 445.

• Install proper firewalls.

User education and mandatory settings can go a long way toward making sure that file sharing is not enabled unless needed. Finally, keep in mind that as Microsoft operating systems are installed, a number of hidden shares are created by default. Any intruder would be aware of this and can map to them if given the chance.

Access Control Lists

In its broadest sense, an access control list (ACL) is the underlying data associated with a network resource that defines the access permissions. The most common privileges are the rights to read a file, write to it, delete it, and execute it. ACLs can apply to routers and other devices. For purposes of this discussion, however, we limit the definition to operating system objects. Every operating system object created has a security attribute that matches it to an ACL. The ACL has an entry for each system user that defines the access privileges to that object. In Microsoft operating systems, each ACL has one or more access control entries (ACEs). These are descriptors that contain the name of a user, group, or role. The access privileges are stated in a string of bits called an access mask. Generally, the object owner or the system administrator creates the ACL for an object.

ACLs can be broken down further into discretionary access control lists (DACLs) and system access control lists (SACLs). DACL use and SACL use are specific to Microsoft operating systems and are based on ACEs. A DACL identifies who or what is allowed access to the object. If the object does not have a DACL, everyone is granted full access. If the object’s DACL has no ACEs, the system denies all access. An SACL enables administrators to log attempts to access the object. Each ACE specifies the types of access attempts that cause the system to generate a record in the security event log.
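The following minimal Python sketch mirrors the DACL rules just described: no DACL grants everyone full access, an empty DACL denies all access, and otherwise access is allowed only if an ACE grants the requested rights. The principals and access-mask values are invented for illustration and are far simpler than real Windows security descriptors.

# Minimal sketch of DACL evaluation (illustrative only).
# Principals and mask values are invented; Windows uses SIDs and richer ACE types.

READ, WRITE, DELETE, EXECUTE = 0x1, 0x2, 0x4, 0x8

class ACE:
    def __init__(self, principal, access_mask):
        self.principal = principal
        self.access_mask = access_mask

def check_access(dacl, principal, requested):
    """No DACL -> full access for everyone; empty DACL -> deny all;
    otherwise allow only if some ACE grants the requested bits."""
    if dacl is None:
        return True
    if not dacl:
        return False
    return any(ace.principal == principal and
               (ace.access_mask & requested) == requested
               for ace in dacl)

report_dacl = [ACE("hr_group", READ | WRITE), ACE("auditors", READ)]
print(check_access(report_dacl, "auditors", READ))    # True
print(check_access(report_dacl, "auditors", WRITE))   # False
print(check_access(None, "anyone", DELETE))           # True  (no DACL)
print(check_access([], "anyone", READ))               # False (empty DACL)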

Implementation of access management is based on one of two models: centralized or decentralized. Both the group-based and role-based methods of access control have a centralized database of accounts and roles or groups to which the accounts are assigned. This database is usually maintained on a central server that is contacted by the server providing the resource when a user’s ACL must be verified for access.

The drawback to the centralized model is scalability. As the company and network grow, it becomes more and more difficult to keep up with the tasks of assigning and managing network resource access and privileges. Decentralized security management is less secure but more scalable. Responsibilities are delegated, and employees at different locations are made responsible for managing privileges within their administrative areas. For example, in Microsoft Active Directory, this can be delegated by domain. Decentralized management is less secure because more people are involved in the process and there is a greater possibility for errors.

Group Policies

After you create groups, Group Policy can be used for ease of administration in managing the environment of users. This can include installing software and updates or controlling what appears on the desktop based on the user’s job function and level of experience. The Group Policy object (GPO) is used to apply Group Policy to users and computers. A GPO is a virtual storage location for Group Policy settings, which are stored in the Group Policy container or template. How companies use Group Policy depends on the level of client management required.

Exam Alert

An excessive number of group policies can create longer logon times, and if conflicting policies are implemented, you might have a difficult time tracking down why one of them isn’t working as it should.

In a highly managed environment where users cannot configure their own computers or install software, there will be considerable control over users and computers with Group Policy. In a minimally managed environment where users have more control over the environment, Group Policy will be used minimally. Group Policy is versatile and can be used with Active Directory to define standards for the whole organization or for the members of a single workgroup, location, or job function.

Group Policy enables you to set consistent common security standards for a certain group of computers and enforce common computer and user configurations. For example, you can use Group Policy to restrict the use of USB devices in a group of computers. It also simplifies computer configuration by distributing applications and restricting the distribution of applications that may have limited licenses. To allow this wide range of administration, GPOs can be associated with or linked to sites, domains, or organizational units. Because Group Policy is so powerful, various levels of administrative roles can be appointed. These include creating, modifying, and linking policies.

Group Policy can be applied at multiple levels in Active Directory. It is important that you understand policy application order and the effect that it can have on the resulting security policy of a computer. Group policies are applied in a specific order or hierarchy. By default, a group policy is inherited and cumulative. GPOs are processed in the following order:

1.  The local GPO

2.  GPOs linked to sites

3.  GPOs linked to domains

4.  GPOs linked to organizational units

The order of GPO processing is important because a policy applied later overwrites a policy applied earlier. In addition, when multiple GPOs are linked to the same site, domain, or organizational unit, they are processed from the bottom of the link list up, so if there is a conflict, the GPO at the top of the list prevails. A short sketch of the default processing order follows the list of exceptions below. The default order of processing has the following exceptions:

• If the computer is a workgroup member rather than a domain member, only the local policy is applied.

• Any policy except for the local one can be set to No Override, meaning none of its policy settings can be overridden.

• Block Inheritance can be set at the site, domain, or organizational unit level so that policies are not inherited; however, if the policy is marked No Override, it cannot be blocked.

• Loopback is an advanced setting that provides alternatives to the default method of obtaining the ordered list of GPOs.
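Here is the short sketch of the default processing order mentioned earlier. It is a minimal Python illustration in which settings applied later overwrite settings applied earlier; the policy names and settings are invented, and exceptions such as No Override, Block Inheritance, and Loopback are not modeled.

# Minimal sketch of GPO processing order (illustrative only).
# Settings applied later overwrite earlier ones, which is why the
# OU-linked GPO wins for the screensaver setting here.

local_gpo  = {"screensaver_timeout": 30, "allow_usb": True}
site_gpo   = {"allow_usb": False}
domain_gpo = {"min_password_length": 8}
ou_gpo     = {"screensaver_timeout": 10}

def apply_gpos(gpos):
    """Apply GPOs in order: local, site, domain, OU (last writer wins)."""
    effective = {}
    for gpo in gpos:
        effective.update(gpo)
    return effective

print(apply_gpos([local_gpo, site_gpo, domain_gpo, ou_gpo]))
# {'screensaver_timeout': 10, 'allow_usb': False, 'min_password_length': 8}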

As you can see, Group Policy can be tricky to configure after you put numerous policies in place. To troubleshoot Group Policy appropriately, know the order of application and the exceptions. Group Policy changes can be audited, and thus you can track any changes made and confirm their validity.

Password Policy

Because passwords are one of the easiest avenues for acquiring access, password length, duration, history, and complexity requirements are all important to the security of the network. When setting up user accounts, proper planning and policies should be determined. Passwords are one of the first pieces of information entered by a user. Strong passwords can be derived from events or things the user knows and are discussed in Chapter 12, “Organizational Controls.” Make users aware of these requirements and the reasons for them. Consider the following when setting password policies (a minimal complexity-check sketch follows the list):

• Make the password length at least eight characters and require the use of uppercase and lowercase letters, numbers, and special characters.

• Lock user accounts out after three to five failed logon attempts. This policy stops automated programs from continuing to guess passwords on the account.

• Require users to change passwords every 60 to 90 days, depending on how secure the environment needs to be. Remember that the more frequently users are required to change passwords, the greater the chance that they will write them down.

• Set the server to not allow users to use the same password over and over again. Certain operating systems have settings that do not allow users to reuse a password for a certain length of time or number of password changes.

• Never store passwords in an unsecure location. Sometimes a company may want a list of server administrative passwords. This list might end up in the wrong hands if not properly secured.

• Upon logon, show a statement to the effect that network access is granted under certain conditions and that all activities may be monitored. This way you can be sure that any legal ramifications are covered.
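Here is the minimal complexity-check sketch mentioned before the list. It tests a candidate password against the first recommendation; the thresholds simply follow that bullet and are not tied to any particular product.

# Minimal sketch of checking the complexity recommendation above
# (illustrative only; thresholds follow the bullet list, not any vendor).

import string

def meets_policy(password, min_length=8):
    """Require minimum length plus upper, lower, digit, and special characters."""
    return (len(password) >= min_length
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(meets_policy("Winter2024!"))   # True
print(meets_policy("password"))      # False -- too simple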

If you are using Windows servers on your network, you will most likely have domains. Domains have their own password policy in addition to the local password policy. These are two different policies, and you need to understand the difference between them.

Domain Password Policy

Password policies help secure the network and define the responsibilities of users who have been given access to company resources. You should have all users read and sign security policies as part of their employment process. Domain password policies affect all users in the domain. The effectiveness of these policies depends on how and where they are applied. The three areas that can be configured are password, account lockout, and Kerberos policies. When configuring these settings, keep in mind that you can have only one domain account policy. The policy is applied at the root of the domain and becomes the policy for any system that is a member of the domain in Windows Server 2003 and earlier server versions.

Domain password policies control the complexity and lifetime settings for passwords so that they become more complex and secure. This reduces the likelihood of a successful password attack. Table 4.1 lists the default settings for Windows Server 2003 SP1.

Table 4.1. Default Password Policy Settings

Enforce password history: 24 passwords remembered
Maximum password age: 42 days
Minimum password age: 1 day
Minimum password length: 7 characters
Password must meet complexity requirements: Enabled
Store passwords using reversible encryption: Disabled

All the settings in Table 4.1 should be configured to conform to the organization’s security policy. Also, setting the change frequency and password complexity too strictly can cause user frustration, leading to passwords being written down.

The account lockout policy can be used to secure the system against attacks by disabling the account after a certain number of attempts, for a certain period of time. The Kerberos policy settings are used for authentication services. In most environments, the default settings should suffice. If you do need to change them, remember that they are applied at the domain level.

Time-of-Day Restrictions and Account Expiration

Besides password restrictions, logon hours can be restricted in many operating systems. By default, all domain users can log on at any time. Many times, it is necessary to restrict logon hours for maintenance purposes. For example, at 11:00 P.M. each evening, the backup is run; therefore, you might want to be sure that everyone is off of the system. Or if databases get re-indexed on a nightly basis, you might have to confirm that no one is on them. This is also a good way to be sure that a hacker isn’t logging on with stolen passwords. Logon hours can be restricted by days of the week, hours of the day, or both. Each OS is different, so the effect of the restrictions will differ if the user is currently logged on when the restriction time begins. In a Microsoft environment, whether users are forced to log off when their logon hours expire is determined by the Automatically Log Off Users setting. In other environments, the user may be allowed to stay logged on, but once logged off, the user cannot log back on. The logon schedule is enforced by the Kerberos Group Policy setting Enforce User Logon Restrictions, which is enabled by default in Windows Server 2003.

You can also assign time-of-day restrictions to ensure that employees use computers only during specified hours. This setting is useful for organizations where users require supervision, where security certification requires it, or where employees are mainly temporary or shift workers.

The account expires attribute specifies when an account expires. This setting may be used under the same conditions as mentioned previously for the time-of-day restrictions. Temporary or contract workers should have user accounts that are valid only for a certain amount of time. This way when the account expires, it can no longer be used to log on to any service. Statistics show that a large number of temporary accounts are never disabled. Limiting the time an account is active for such employees should be part of the policies and procedures. In addition, user accounts should be audited on a regular basis.
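The following minimal Python sketch illustrates enforcing logon hours and an account expiration date together. The schedule, dates, and account fields are invented for illustration; real directory services store and enforce these attributes themselves.

# Minimal sketch of enforcing logon hours and account expiration
# (illustrative only; schedule and dates are invented).

from datetime import datetime

account = {
    "username": "temp_contractor",
    "allowed_days": {0, 1, 2, 3, 4},   # Monday=0 .. Friday=4
    "allowed_hours": range(7, 19),     # 07:00 through 18:59
    "expires": datetime(2026, 6, 30),
}

def logon_permitted(acct, when=None):
    """Deny logon outside allowed days/hours or after the account expires."""
    when = when or datetime.now()
    if when >= acct["expires"]:
        return False
    return when.weekday() in acct["allowed_days"] and when.hour in acct["allowed_hours"]

print(logon_permitted(account, datetime(2025, 3, 4, 9, 30)))   # True  (Tuesday, 09:30)
print(logon_permitted(account, datetime(2025, 3, 8, 23, 5)))   # False (Saturday, 23:05)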

Logical Tokens

This section focuses on logical tokens. Physical tokens are discussed in Chapter 5. An access token is created as part of the authentication process whenever a user logs on to a computer, and it is used when the user attempts to access a resource. An access token contains information about the identity and privileges associated with the security principal, such as a user, group, computer, or domain controller. A security identifier (SID) is a unique value that identifies a security principal; in a Microsoft Windows environment, a SID is issued to every security principal when it is created. A user’s access token includes the SIDs of all groups to which the user belongs. When a user logs on and authentication succeeds, the logon process returns a SID for the user and a list of SIDs for the user’s security groups; these comprise the access token.

Because of a system limitation, the field that contains the SIDs of the principal’s group memberships in the access token can contain a maximum of 1,024 SIDs. If there are more than 1,024 SIDs in the principal’s access token, the local security authority (LSA) cannot create an access token for the principal during the logon attempt. If this happens, the principal cannot log on or access resources.
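The following minimal Python sketch illustrates the idea of an access token built from a user SID and group SIDs, including the 1,024-SID limit described above. The token structure is a simplification invented for illustration; real tokens carry privileges and other data, and most of the SID strings here are examples.

# Minimal sketch of building an access token from group SIDs
# (illustrative only; the token structure is greatly simplified).

MAX_SIDS = 1024   # limit on group SIDs in a token

def build_access_token(user_sid, group_sids):
    """Return a token dict, or None if the SID limit would be exceeded
    (the real LSA refuses to create the token and the logon fails)."""
    if len(group_sids) > MAX_SIDS:
        return None
    return {"user": user_sid, "groups": list(group_sids)}

token = build_access_token("S-1-5-21-1111-2222-3333-1001",
                           ["S-1-5-21-1111-2222-3333-513",   # Domain Users
                            "S-1-5-32-545"])                  # Builtin Users
print(token is not None)                                      # True

too_many = [f"S-1-5-21-1111-2222-3333-{rid}" for rid in range(2000)]
print(build_access_token("S-1-5-21-1111-2222-3333-1001", too_many))  # None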

Physical Control

When evaluating the physical security of the infrastructure, the security team should coordinate the security setup of the facility and surrounding areas, identify which groups are allowed to enter different areas, and determine the method of authentication to be used. As you deploy the new security systems, include training on how to use the systems. The timing of training should be coordinated so that training and physical deployment finish at about the same time.

As with all facets of security, physical security must be maintained. If maintenance is overlooked, the system will begin to fall apart. Broken locks, loose doorknobs, and cracked windows will let a potential intruder know that you are not maintaining your security systems. In addition, if security mechanisms are left in poor or nonfunctional condition, employees will bypass the security to get their jobs done. This will compromise the entire system and make the original investment of time and money worthless. Chapter 5 discusses physical access in greater detail.

This brings us to the next topic: the investment of time and money and the return on investment and calculation of risk. To protect the infrastructure, security vulnerabilities must be presented in terms of dollars and cents. Before funding a project, a formal business case analysis should be performed.

Risk and Return on Investment

You have already learned about a variety of software and hardware solutions that will make the infrastructure safer, but to justify the cost, you must know how to calculate the return on investment. Items such as antivirus software, firewalls, intrusion-detection systems, and virtualized environments do not generate revenue. IT is a cost center. By identifying assets, threats, and vulnerabilities, you can make informed decisions about a solution’s cost-effectiveness.

Identifying Risk

Risk is the possibility of loss or danger. Risk management is the process of identifying and reducing risk to a level that is comfortable and then implementing controls to maintain that level. Risk analysis helps align security objectives with business objectives. Chapter 7, “Intrusion Detection and Security Baselines,” explains the options available when dealing with risk. Here, we deal with how to calculate risk and return on investment. Risk comes in a variety of forms. Risk analysis identifies risks, estimates the impact of potential threats, and identifies ways to reduce the risk without the cost of the prevention outweighing the risk.

The annual cost of prevention against threats is compared to the expected cost of loss to produce a cost/benefit comparison. To calculate costs and return on investment, you must first identify your assets, the threats to your network, your vulnerabilities, and what risks result. For example, a virus is a threat; the vulnerability would be not having antivirus software; and the resulting risk would be the effects of a virus infection. All risks have loss potential. Because security resources will always be limited in some manner, it is important to determine what resources are present that may need securing. Then, you need to determine the level of threat exposure that each resource creates and plan your network defenses accordingly.

Asset Identification

Before you can determine which resources are most in need of protection, it is important to properly document all available resources. A resource can refer to a physical item (such as a server or piece of networking equipment), a logical object (such as a website or financial report), or even a business procedure (such as a distribution strategy or marketing scheme). Sales demographics, trade secrets, customer data, and even payroll information could be considered sensitive resources within an organization. When evaluating assets, consider the following factors:

• The original cost

• The replacement cost

• Its worth to the competition

• Its value to the organization

• Maintenance costs

• The amount it generates in profit

After assets have been identified and valued, an appropriate dollar amount can be spent to help protect those assets from loss.

Risk and Threat Assessment

After assets have been identified, you must determine the assets’ order of importance and which assets pose significant security risks. During the process of risk assessment, it is necessary to review many areas, such as the following:

• Methods of access

• Authentication schemes

• Audit policies

• Hiring and release procedures

• Isolated services that may provide a single point of failure or avenue of compromise

• Data or services requiring special backup or automatic failover support

During a risk assessment, it is important to identify potential threats and document standard response policies for each. Threats may include the following:

• Direct access attempts

• Automated cracking agents

• Viral agents, including worms and Trojan horses

• Released or dissatisfied employees

• Denial-of-service (DoS) attacks or overloaded capacity on critical services

• Hardware or software failure, including facility-related issues such as power or plumbing failures

Threat assessment considers the likelihood that the threats you’ve identified will actually occur. To gauge the probability of an event occurring as accurately as possible, you can use a combination of estimation and historical data. Most risk analyses use a fiscal year to set a time limit of probability and to confine proposed expenditures, budget, and depreciation.

Vulnerabilities

After you have identified all sensitive assets and performed a detailed risk assessment, it is necessary to review potential vulnerabilities and take actions to protect each asset based on its relative worth and level of exposure. Evaluations should include an assessment of the relative risk to an organization’s operations, the ease of defense or recovery, and the relative popularity and complexity of the potential form of attack. Because of the constant discovery of new vulnerabilities, it is vital to include a review of newly discovered vulnerabilities as part of your standard operating procedures.

Calculating Risk

To calculate risk, use this formula:

Risk = Threat × Vulnerability

To help you understand this, let’s look at an example using DoS attacks. Firewall logs indicate that the organization was hit hard one time per month by a DoS attack in each of the past six months. We can use this historical data to estimate that it’s likely we will be hit 12 times per year. This information will help you calculate the single loss expectancy (SLE) and the annual loss expectancy (ALE).

SLE equals asset value multiplied by the threat exposure factor or probability. The formula looks like this:

Asset value × Probability = SLE

The exposure factor or probability is the percentage of loss that a realized threat could have on a certain asset. In the DoS example, let’s say that if a DoS were successful, 25 percent of business would be lost. The daily sales from the website are $100,000, so the SLE would be $25,000 (SLE = $100,000 × 25 percent). The possibility of certain threats is greater than that of others. Historical data presents the best method of estimating these possibilities.

After you calculate the SLE, you can calculate the ALE. This gives you the expected loss from a particular threat over a single year. It is done by calculating the product of the SLE and the annualized rate of occurrence (ARO):

SLE × ARO = ALE

The ARO is the estimated possibility of a specific threat taking place in a one-year time frame. When the probability that a DoS attack will occur is 50 percent, the ARO is 0.5. Going back to the example, if the SLE is estimated at $25,000 and the ARO is 0.5, the ALE is $12,500 ($25,000 × 0.5 = $12,500). Spending more than that on prevention might not be prudent because the cost would outweigh the risk.
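The same calculation can be expressed as a short Python sketch using the numbers from the example above.

# Worked SLE/ALE calculation using the figures from the DoS example.

def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = asset value x exposure factor (portion of value lost per event)."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, aro):
    """ALE = SLE x annualized rate of occurrence."""
    return sle * aro

asset_value = 100_000   # daily web sales
exposure    = 0.25      # 25 percent of business lost per successful DoS
aro         = 0.5       # estimated occurrences per year

sle = single_loss_expectancy(asset_value, exposure)
ale = annual_loss_expectancy(sle, aro)
print(f"SLE = ${sle:,.0f}")   # SLE = $25,000
print(f"ALE = ${ale:,.0f}")   # ALE = $12,500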

Other models for calculating risk include the cumulative loss expectancy (CLE) model and the Iowa risk model. The CLE model calculates risk for a single system, taking into account all the threats that are likely to affect that system over the next year, such as natural disasters, malicious code outbreaks, sabotage, and backup failure. The Iowa risk model determines risk based on criticality and vulnerability.

Calculating ROI

Return on investment is the ratio of money realized or unrealized on an investment relative to the amount of money invested. Because there are so many vulnerabilities to consider and so many different technologies available, calculating the ROI for security spending can prove difficult. The formulas present too many unknowns. Many organizations don’t know how many actual security incidents have occurred, nor have they tracked the costs associated with them. One method that may be helpful in this area is called reduced risk on investment (RROI). This method enables you to rank security investments based on the amount of risk they reduce. The reduced risk is calculated by multiplying the potential loss by the reduction in incident probability that the investment provides, and then dividing the result by the total expense:

RROI = Potential loss × (Probability without expense − Probability with expense) / Total expense

By using this formula, alternative security investments can be based on their projected business value.

Another approach is to look at security as loss prevention: spending on security is justified by the attacks it prevents. ROI is then calculated using the following formula:

ROI = Loss prevented − Cost of solution

If the result of this formula is a negative number, you spent more than the loss prevented.
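The following short Python sketch expresses both formulas; the dollar figures and probabilities are invented for illustration.

# Minimal sketch of the RROI and loss-prevention ROI formulas above.
# All figures are invented for illustration.

def rroi(potential_loss, prob_without, prob_with, total_expense):
    """Reduced risk on investment."""
    return potential_loss * (prob_without - prob_with) / total_expense

def roi_loss_prevention(loss_prevented, cost_of_solution):
    """Negative result means the solution cost more than the loss it prevented."""
    return loss_prevented - cost_of_solution

print(rroi(potential_loss=200_000, prob_without=0.30,
           prob_with=0.05, total_expense=25_000))      # 2.0
print(roi_loss_prevention(loss_prevented=40_000,
                          cost_of_solution=55_000))    # -15000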

Exam Prep Questions

1. Which of the following best describes the formula for calculating single loss expectancy?

A. Potential loss × (Probability without expense − Probability with expense) / Total expense

B. Calculates risk based on criticality and vulnerability

C. Asset value multiplied by the threat exposure factor or probability

D. The estimated possibility of a specific threat taking place in a one-year time frame

2. Which of the following is the process of identifying and reducing risk to a level that is comfortable and then implementing controls to maintain that level?

A. Return on investment

B. Risk

C. Risk analysis

D. Risk management

3. Which of the following are the best reasons for the use of virtualized environments? (Choose two correct answers.)

A. Reduced need for equipment

B. Reduced threat risk

C. Capability to isolate applications

D. Capability to store environments on USB devices

4. Your company is in the process of locking down CIFS and SMB file and print sharing. Which of the following ports do you have to secure? (Select all correct answers.)

A. 161

B. 139

C. 138

D. 162

5. Which of the following are recommended password account policies? (Select all correct answers.)

A. Make the password length at least eight characters and require the use of uppercase and lowercase letters, numbers, and special characters

B. Require users to change passwords every 60 to 90 days

C. Lock user accounts out after one to two failed logon attempts

D. Set the server to not allow users to use the same password over and over again

6. When evaluating assets, which of the following factors must be considered? (Choose three.)

A. The replacement cost

B. Its worth to the competition

C. Its value to the organization

D. Its salvage value

7. Which of the following are uses for proxy servers? (Choose all correct answers.)

A. Intrusion detection

B. Internet connectivity

C. Load balancing

D. Web content caching

8. Which of the following is the most common method used in an antivirus program?

A. Integrity checking

B. Scanning

C. Heuristics

D. Metrics

9. A peer-to-peer network or a workgroup where access is granted based on individual needs is an example of which type of access control?

A. Group-based access control

B. Mandatory access control

C. Role-based access control

D. User-based access control

10. Which of the following groups is the most appropriate for email distribution lists?

A. Only distribution groups.

B. Only security groups.

C. Neither one; you must use a mail application group.

D. Both security and distribution groups.

Answers to Exam Prep Questions

1. C. SLE equals asset value multiplied by the threat exposure factor or probability. Answer A is incorrect because it describes reduced risk on investment (RROI). Answer B is incorrect because it describes the Iowa risk model. Answer D is incorrect because it describes annualized rate of occurrence.

2. D. Risk management is the process of identifying and reducing risk to a level that is comfortable and then implementing controls to maintain that level. Answer A is incorrect because return on investment is the ratio of money realized or unrealized on an investment relative to the amount of money invested. Answer B is incorrect because risk is the possibility of loss or danger. Answer C is incorrect because risk analysis helps align security objectives with business objectives.

3. A, C. Virtual environments can be used to improve security by allowing unstable applications to be used in an isolated environment and by providing better disaster recovery solutions. Virtual environments are used for cost-cutting measures as well. One well-equipped server can host several virtual servers. This reduces the need for power and equipment. Forensic analysts often use virtual environments to examine environments that might contain malware, or as a method of viewing the environment in the same way as the criminal. Answer B is incorrect because virtualized environments, if compromised, can provide access to not only the network, but also to any virtualization infrastructure. This puts a lot of data at risk. Answer D is incorrect because the capability to store environments on USB devices puts data at risk.

4. B, C. SMB and CIFS use UDP/TCP ports 137, 138, 139, and 445. Answers A and D are incorrect because 161 and 162 are used by SNMP.

5. A, B, and D. Good password policies include making the password length at least 8 characters; requiring the use of uppercase and lowercase letters, numbers, and special characters; requiring users to change passwords every 60 to 90 days; and setting the server to not allow users to use the same password over and over again. Answer C is incorrect because locking user accounts out after one to two failed logon attempts will cause undue stress on the help desk.

6. A, B, and C. When evaluating assets, you must consider their replacement cost, their worth to the competition, and their value to the organization. Answer D is incorrect because an asset’s salvage value is not factored in.

7. B, C, and D. Proxy servers can be placed between the private network and the Internet for Internet connectivity or internally for Web content caching. If the organization is using the proxy server for both Internet connectivity and Web content caching, the proxy server should be placed between the internal network and the Internet, with access for users who are requesting the Web content. In some proxy server designs, the proxy server is placed in parallel with IP routers. This allows for network load balancing by forwarding of all HTTP and FTP traffic through the proxy server and all other IP traffic through the router. Answer A is incorrect because proxy servers are not used for intrusion detection.

8. B. The most common method used in an antivirus program is scanning. Answers A and C are incorrect because in the past antivirus engines used a heuristic engine for detecting virus structures or integrity checking as a method of file comparison. The issue with these methods is that they are susceptible to false positives and cannot identify new viruses until the database is updated. Answer D is incorrect because metrics are associated with network monitoring tools.

9. D. Within a user-based model, permissions are uniquely assigned to each account, as in a peer-to-peer network or a workgroup where access is granted based on individual needs. Answer A is incorrect because in group-based access control, permissions are assigned to groups and user accounts become members of those groups. Answers B and C are incorrect because mandatory and role-based access control do not grant access based on individually assigned permissions.

10. A. Distribution groups are assigned to a user list for applications or non-security-related functions. For example, a distribution group can be used by Microsoft Exchange to distribute mail. Answers B and D are incorrect because the most appropriate use of security groups is to assign rights and permissions to groups for resource access. Answer C is incorrect because you do not need to use a mail application group.

Additional Reading and Resources

1. Bragg, Roberta. CISSP Training Guide. Que, 2002.

2. Firewall architectures: http://www.invir.com/int-sec-firearc.html

3. Microsoft Server 2003 Security Guide: http://www.microsoft.com/downloads/details.aspx?FamilyID=8a2643c1-0685-4d89-b655-521ea6c7b4db&DisplayLang=en

4. National Institute of Standards and Technology (NIST) Firewall Guide and Policy Recommendations: http://csrc.nist.gov/publications/nistpubs/800-41/sp800-41.pdf

5. Odom, Wendell. CCENT/CCNA ICND1 Official Exam Certification Guide (CCENT Exam 640-822 and CCNA Exam 640-802), 2nd Edition. Cisco Press, 2007.

6. Odom, Wendell. CCNA ICND2 Official Exam Certification Guide (CCNA Exams 640-816 and 640-802), 2nd Edition. Cisco Press, 2007.

7. Security tools: http://www.securitymetrics.com/securitytools.adp

8. SANS InfoSec Reading Room - Physical Security: http://www.sans.org/reading_room/whitepapers/physcial/
