2.3. Protection and Prevention

In this section, we consider security mechanisms for protection against and prevention of security attacks. We consider firewalls and perimeter security in Section 2.3.1 and cryptographic protocols in Section 2.3.2. The interested reader is referred to Northcutt et al. [7] and Cheswick et al. [8] for more details on firewalls. A good reference that considers cryptography and cryptographic protocols is Stinson [9].

2.3.1. Firewalls and Perimeter Security

To block malicious packets from entering a network, it is common to employ firewalls. The term firewall originally referred to a thick brick wall constructed specifically to prevent fire from spreading from one building to another. Today, firewalls are the hardware, software, and policies that prevent security attacks from spreading into an organization’s (or individual’s) network or host. As discussed previously in Section 2.2, many kinds of attacks are carried out with maliciously crafted packets that arrive at the target network. If such packets can be identified and discarded, they no longer threaten the security of the network. This, in essence, is the idea behind firewalls. However, it is not trivial to identify such packets correctly and efficiently all the time. As shown in Figure 2.2, the firewall sits between the “inside” and the “outside.” The inside is usually what needs to be protected. The term firewall can mean many things today, from a simple packet filter to a complex intrusion prevention system that is capable of examining a series of packets and reconstructing sessions for comparison with known attack signatures.

Figure 2.2. Schematic of a firewall.


A packet filter is the simplest type of firewall. It filters incoming or outgoing packets based on rules created manually by the administrator of a network. Packet filters usually have a default “drop” policy: if a packet does not satisfy any of the rules that allow it into the inside, it is dropped. Each packet is considered independently, without reference to previous or future packets, making packet filters fast and capable of handling high data rates. The simpler the rules, the faster the filtering and the smaller the performance hit. Cisco’s standard access control lists (ACLs) filter packets based solely on source IP addresses. In this case, it is easy to filter packets with source IP addresses that are obviously spoofed or other packets from sources that are not expected to communicate with the inside. Examples are IP packets that arrive from the outside with nonroutable source IP addresses, loopback IP addresses, or IP addresses that belong to hosts on the inside. However, standard ACLs cannot block packets to specific hosts on the inside or packets that correspond to specific protocols. The extended ACL from Cisco allows a packet filter to look at source and destination IP addresses, TCP or UDP port numbers, and TCP flags, and to decide whether or not a packet should be allowed into the inside. Other firewall software (e.g., IPTables in Linux) and hardware have equivalent access control lists for filtering packets.
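
The first-match, default-drop logic of an extended-ACL-style packet filter can be sketched in a few lines of Python. The addresses, ports, and rules below are purely hypothetical and serve only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str  # "tcp" or "udp"
    dst_port: int

# Hypothetical extended-ACL-style rules: first match wins.
RULES = [
    # Anyone may reach the (made-up) web server on TCP port 80.
    lambda p: p.dst_ip == "192.0.2.10" and p.protocol == "tcp" and p.dst_port == 80,
    # Anyone may deliver mail to the (made-up) mail server on TCP port 25.
    lambda p: p.dst_ip == "192.0.2.20" and p.protocol == "tcp" and p.dst_port == 25,
]

# Source prefixes that should never appear on packets arriving from outside
# (nonroutable and loopback addresses).
BAD_SOURCE_PREFIXES = ("10.", "127.", "192.168.")

def filter_packet(p: Packet) -> bool:
    """Return True if the packet is allowed into the inside (default drop)."""
    if p.src_ip.startswith(BAD_SOURCE_PREFIXES):
        return False  # obviously spoofed source address
    return any(rule(p) for rule in RULES)

assert filter_packet(Packet("203.0.113.5", "192.0.2.10", "tcp", 80))      # matches rule 1
assert not filter_packet(Packet("127.0.0.1", "192.0.2.10", "tcp", 80))    # spoofed loopback source
assert not filter_packet(Packet("203.0.113.5", "192.0.2.30", "tcp", 22))  # no rule: default drop
```

Note that each packet is evaluated in isolation, which is precisely what keeps packet filters fast.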

The rules in a packet filter are considered in strict order, which creates the potential for configuration errors as the list of rules grows. One way of overcoming this problem is to use so-called dynamic packet filters or stateful firewalls, which build rules on the fly. The assumption is that hosts on the inside are to be trusted. When an inside host sends packets to open a connection with a host on the outside, a stateful firewall creates a rule on the fly that allows packets from the specific external host (and port number at that host) to the specific internal host (and port number at this host). The rule is deleted when the connection is terminated. This reduces the number of hard-coded rules and makes it difficult for Oscar to guess which packets may make it through the firewall.
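
A stateful firewall’s on-the-fly rules amount to a connection-tracking table. The following Python sketch, with hypothetical addresses and a deliberately simplified 4-tuple key, illustrates how return traffic is admitted only if it matches state created by an outbound connection:

```python
# Connection-tracking table: each entry is a dynamically created rule keyed
# by a (simplified) 4-tuple. All addresses below are hypothetical.
state = set()

def outbound_connection(int_ip, int_port, ext_ip, ext_port):
    """A trusted inside host opens a connection: install a return rule on the fly."""
    state.add((ext_ip, ext_port, int_ip, int_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    """An outside packet is admitted only if it matches tracked state."""
    return (src_ip, src_port, dst_ip, dst_port) in state

def connection_closed(int_ip, int_port, ext_ip, ext_port):
    """The connection terminates: the dynamic rule is deleted."""
    state.discard((ext_ip, ext_port, int_ip, int_port))

outbound_connection("192.168.1.5", 51000, "203.0.113.7", 80)
assert inbound_allowed("203.0.113.7", 80, "192.168.1.5", 51000)      # return traffic
assert not inbound_allowed("203.0.113.7", 80, "192.168.1.9", 51000)  # unsolicited packet
connection_closed("192.168.1.5", 51000, "203.0.113.7", 80)
assert not inbound_allowed("203.0.113.7", 80, "192.168.1.5", 51000)  # rule deleted
```

Because the rules exist only for the lifetime of a connection, Oscar cannot predict which packets will be admitted at any given moment.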

Packet filters can still be fooled through a variety of loopholes (e.g., by sending fragmented packets). To determine whether or not packets are legitimate, it is often necessary to look at the application payload, and sometimes even to reconstruct the application data. This is possible if proxy firewalls are used. Proxy firewalls consist of hardened hosts (usually dual-homed) that run reduced modules of certain applications. When an internal host makes a connection to the outside, it really makes a connection (say, TCP) with the proxy firewall, which in turn makes a connection to the external host. Thus, two connections exist. External hosts see only the proxy firewall; they are not even aware of the existence of other internal hosts. When packets are returned, they make their way up the protocol stack, where the application (with reduced features) reconstructs the data. If the data is legitimate, it is forwarded to the internal host. Moreover, Oscar can gain very little knowledge during reconnaissance because internal hosts are not visible to the outside world. However, proxy firewalls create performance bottlenecks. They also do not support a variety of applications, often frustrating legitimate network communications.

Architectural approaches can approximate the benefits of proxy firewalls while keeping performance levels reasonable. One common approach is to screen the inside from the outside using one or more packet filters. In Figure 2.3, for example, packet filter A allows packets (from most legitimate hosts on the outside) through interface p to reach either the web server or the mail server. As almost anyone can reach these servers, this region is called a demilitarized zone (DMZ). If packet filter A is also a router, it does not advertise the existence of the inside network to the outside world. Similarly, packet filter B allows packets from either the web server or the mail server to the inside through interface r. Thus, the inside network is screened from the outside.

Figure 2.3. Schematic of a screened subnet and demilitarized zone.


Note that packet filters can also be used to stop packets from the inside from going out (e.g., through interfaces s and q in Figure 2.3). This may be necessary if hosts on the inside have been compromised and are launching attacks, or hosts are trying to access services not allowed by corporate policy.

Nowadays, firewalls are more than simple packet filters. They can maintain state, do load balancing (if multiple firewalls are used), do some inspection of application payloads, detect attacks based on known signatures, maintain logs useful for forensics or analysis, and also act as endpoints for connectivity to mobile users who need to connect to the inside from the outside. For example, firewalls can now be the terminating points for virtual private network (VPN) connections using IPSec or SSL, which make use of cryptography to prevent outsiders from connecting to the inside or monitoring connections made by mobile employees. We discuss cryptographic protocols next.

2.3.2. Cryptographic Protocols

Security services such as confidentiality and integrity can be provided to communication protocols using cryptography. In this section, we provide a brief overview of the important topics in cryptography and cryptographic protocols. More details can be found in Stallings [6], Cheswick et al. [8], and Kaufmann [10].

Cryptographic protocols make use of cryptographic primitives to provide the required security services. A classification of such primitives is shown in Figure 2.4. Cryptology is the broad discipline that includes the science of designing ciphers (cryptography) and that of breaking ciphers (cryptanalysis). Data before encryption is called “plaintext,” and the result of encryption is called “ciphertext.” Ciphers, or encryption algorithms, can be classified into secret key and public key categories.

Figure 2.4. Classification of cryptographic primitives.


In the case of secret key encryption, two honest parties, say Alice and Bob, share a secret key k that is used with an encryption algorithm. Both encryption and decryption make use of the same key k, and both parties have knowledge of the key. Secret key algorithms can further be classified into block ciphers and stream ciphers. Block ciphers encrypt “blocks” of data (e.g., 64, 128, or 256 bits) at a time; each block is encrypted with the same key. Common block ciphers include the Advanced Encryption Standard (AES), Blowfish, and CAST. Stream ciphers use the key k to generate a key stream, which is XORed with the data stream to create the ciphertext. At the receiver, the same key stream is generated and XORed with the ciphertext to recover the data. Block ciphers can be used to create key streams through standard modes of operation [6, 9]. RC-4 is a common stream cipher that is not derived from a block cipher. For good security today, a key size of at least 128 bits is recommended for block and stream ciphers. It is common to assume that everyone, including Oscar, knows the encryption algorithm, but the key is secret and known only to the honest communicating parties, in this case Alice and Bob.
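
The XOR mechanics of a stream cipher can be illustrated with a toy keystream generator. The keystream construction below (hashing the key with a counter) is only a stand-in for illustration, not RC-4 or any standardized cipher:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream generator: hash the key with a running counter.
    Illustration only; real systems use RC-4, or a block cipher in a
    standard mode of operation, to produce the key stream."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same operation: XOR with the keystream."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"secret shared by Alice and Bob"   # hypothetical shared key
ciphertext = xor_stream(key, b"attack at dawn")
plaintext = xor_stream(key, ciphertext)   # the same keystream recovers the data
assert plaintext == b"attack at dawn"
```

The symmetry of XOR is why the receiver simply regenerates the same keystream to decrypt.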

Public key encryption is based on pairs of mathematically related values with the property that one part can be revealed while the other cannot be discovered even with knowledge of the first. For example, if two large prime numbers are randomly selected and multiplied, revealing the product does not enable others to guess or calculate the prime factors of the product. This property is used in RSA. The information that is revealed is called the “public key,” and the information kept secret is called the “private key.” To encrypt information, the public key is used; to decrypt it, the private key is used. Another mathematical technique used for public key encryption is based on discrete logarithms. Because of the mathematical nature of public key encryption, key sizes must typically be longer for good security (around 1,024 bits for RSA).
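
The factoring-based idea behind RSA can be demonstrated with deliberately tiny primes. Real deployments use moduli of roughly 1,024 bits or more, together with proper padding, both of which are omitted here:

```python
# Toy RSA with tiny primes (illustration only; wholly insecure at this size).
p, q = 61, 53
n = p * q                # 3233; the product may be revealed
phi = (p - 1) * (q - 1)  # kept secret; easy to compute only if p and q are known
e = 17                   # public exponent, chosen coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

# (e, n) is the public key; (d, n) is the private key.
m = 42                   # message, must be smaller than n
c = pow(m, e, n)         # encrypt with the public key
assert pow(c, d, n) == m # decrypt with the private key
```

Recovering d from (e, n) would require factoring n, which is what keeps the private key private at realistic sizes.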

Public key encryption is also computationally expensive. Consequently, it is common to use public key encryption for key establishment and digital signatures. Confidentiality and integrity of bulk data are achieved using secret key schemes. Although the public key of an honest party like Alice can be made public, its authenticity needs to be verified since Oscar can claim to be Alice and publish his key as hers. It is common to use digital certificates signed by one of a few trusted certification authorities to verify the authenticity of the public key (see below for more on digital signatures). This approach is used in modern web browsers for e-commerce applications.

We also include hash functions in the classification in Figure 2.4. They are not strictly encryption schemes. They map any sized data to a fixed-size digest. Given the digest, it is considered infeasible to obtain any data that maps to the digest if the size of the digest is at least 160 bits. Popular hash functions in use today are MD-5 and SHA.
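
The fixed-size-digest property is easy to demonstrate with SHA-256 (a member of the SHA family) from a standard library:

```python
import hashlib

# SHA-256 maps input of any size to a 256-bit digest.
d1 = hashlib.sha256(b"short message").hexdigest()
d2 = hashlib.sha256(b"a" * 1_000_000).hexdigest()
assert len(d1) == len(d2) == 64  # 64 hex characters = 256 bits

# A small change in the input produces a completely different digest.
assert hashlib.sha256(b"short message.").hexdigest() != d1
```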

Block ciphers and hash functions can be used to create message authentication codes (MACs) or message integrity checks (MICs). These are checksums on data created using block ciphers or hash functions with a shared secret key between the communicating parties. MACs or MICs provide message authentication and integrity. If Oscar were to fabricate a message or modify a legitimate message, the checksum would always fail, alerting the receiver of a problem with the received data. The Cipher Block Chaining MAC (CBC-MAC) that uses block ciphers and keyed-hash MAC (HMAC) that employs hash functions are popular standard implementations of MACs.
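
An HMAC computation and its verification can be sketched with a standard library implementation; the key and messages below are, of course, made up:

```python
import hashlib
import hmac

key = b"key shared by Alice and Bob"  # hypothetical shared secret
msg = b"transfer $100 to Bob"

# The sender computes a MAC (a checksum keyed with the shared secret).
tag = hmac.new(key, msg, hashlib.sha256).digest()

# The receiver recomputes the MAC with the same key; a match authenticates
# the message and confirms its integrity.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())

# A message fabricated or modified by Oscar fails the check, since he does
# not know the key.
forged = b"transfer $999 to Oscar"
assert not hmac.compare_digest(tag, hmac.new(key, forged, hashlib.sha256).digest())
```

`hmac.compare_digest` performs a constant-time comparison, avoiding timing side channels during verification.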

Digital signatures are like physical signatures. They attest some information and are bound to that information. Typically this involves encrypting the hash value of some information with the private key of a public key/private key pair. Suppose Alice generated some data and created a digital signature of the data. Anyone can verify the signature because decrypting the signature requires the public key, which is available to everyone. No one except Alice can generate the signature because she is the only one in possession of the private key. Recall that knowledge of the public key does not help Oscar or others deduce the private key.
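
The sign-with-private-key, verify-with-public-key pattern can be sketched by combining a hash function with toy RSA parameters (tiny primes and no padding, so this is an illustration only, not a secure signature scheme):

```python
import hashlib

# Toy RSA key pair (tiny primes, illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17               # public exponent
d = pow(e, -1, phi)  # private exponent (modular inverse, Python 3.8+)

def sign(data: bytes) -> int:
    """Only Alice, who holds the private key d, can compute this."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)

def verify(data: bytes, signature: int) -> bool:
    """Anyone can check the signature using the public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"Alice agrees to these terms"
s = sign(msg)
assert verify(msg, s)
# A modified message would fail verification (with overwhelming probability
# at realistic key sizes; here the hash is truncated modulo a tiny n).
```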

The cryptographic primitives discussed above are used in cryptographic protocols, which are designed with specific security objectives in mind. Cryptographic protocols are notoriously hard to design, because subtle pitfalls are hard to detect [10]. A good example of a cryptographic protocol that fails to meet most of its security objectives is the Wired Equivalent Privacy (WEP) protocol used in legacy IEEE 802.11 wireless local area networks [11]. Moreover, cryptographic primitives make use of keys shared between communicating parties. Establishing secret keys between legitimate parties interested in communicating, such that Oscar gains no knowledge of the keys, is not trivial and itself requires cryptographic protocols. Key establishment is usually based on master keys established with trusted third parties or on public key cryptography.

Most well-designed cryptographic protocols have three phases. In the first phase, the communicating entities identify or authenticate themselves to one another. In some cases the entity authentication is unilateral (i.e., Alice authenticates herself to Bob, but not vice versa). Entity authentication makes use of passwords, PINs, passphrases, biometrics, security tokens, and the like. Challenge-response protocols, which do not require an entity to reveal the password but only to demonstrate knowledge of it, are commonly used for entity authentication. In the second phase, or as part of the first phase, the communicating entities also establish keys for the security services to be provided next. Keys can be established in two ways: key transport or distribution, where one party generates the keys (or a master key) and transports them securely to the other party, or key agreement, where both parties exchange information used to securely create the same key at both ends. It is common for both parties to exchange random numbers, sequence numbers, or time stamps (called nonces, or numbers used once) that are used as input to key generation. In the third phase, the established keys are used to provide confidentiality (through encryption with a block or stream cipher) and integrity (through MACs or MICs). We briefly describe some examples in the following sections.
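
A minimal challenge-response exchange can be sketched using an HMAC keyed with the shared password; the verifier learns that the prover knows the password without the password ever crossing the wire. The password below is a made-up example:

```python
import hashlib
import hmac
import os

password = b"correct horse battery staple"  # shared in advance; never transmitted

# Phase 1 sketch: Bob (the verifier) sends Alice a fresh random challenge (a nonce).
challenge = os.urandom(16)

# Alice demonstrates knowledge of the password without revealing it.
response = hmac.new(password, challenge, hashlib.sha256).digest()

# Bob recomputes the expected response with his own copy of the password.
expected = hmac.new(password, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)  # Alice is authenticated
```

Because the challenge is fresh each time, Oscar cannot simply replay an old response.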

Kerberos

Kerberos is used for authenticating users when they access services from workstations, typically on a local area network. An authentication server knows the password of every user and shares a key with a ticket-granting server. When a user logs on to a workstation, the workstation contacts the authentication server. The authentication server issues a ticket to the user and also sends a key that the user will share with the ticket-granting server. This key is encrypted with the user’s password. The workstation will not be able to retrieve the key if the user is not legitimate; thus, recovery of the key to be shared with the ticket-granting server indirectly authenticates the user. Note that in this phase, a key has also been transported to the user. Of course, this assumes that a password has been manually shared between the user and the authentication server beforehand. The ticket itself is encrypted with a key shared between the authentication server and the ticket-granting server, and it includes, among other things, the key that has been transported to the user. When the user wishes to access a service, the workstation presents to the ticket-granting server the ticket and a message authentication code created using the key that was initially received from the authentication server. This verifies the user’s legitimacy to the ticket-granting server, which then issues a key and a ticket to the workstation for use with the requested service. A similar authentication mechanism is used with the server providing the service. Kerberos is more complicated than what has been described here; more details are available in Stallings [6] and Kaufmann et al. [10].
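
The key-transport step described above can be sketched as follows. The toy XOR “cipher” and the made-up session key are illustrations only; real Kerberos uses standardized symmetric ciphers and a much richer message format:

```python
import hashlib

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy cipher: XOR data with a hash of the key (illustration only)."""
    return bytes(a ^ b for a, b in zip(data, hashlib.sha256(key).digest()))

password = b"password shared with the AS"  # known to the user and the authentication server
session_key = b"key-for-the-TGS"           # key the user will share with the ticket-granting server

# The authentication server sends the session key encrypted under the user's password.
blob = xor_encrypt(password, session_key)

# Only a workstation where the legitimate user typed the password can recover the
# session key, which is what indirectly authenticates the user.
assert xor_encrypt(password, blob) == session_key
```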

IPSec

IPSec encrypts all IP traffic between two hosts, or two networks, or combinations of hosts with possibly different terminating points for different security services. Keys may be manually established or a very complex protocol called Internet Key Exchange (IKE) can be used for authenticating entities to one another and establishing keys. Keys are established as part of a unidirectional “security association” that specifies the destination IP address, keys, encryption algorithms, and “protocol” to be used. “Protocol” here corresponds to one of two specific security services provided by IPSec: Authentication Header (AH) and Encapsulated Security Payload (ESP). In AH, a MAC is created on the entire IP packet minus the fields in the IP header that change in transit. This enables the receiver to detect spoofed or modified IP packets. However, the payload is in plaintext and visible to anyone who may be capable of capturing the IP packet. ESP provides confidentiality and integrity to the payload of the IP packet but not the header. Use of the two protocols in the above manner is called “transport mode.” It is also possible to use a “tunnel mode” where the original IP packet is tunneled in another IP packet. This makes the original IP packet the payload, thereby protecting it completely.

SSL

The secure sockets layer (the latest version is called transport layer security, or TLS) is used in web browsers to secure data transfer, especially for e-commerce applications, banking, and other confidential transactions. At a high level, the browser is not required to be authenticated by the server (although this is possible and optional in SSL). The user employing the web browser is authenticated using passwords or other techniques proprietary to the organization operating the server. The server, however, is authenticated by the browser through its digital certificate. This provides the user some assurance that the transaction is taking place with a legitimate bank or e-commerce site. Note that the use of SSL by itself is no assurance of the authenticity of the server, since any site or server can use SSL; it is the information contained in the digital certificate that authenticates the server. The digital certificate contains the public key of the server, signed by a certification authority. The browser creates a random secret, encrypts it with the server’s public key, and sends it to the server. This random secret, along with previously exchanged nonces, is used to generate keys (at both the server and the browser) that are used for encryption with block or stream ciphers (RC-4 is commonly used) and for integrity with message authentication codes.
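
The key-generation step at the end of the handshake can be sketched as follows; the single-hash derivation below is a simplification of the actual TLS pseudorandom function:

```python
import hashlib
import os

# Values exchanged during the handshake (hypothetical). The nonces travel in
# the clear; the random secret travels encrypted under the server's public key.
client_nonce = os.urandom(32)
server_nonce = os.urandom(32)
premaster = os.urandom(48)  # the browser's random secret

def derive_keys(secret: bytes, n1: bytes, n2: bytes) -> bytes:
    """Simplified key derivation: hash the secret together with both nonces.
    (The real TLS key-derivation function is considerably more elaborate.)"""
    return hashlib.sha256(secret + n1 + n2).digest()

# Both ends hold the same inputs, so both derive identical key material,
# which is then used for encryption and message authentication codes.
browser_keys = derive_keys(premaster, client_nonce, server_nonce)
server_keys = derive_keys(premaster, client_nonce, server_nonce)
assert browser_keys == server_keys
```

An eavesdropper sees the nonces but not the random secret, so it cannot derive the keys.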
