Chapter 10

Securing Network Traffic

Infiltration is a very real problem for network administrators, one that can lead to confidential data being leaked outside of your controlled environment. Every day, new attacks are developed that try to breach a network’s security perimeter. Building a secure network requires that a number of key software and hardware components be implemented and configured correctly. But securing a network is not just about acquiring the right network hardware to block unwanted traffic. More important is understanding how a network works, how Internet traffic is managed, how information flows within that network, and which of the services that control that traffic need to be secured. One cannot fully secure what one does not understand.

Once these crucial elements have been explored, the methods we’ll go on to discuss for protecting data packets will make sense. In this chapter, we will explore the essential concepts of network structures. Within those concepts, we will then discuss the steps you can take to harden your network against security breaches and unwanted network traffic.

Understanding TCP/IP

The Internet runs on a suite of communication protocols commonly known as TCP/IP. This stands for Transmission Control Protocol/Internet Protocol, which were the first two protocols in the suite to be defined. Over the years, the suite has expanded to include other protocols, such as the User Datagram Protocol (UDP), a connectionless transport protocol commonly used for streaming media, and the Domain Name System (DNS), the protocol used to map names to IP addresses and one of the most heavily used protocols on the Internet. In order to understand network traffic, it’s important to understand what TCP/IP is and how it works. It is the suite of protocols upon which the majority of modern networks, including the Internet, are based. It is also one of the most common vectors exploited by network-based attacks.

This family of protocols is commonly interpreted as a set of layers, each comprising a different portion of the complicated task of moving data between systems. Each layer presents its own security problems, and effective security must address each layer independently.

Note  There are several layer models used to explain IP traffic. For the purposes of this condensed discussion of IP traffic, we will stick to the four-layer model laid out in RFC 1122.

The path that data takes over the TCP/IP stack begins and ends at the user-level application layer. Here, “applications” doesn’t refer to a user-level program, such as Mail.app, but refers instead to the higher-level protocols that make the network useful: HTTP for serving web pages, POP and IMAP for receiving mail, and SMTP for sending mail. Securing the application layer can consist of limiting the applications that a user has access to, as discussed in Chapter 3. It can also entail using application-level encryption—everything from manually encrypting sensitive data using PGP (explained in further detail in Chapter 9) to automatically encrypting data using a secure protocol, such as SSH, which is discussed further in Chapter 15.

Application data is then presented to the transport layer, where a protocol, typically TCP, establishes a connection between the source and destination computers, providing reliable delivery of a stream of data from one computer to the other. In order to achieve this, TCP encapsulates the data into packets, manageable chunks of data with a source address and a destination address. In order to make sure the packets get to the correct application at the other end, TCP uses the concept of a port. This is a virtual construct that acts as an endpoint for the communication between the two computers. TCP ports are identified by numbers, and each service will usually “listen” on the ports assigned to it.

Note  This isn’t strictly true, as a server will start a listener on any port you configure it to use, provided another application isn’t already listening on that port. Some people recommend running services such as SSH on nonstandard ports as an added security precaution. You can see which port each service uses in the /etc/services file.
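For example, to see which port the SSH service is assigned, you can search /etc/services directly (output trimmed to the relevant lines):

grep -w ssh /etc/services
ssh              22/udp     # SSH Remote Login Protocol
ssh              22/tcp     # SSH Remote Login Protocol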

One of the most important steps in securing any network is limiting incoming traffic to only those ports that are necessary. For example, if a machine is not serving web pages, it should not accept traffic on port 80, the default port for HTTP. Unwanted software can bind itself to commonly used ports, giving malicious activity an air of legitimacy, so even ports you are not explicitly using should be blocked if the traffic on them is not required.
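Before deciding what to block, it helps to know what is actually listening. One quick way to audit open TCP ports on a Mac is with lsof (the exact list will vary from machine to machine):

sudo lsof -nP -iTCP -sTCP:LISTEN

Any listening service you don’t recognize, or don’t need, is a candidate for being disabled or blocked at the firewall.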

Moving packets from one address to another happens at the network layer, and is generally handled by IP. Packets move from machine to machine via a series of intermediate steps, commonly referred to as hops. If we use the analogy of commercial shipping, this resembles packing up items into boxes and attaching shipping labels to each box. The labels have a shipping address that includes a street address (IP address) and a name (port number). There’s also a return address with the same information. Once the shipping company (the network layer) delivers the packages to the appropriate building, it’s the responsibility of the shipping and receiving department (the transport layer) to ensure that all packages have arrived and are accounted for, and are delivered to the appropriate resident. That resident (the application layer) can finally assemble the contents of the packages.

Note  Packets are explained in further detail later in this chapter.

An example of security at the network layer is Network Address Translation (NAT), which presents a single IP address to the outside world, while maintaining a separate internal addressing scheme for the local network. Although it doesn’t necessarily secure your network from outside attacks, the less information an outside attacker knows about your internal network, the better. This concept of “security through obscurity” increases the difficulty of exploiting vulnerabilities. Having a single incoming access point rather than a large number of systems connected directly to the Internet can make it easier to deal with certain risks, such as Denial of Service (DoS) attacks. However, this would not help protect your systems from other hosts on your internal network.

The link or physical layer comprises the physical implementation of the network. For example, a wired Ethernet network consists of a Network Interface Card (NIC) in each host connected to the network, and the cabling and switches that connect them. Another example is a wireless AirPort network, accessed with AirPort cards in laptops and hosted by AirPort base stations. The fiber-optic links, satellite signals, and DSL modems of the various Internet service providers are also part of the physical layer.

The bigger your network, the more vulnerable its physical layer becomes. For a home user, physical security is as simple as using WPA2 encryption and a strong password on an AirPort network, as discussed in Chapter 12. For a large office, a much larger number of switches and routers needs to be secured. It is also important to look out for and stop unauthorized access points, spoofed MAC addresses, and Denial of Service attacks that may be launched, even unwittingly, by users.

Each layer has its own part to play and is generally ignorant of the implementation details of the other layers, which allows the TCP/IP stack to be rather scalable. The post office doesn’t tape up the package, and it isn’t concerned with what is done with the contents of the package once the recipient receives it. All it cares about is moving the package from one address to another. Similarly, when you pack your boxes, you neither know nor care whether they will be put in the back of a truck and driven across the country or packed with other items into a large container and flown across the country on a cargo jet. All you are concerned with is that they get there.

However, as a security expert, you can’t afford the luxury of this ignorance. You should be aware of the layers you can control, and you should mitigate the risks in those you can’t.

Now that we’ve run through a quick synopsis of what network traffic is, we’ll discuss some of the various network topologies, management techniques for that traffic, and ways to safeguard network traffic from possible attacks.

Types of Networks

To some degree, there are about as many types of networks as there are network administrators. But they are all variations on one of two network architecture types: peer-to-peer networks and client-server networks.

Peer-to-Peer

A peer-to-peer (P2P) computer network is one that relies primarily on the computing power and bandwidth of the participants in the network to facilitate the interactivity on the network, rather than concentrating it in a centralized set of network servers and routers. (See Figure 10-1 for a graphical representation of a P2P network.) P2P networks are typically used for connecting nodes via ad-hoc connections. Such networks are useful for many purposes: assembling marketing materials, conducting research, and acquiring digital media assets (probably the most common use).

9781484217115_Fig10-01.jpg

Figure 10-1. Peer-to-peer networks

A wide variety of peer-to-peer applications are available for use, and each has its own specific feature set that makes it popular. BitTorrent sites and other peer-to-peer networks allow you to publish music, documents, and other media to the Internet and to access media published by others. However, peer-to-peer networking applications can use a considerable amount of bandwidth when they are not configured properly. Multiple computers running peer-to-peer applications can flood any network, from DSL to cable modems and even fiber. You will also need to configure them correctly to make sure you are not sharing private information, such as your address book or financial data.

Considerations When Configuring Peer-to-Peer Networks

When configuring a peer-to-peer networking application, you will usually want to share files on your computer as well as download files from other computers. If you do not share files to the P2P network, then your download bandwidth can be automatically limited by the application, and some computers will not even allow you to download files from them. Sharing is an essential part of peer-to-peer networking, so you’ll probably devote some bandwidth to others downloading your material. However, you should limit the bandwidth these applications use, because heavy P2P traffic can seriously affect other processes on your computer that rely on the Internet.

Each application comes with the ability to limit incoming access in some way. One way to limit the bandwidth is by limiting the number of incoming connections that are allowed to access your data. Each program does it a bit differently. Look through the settings for those that allow you to configure the number of concurrent incoming and outgoing connections.

Another way to limit incoming connections is by throttling bandwidth. Consider that someone accessing your computer may be on a cable modem or fiber-optic connection, with 10Mbps or more of bandwidth available to access your files. If you are running only a DSL connection and they have FiOS, let’s say, their machine could saturate your connection while your computer tries to keep up, slowing your Internet speed to a crawl. You can limit incoming connections to make sure you always have plenty of speed available for browsing the Internet. When configuring the settings of a P2P application, look for a section that allows you to limit maximum upload and download speeds. By limiting concurrent connections, you help ensure that your network does not become flooded with P2P traffic (which can amount to a denial of service on your entire network if you’re not careful).

Another concern with peer-to-peer applications is limiting access to certain files. On P2P networks, users often share their entire Documents folder, exposing private data, such as their mail database and financial information, to the world. When installing a peer-to-peer application, make sure you know which folder is being shared and that the contents of that folder are limited to data you want accessible from outside your environment. For example, when using a file-sharing utility like LimeWire, you will be asked to choose a folder for shared data. Create a new folder in your home directory to share data from, and only allow LimeWire to share from that folder. You will also be asked to select which file types you would like to share with LimeWire. You should allow it to share only those types that you actually need to share.

One administrative concern with peer-to-peer file sharing is its potential to be used for illegal sharing of copyrighted material. Although preventing this type of traffic by blocking network ports is possible, it can sometimes be a moving target because some of these P2P protocols use random ports dynamically. The Mac OS X firewall (covered in greater detail in Chapter 11) can block P2P traffic by preventing traffic generated by specific applications. Blocking P2P traffic using the firewall requires diligence. There are a number of Gnutella clients for Mac OS X, and if you block all but one, you’ve left a window open. Some Internet appliances and filtering packages can be configured to tag traffic that appears to be P2P, tracking the session based on these tags, and thus overcoming any reliance on blocking traffic on any specific port.

Client-Server Networks

Over the years, as P2P networks grew, they became unwieldy, making it more difficult to keep tabs on the computers that were linked together. This led to the development of client-server networks. Client-server networks are not ad hoc. Services on client-server networks are statically provisioned and centrally managed. In this model, much of the management of the network—assigning IP addresses, warehousing data, and managing bandwidth—happens at the server level, and not on the individual workstations. Because of this, client-server networks quickly became the primary weapon in combating unwieldy networks (see Figure 10-2 for a graphical representation of a client-server network).

9781484217115_Fig10-02.jpg

Figure 10-2. An example of a client-server network

Understanding Routing

As data moves between networks, you need to tell it where to go. Moving data through networks is called routing. The following sections show how to route data packets and how to secure the routing techniques used to move that data along. First, we will explore what packets are and then examine the various types of devices that packets encounter as they traverse the Internet. This includes gateways, routers, and firewalls.

Packets

To understand how routing data works, we first need to explore what a packet is. A packet is a general term for a bundle of data, organized in a predetermined way for transmission over computer networks. IP packets consist of two parts, the header and the data. The header marks the beginning of the packet and contains information, such as the size of the payload and the source and destination address. The data is the information being carried by this particular packet.

Note  Packets are sometimes referred to as datagrams. The terms are not interchangeable, however. A datagram is a type of packet, but not all packets are datagrams.

Different protocols use different conventions for distinguishing between different sections of a packet, and for formatting the data. The Ethernet protocol establishes the start of the header and the other data elements by their relative location to the start of the packet. For the purposes of understanding other technologies discussed throughout this chapter, just keep in mind that there are different ways to form a packet based on the protocol that is being used.

A good analogy when thinking about packets is to treat data transmission like moving into a new house. When moving our things, we tend to be efficient. Instead of loading one piece of furniture on to the truck, driving to the new house, unloading it, and then driving back for another piece, we move multiple pieces at a time. We also don’t cram everything into one giant box. We load our stuff from one room and put it in a box (or boxes) and label it before moving on to the next room. The header is the container (or box) for data (our stuff). The application packing up the data writes the name of the data in the header, much as we would write where the stuff in the box would go on a label on the box. This allows the network router (or movers) to know which room each box is destined for and typically which room the stuff came from. The router (or moving truck) will create a list of what was transmitted (moved). Most transfers of data will move more than one packet (box), breaking files into data packets (boxes) to move them more efficiently.

Gateways

A gateway is a device that connects two physical or logical networks. Some gateways mediate between networks that use different base protocols, and some relay traffic between two networks using the same protocol. All gateways should have a minimum of two addresses, one on each network to which they’re connected.

Routers

A gateway that forwards IP traffic between two networks is referred to as a router. Routers forward packets from one network to another. They use routing tables to help guide those packets to their destination. A route is the path that is taken by data traveling from one system or network to another. Routers maintain a table in which they cache the paths that packets take to reach their destinations, which makes communication between devices much quicker than it would be otherwise. The address of each device that the data touches on the path to its destination is a hop. Each entry in a routing table specifies the next hop (or several hops), resulting in fewer lookups and improved performance. Every device along the path, including the systems that initiate and terminate the connection, maintains its own routing table. These routing tables need to be consistent, or routing loops can develop, which can cause a number of problems. If the path from your system to another forms a loop, your system will be unable to contact the other system. Your system might also cause an inadvertent denial-of-service attack on other systems by resending packets along the looped route, taking bandwidth away from legitimate traffic.
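You can inspect your own Mac’s routing table, including the default gateway it will use as the first hop for nonlocal traffic, with netstat or route (the addresses shown will differ on your network):

netstat -rn
route -n get default

In the netstat output, the entry labeled default is the route used for any destination that doesn’t match a more specific entry.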

One way to view hops is by using the traceroute command in Mac OS X, which will show each hop between your computer and a remote device. The traceroute command is followed by the remote hostname. For example, a traceroute for www.apple.com recently resulted in the following output:

traceroute to www.apple.com.akadns.net (17.251.200.32), 64 hops max, 52 byte packets
 1  192.168.1.1 (192.168.1.1)  3.501 ms  2.816 ms  2.659 ms
 2  10.67.152.1 (10.67.152.1)  4.271 ms  6.450 ms  6.128 ms
 3  10.1.176.1 (10.1.176.1)  4.670 ms  5.834 ms  5.067 ms
 4  147.225.49.89 (147.225.49.89)  4.686 ms  5.809 ms  5.337 ms
 5  152.161.241.70 (152.161.241.70)  10.346 ms  17.047 ms  10.958 ms
 6  72-254-0-1.client.stsn.net (72.254.0.1)  11.563 ms  12.283 ms  15.622 ms
 7  206.112.96.178 (206.112.96.178)  11.334 ms  10.764 ms  13.055 ms
 8  63.66.208.221 (63.66.208.221)  12.194 ms  14.915 ms  15.028 ms
 9  sc0.ar1.sjc5.web.uu.net (63.66.208.21)  32.809 ms  12.049 ms  10.935 ms
10  0.so-3-0-0.xl2.sjc5.alter.net (152.63.49.58)  10.811 ms  10.897 ms  10.336 ms
11  150.ATM4-0.XR1.SJC2.ALTER.NET (152.63.48.2)  18.823 ms  13.592 ms  14.346 ms
12  0.so-7-0-0.br1.sjc7.alter.net (152.63.48.253)  17.073 ms  16.109 ms  21.148 ms
13  oc192-7-1-0.edge6.sanjose1.level3.net (4.68.63.141)  12.819 ms  13.318 ms 16.430 ms
14  vlan79.csw2.sanjose1.level3.net (4.68.18.126)  15.149 ms
    vlan69.csw1.sanjose1.level3.net (4.68.18.62)  15.668 ms
15  ae-81-81.ebr1.level3.net (4.69.134.201)  15.655 ms ae-61-61 13.229 ms
16  ae-4-4.car2.level3.net (4.69.132.157)  210.512 ms  182.308 ms  41.987 ms
17  ae-11-11.car1.level3.net (4.69.132.149)  15.889 ms  31.446 ms  16.157 ms
18  apple-compu.car1.level3.net (64.158.148.6)  18.413 ms !X *  21.754 ms !X

Firewalls

A firewall is a device or software that is designed to inspect traffic and permit, deny, or proxy it. A firewall can be a dedicated appliance or software running on a host operating system. Firewalls function in a networked environment to prevent specified types of communication, filtering the traffic you want to be able to receive from the traffic you do not want to receive. Mac OS X has a built-in software firewall that you can use to limit incoming traffic. This will allow you to control traffic in a way that keeps attacks at a minimum. In Chapter 11 we discuss the software firewall in more depth.

Many firewalls will help reduce the likelihood of Denial of Service attacks against one of your computers. However, some firewalls are susceptible to these attacks themselves, which can leave your environment unable to do business. To help with this, most firewalls support a fail-over configuration, in which a second firewall automatically becomes the active firewall if the main firewall goes down.

Port Management

With the proliferation of malware and spyware, it has become more common to restrict incoming and outgoing access on commonly used (and abused) ports, such as port 25. For example, if you don’t need mail services in your environment (perhaps because e-mail is hosted elsewhere), then it is likely that you will want to prevent outgoing SMTP traffic from passing through your router. If you’re not hosting mail internally, you will also want to make sure that all inbound mail-related traffic (SMTP, as well as POP and IMAP) is being denied as well.

As discussed in previous chapters, most savvy network administrators will also restrict incoming access to their networks to all but a select number of ports, and for good reason. Many older protocols, such as FTP, are inherently insecure, or there are weak implementations of these protocols that should not be accessible from the outside. Restricting access is the primary job of most firewalls and is often called access control. When looking into configuring the access controls on your firewall, keep in mind that every open port is a security risk, and each one needs to be treated as such. Allow incoming access only for services that are required.
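As a rough illustration, the following is a minimal pf ruleset of the sort you might load with pfctl on a Mac or BSD-based gateway. It assumes en0 is the Internet-facing interface and that the host only needs to accept web traffic; treat the interface name and ports as placeholders and adapt them to the services you actually provide:

# example.rules -- illustrative ruleset only
block in on en0 all
pass in on en0 proto tcp to any port { 80 443 } keep state
block out quick on en0 proto tcp to any port 25
pass out on en0 all keep state

You can check the syntax without loading the rules using sudo pfctl -nf example.rules, then load them with sudo pfctl -f example.rules and enable pf with sudo pfctl -e.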

Properly securing your backbone and perimeter will greatly reduce the likelihood of a successful attack. For example, many root kits will attempt to establish an outgoing connection over a certain port to an attacker’s computer. If this connection cannot be established, the root kit is less likely to cause harm in your environment. Therefore, it is important to restrict outgoing as well as incoming access.

Note  We discuss root kits in more detail in Chapter 8.

Tip  Keep in mind that port management is an ongoing task, and an administrator’s network management time should be allocated accordingly to this vital aspect of network security. Users on a network will frequently ask for ports to be opened that are not standard for many environments. This type of request is common across many networks, and each request should be considered very carefully.

DMZs and Subnets

A demilitarized zone (DMZ) is a perimeter network, or a network area that sits outside an organization’s internal network. A DMZ is used to hold public-facing servers that need to be accessible from the Internet and are therefore more likely to face attacks. The purpose of this design is to mitigate the damage should one of these hosts be compromised. Important or sensitive information should never be kept in a DMZ. On consumer-grade routers, a DMZ typically refers to a single address to which all unsolicited inbound traffic is forwarded. In home environments, the DMZ is often configured incorrectly: all traffic is simply forwarded to a specific address, rather than researching which ports need to be accessible for each service and forwarding only that traffic. This creates a big security threat to the computer or network device that has all the traffic forwarded to it.

Whether you choose to use a DMZ depends not on the size of your company but on whether you are using protocols that you think might easily be compromised. For example, FTP is not a secure protocol. Relegating the use of FTP in your environment to a system that lives outside your local network would prevent standard FTP attacks from affecting the entire network infrastructure.

Some administrators might choose to use a second subnet instead of a DMZ to keep certain types of traffic separate from the primary network. Subnetting an IP network allows a single large network to be broken down into what appears (logically) to be several smaller ones. Devices on the same subnet have the same subnet mask, a number that determines which part of an address signifies the network range and which signifies the hosts in that range.
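For example, a common /24 network breaks down like this:

Network:       192.168.10.0/24 (subnet mask 255.255.255.0)
Network bits:  the first 24 bits (192.168.10) identify the network
Host range:    192.168.10.1 through 192.168.10.254
Broadcast:     192.168.10.255

Borrowing one host bit to create two /25 subnets (mask 255.255.255.128) yields two ranges, 192.168.10.0 through .127 and 192.168.10.128 through .255, each with 126 usable host addresses.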

Rather than allowing all your users to see one another, you could put them on separate subnets and not allow UDP traffic to pass between the subnets implemented in your environment. A network can even be split into many small subnets. Keep in mind, however, that the more complex the subnetting gets, the more difficult it becomes to troubleshoot problems on the network. For example, when a wireless network is introduced into an environment and placed on its own subnet, wireless users may have difficulty automatically finding printers on the main network, because service-discovery (Bonjour) traffic does not cross subnet boundaries by default. The printers will need to be installed manually using their IP addresses.

Spoofing

When access controls are configured based on IP addresses instead of network security policies, access attacks can occur. Spoofing, or the act of masquerading on a network with a valid address that was not legitimately assigned, is one of those access attacks and is a common way for attackers to establish access. To spoof an IP or MAC address, an attacker need only discover the address of a host they know can access the network and then change their own address to match it.

Let’s take a command-line look at how to change your MAC address. First run an ifconfig command to get your current MAC address. Then use the lladdr option of ifconfig to change your MAC address slightly:

cedge:/Users/cedge root# ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 00:17:f2:2a:66:12
cedge:/Users/cedge root# sudo ifconfig en0 lladdr 00:17:f2:2a:66:21
cedge:/Users/cedge root# ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 00:17:f2:2a:66:21

If any application was blocking traffic from your machine based on its MAC address, that traffic will now be allowed. Once a machine on a network changes its MAC address, other machines on the segment will notice the change and log a line similar to the following:

kernel: arp: 192.168.55.108 moved from 00:17:f2:2a:66:12 to 00:17:f2:2a:66:21 on eth0

Armed with this information, you can set up a scanner on your logs to be notified when this line appears and then investigate every change of MAC address. Another way to respond to this type of attack is to redirect the attacker’s access to sentry machines that are set up to monitor these kinds of spoofing attacks. This is a deceptive active response that fools attackers into thinking their attacks are succeeding, allowing an administrator to monitor what the attacker is trying to do. If you are running Snort (discussed further in Chapter 17), then your system should notice the MAC spoof and disable communications from that host automatically, thus defending you against an attack from a spoofed IP address.
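As a simple starting point, you could watch the system log for these messages on a machine that still writes kernel messages to /var/log/system.log (a rough sketch; depending on the OS version, your logging configuration and log location may differ):

tail -f /var/log/system.log | grep "moved from"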

Stateful Packet Inspection

Using stateful packet inspection (SPI), a firewall appliance holds the significant attributes of each connection in memory. These attributes, collectively known as the state of the connection, include such details as the IP addresses and ports involved in the connection and the sequence of packets traversing it. The most CPU-intensive checking is performed when the connection starts. All packets after that (for that session) are processed rapidly because it is simple and fast to determine whether they belong to an existing, prescreened session. Once the session has ended, its entry in the state table is discarded.

Most modern firewalls, including those in some Linksys and Netgear routers found at your local consumer electronics store, have basic SPI features, as does the Mac OS X software firewall. Consumer-grade appliances have a limited amount of memory and cannot inspect as many packets as rapidly or as closely as a more advanced device from Check Point, SonicWall, or Cisco. Typically, SPI on these consumer firewalls will check only the source of the packet against the source defined in the header.

Deep packet inspection (DPI) is a subclass of SPI that examines the data portion of a packet and searches for protocol noncompliance, or some predefined pattern, in order to decide whether the packet is allowed to pass. This is in contrast to the simple packet inspection found in stateless firewalls, in which only the header portion of a packet is checked. DPI classifies traffic based on a signature database (as does SPI) and will allow you to redirect, mark, block, rate-limit, and of course report based on the classification.

Many DPI devices, rather than simply relying on signature-based detection, can also identify patterns of potentially malicious traffic in the flow of traffic. This allows devices to detect newer attacks rather than react to predefined attacks, providing for a more secure network. If your environment has the budget to acquire a firewall that performs deep packet inspection, you should strongly consider adding one to your network. For the security it provides, it is well worth the investment.

Data Packet Encryption

When two computers on different networks are communicating, they are often sending packets across multiple routers, allowing traffic to be susceptible to a variety of security holes at each stop along the way. Even with good inspection on a firewall, an attacker can still perpetrate a man-in-the-middle attack, an attack in which someone spoofs a trusted host while sitting between your server and a user accessing your server. A man-in-the-middle attack is designed to intercept all the traffic between two points, either to eavesdrop or to insert malicious traffic. To keep prying eyes off your data, it is important to implement encryption techniques on your communications, rendering the data unreadable to the interceptor. If your data are passing from your home to your office, for example, you would implement a VPN. If you are taking customer data over web sites, then you might consider using SSL. We discuss using VPN and SSL further in Chapter 15.

Understanding Switches and Hubs

Hubs are simple, unintelligent devices that connect multiple computers, making them act as a single segment on a network. With a hub, only one device can successfully transmit data at a time. When two computers transmit data at the same time, a collision occurs, and a jam signal is sent to all the ports when collisions are detected. A single computer can therefore cause collisions and force an entire network to slow down while the packets that were jammed are re-sent by all the computers that attempted to communicate during the jam. Hubs also allow any computer to see the packets sent by all other computers on the hub.

Switches are more advanced than hubs and provide expandability, allowing more switches, ports, and computers to exist on a network. Switches isolate traffic between the source of a packet and its destination, confining collisions to the individual ports involved. Because each computer cannot automatically see all the traffic from other computers, this is a more secure communications environment. Switches are less likely to become flooded with collisions and offer faster throughput and lower latency.

We advise against the use of hubs as a general rule, unless you have a very explicit reason to use them, because they act as a single collision domain and expose every device’s traffic to every other device. However, hubs do still have limited usefulness in networks. Switches respond to loops, but hubs do not. When a cable is plugged into a switch twice, it can cause unwanted network traffic. In areas where many users are plugging in their laptops, a cable can get plugged back into a switch by accident, and some network administrators will use a hub in those areas to keep this from occurring. Additionally, protocol analyzers connected to switches do not always receive all the desired packets, because the switch separates the ports into different segments. Connecting a protocol analyzer to a hub will allow it to see all the traffic on the network segment. Finally, some cluster environments require each computer to receive all the traffic going to the cluster. In these situations, hubs will most likely be more appropriate than switches.

Note  Many managed switches can be configured to act as though they are hubs, so you can get the capabilities you require while maintaining flexibility for more advanced features. However, remote configuration can itself be a security risk, so you should disable it on switches when it is not needed.

Stacked switches are switches designed to accommodate multiple switches in a network. A stackable switch has dedicated ports for adding more switches, using special stacking cables that allow speeds of 10Gbps or faster between the switches. These connections are often run over fiber so that latency stays low over longer distances.

Managed Switches

As networks and their features have grown, managed switches have become more popular. Managed switches can control internal network traffic and are used to split a network into logical segments, giving more granular control over network traffic and providing more advanced error detection. Managed switches also offer more advanced logging features to help network administrators isolate problem areas. Some managed switches are also stackable, although not all of them are.

A standard feature to look for on a managed switch is VLAN support. VLAN, short for virtual LAN, describes a network of computers that behave as if they are connected to the same wire, even though they may actually be physically located on different segments of a LAN. They are configured through software rather than hardware, which makes them extremely flexible. One of the biggest advantages of VLANs is their portability. Computers are able to stay on the same VLAN without any hardware reconfiguration when physically moved to another location. This also works the other way; one physical LAN can be split into multiple logical networks by the VLAN software running on switches. This can be useful when implementing a DMZ, as you can isolate your DMZ traffic without actually creating an isolated physical network. Nearly all managed switches have a VLAN feature set.
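On the Mac side, joining a VLAN that a managed switch trunks to your port is a matter of creating a tagged virtual interface. A quick sketch using networksetup, assuming en0 is the physical interface and 20 is the VLAN ID your switch administrator assigned:

sudo networksetup -createVLAN Accounting en0 20
networksetup -listVLANs

The new VLAN interface then appears in the Network pane of System Preferences and can be configured like any other interface.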

Newer and more advanced switches also have the capability to perform rogue access point detection, or detection of unwanted access points and routers on a network. Since Apple joined the ranks of operating system vendors offering Internet Sharing as a built-in feature, many networks have been brought to a grinding halt by rogue routers handing out IP addresses. Problems with rogue access points have been especially common in networks with large numbers of freelancers who bring their laptops into the office and connect to the wireless network without turning off the Internet Sharing that they were using at home. This creates an ad hoc Denial of Service for the rest of the network, because other clients receive bad DHCP leases with bad TCP/IP settings. This situation can require administrators to comb through every machine on a network to isolate which user has enabled Internet Sharing on their computer. Rogue access point detection is also helpful for making sure that random users on networks do not plug in wireless access points or routers they may think are switches. Rogue access points are discussed in further detail in Chapter 12.

Most managed switches also provide some form of MAC address filtering. A MAC address is a unique identifier attached to most forms of networking equipment (you can find your Mac’s MAC address in the Network pane of System Preferences by clicking the Advanced button). With MAC filtering, a network administrator can tie addresses to ports so that packets are accepted only from a specific port and forwarded only to another specific port. Using MAC address filtering, only users who are connected to port A can access the server connected to port B; packets from other ports, even packets whose destination address is the server on port B, will be dropped. MAC filtering is also referred to as network access control, although that term can also refer to port filtration rather than MAC filtration.

Here are some other features of managed switches:

  • PoE: Power over Ethernet allows power to be supplied to network devices over an Ethernet cable, rather than over a power adapter.
  • Spanning tree: This prevents loops on networks by blocking redundant paths. If more than one open path between any two ports were active at once (a loop), a broadcast storm, or large amount of network traffic, could cause the network to become unstable.
  • Priority tagging: This specifies ports that are of a higher priority, allowing mission-critical traffic to be differentiated from traffic that’s not.
  • Link aggregation: This uses multiple network ports in parallel to increase the link speed beyond the limits of any one single cable or port. Link aggregation, also known as teaming, is based on the IEEE 802.3ad link aggregation standard.
  • Flow control: This manages traffic rates between two computers on a switched network. It is not always possible for two computers to communicate at the same speed. Flow control throttles speeds for faster systems by pausing traffic when it is running too fast.

Using managed switches historically meant that large portions of an IT budget would need to be spent on acquiring them. However, with the increased number of manufacturers now involved in developing managed switches, that is no longer the case. Managed Netgear and D-Link switches (such as the one featured in Figure 10-3) provide many of the advanced features found in Cisco and other top-of-the-line switches for a fraction of the cost. This has made them increasingly popular. Features offered on D-Link and Netgear switches include link aggregation, flow control, network access control, spanning tree, and priority tags.

9781484217115_Fig10-03.jpg

Figure 10-3. D-Link 48 port managed switch

Many administrators of Mac environments are not comfortable deploying managed switches, because they are typically configured from the command line, and some of the protocols they use can be incompatible with other devices on the network. To address this concern, Apple has increasingly aligned with industry network standards, enabling Mac network administrators to become more comfortable using managed switches to support extended features of Mac hardware. One example of this is the use of link aggregation (using two network interfaces as one) on Mac servers, a feature that requires a managed switch in order to be configured properly.
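For example, a link aggregation bond can be created from the command line with ifconfig; this sketch assumes en0 and en1 are the two Ethernet interfaces and that the switch ports they connect to have already been configured for 802.3ad link aggregation:

sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev en0
sudo ifconfig bond0 bonddev en1
ifconfig bond0

In practice, you would normally build the bond through the Network pane of System Preferences so that the setting persists across reboots.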

Restricting Network Services

Network services are the building blocks of many network environments. Connecting to the Internet and providing services such as DNS and DHCP are the main reasons for having a network in the first place. One of the best ways to ensure security for file sharing, web services, and mail services is to limit the access that computers have to them. Some computers may need access to these resources, and denying them can be detrimental to the workflow. Others may not need access, and giving them access could be potentially damaging. For example, you might allow users local to your network to access your file server but will probably never want to allow access to the file server for users outside your network.

When architecting a network, you need to handle each service separately. Analyze which services go in and out of every system in an environment. If protocols will be accessed only from inside the network, such as file sharing and directory service protocols, then they should not be routable. Restricting access to protocols to users outside your network can be handled using the firewall, as mentioned earlier in this chapter. For larger environments, restricting access to services from other computers within your network is often handled using the switches in your environment. Draft a document, such as the one in Figure 10-4, that lists the servers in your environment, the services they will be providing, and which ports they run on. This can help tremendously when trying to secure all the services needed in a networked environment while maintaining their usability.

9781484217115_Fig10-04.jpg

Figure 10-4. Servers, services, and ports
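A simple version of such a matrix might look like the following (the hostnames, services, and exposure choices here are purely illustrative):

Server      Service                      Port(s)      Reachable from
web01       HTTPS                        443          Internet
mail01      SMTP / IMAP over SSL         25, 993      Internet
files01     AFP / SMB file sharing       548, 445     Internal LAN only
dir01       LDAP / LDAP over SSL         389, 636     Internal LAN only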

For smaller environments, restricting access between computers is usually handled on a per-service basis by using a local firewall running on the computer providing the service. For more information on configuring the software firewall in Mac OS X, see Chapter 11. For firewall configuration for Mac OS X Server, see Chapter 16.

Security Through 802.1x

802.1x is a protocol that is fully supported in OS X. The 802.1x standard can greatly increase the security of a network environment by requiring users to authenticate before they can access the local network over Ethernet or wireless. Authentication can be handled by a third-party authority, such as Open Directory or Active Directory, or you can use preshared keys. Authenticating to the network rather than just to the computer is a fairly new concept for most Mac environments. This level of advanced networking is fairly complex and must be given an appropriate level of planning.

Enabling 802.1x is accomplished by deploying a profile. These profiles can easily be created in Apple Configurator (Figure 10-5) or Profile Manager. You can then view 802.1x settings by opening System Preferences, clicking the Network pane, choosing the appropriate interface, and then clicking the Advanced button. At this point, you can click the 802.1x tab (Figure 10-5) and select the network to join.

9781484217115_Fig10-05.jpg

Figure 10-5. Setting up 802.1x

Once you have chosen your network, click the authentication protocol you want to use in the Authentication section, and then click Configure. This will allow you to configure settings for the specific protocol to match the settings of your server.

There are some serious vulnerabilities in the 802.1x protocol. Most significantly, it authenticates only at the beginning of the connection. For example, after authentication is successful and the connection is established, it’s possible for an attacker to hijack the authenticated port by getting in between the authenticated computer and the port. As discussed earlier, this is called a man-in-the-middle attack.

Proxy Servers

One way of filtering traffic on your network is through the use of a proxy server. A proxy server acts as a sort of traffic cop, establishing network connections on behalf of the clients that use it. The proxy server is situated between computers and the Internet, and it processes requests for external resources on behalf of the users of the network (see Figure 10-6). Using proxy servers, administrators can prevent users from viewing predetermined web sites. In addition to increasing security, proxy servers can improve network performance by allowing multiple users to access data that is saved in the proxy’s cache. On the first access of this data, there will be a slight performance loss; any subsequent attempt to visit the site or access the data will see a performance gain, since the content can be served from the proxy’s cache. Therefore, proxy servers accelerate access only to content that is accessed repeatedly.

9781484217115_Fig10-06.jpg

Figure 10-6. Proxy server network configuration

Proxies themselves can be exploited as a means to forward malicious HTTP traffic, such as web-based spam e-mail referrals. When setting up a proxy, it is important to take into account security concerns, such as which clients on the network will have access to the proxy. Often proxy servers have a whitelist (a list of allowed addresses) that can be used to allow access to IP addresses individually or by subnet. This should be configured to allow only those machines on the local area network to access the proxy services.
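In Squid, this kind of restriction is expressed with acl and http_access rules in squid.conf. A minimal sketch, assuming your LAN uses the 192.168.1.0/24 range and the proxy listens on port 8080:

http_port 8080
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all

The final deny all line ensures that any client not matched by an earlier rule is refused.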

If external access is required, consider running the proxy on a nonstandard port and requiring authentication, perhaps utilizing an existing directory service, such as Active Directory or Open Directory. This, combined with strong password policies, should help protect against unauthorized access to the proxy.

Squid

Squid is an open source product that allows network administrators to configure proxy services easily. It has a robust set of access control options that can be configured to allow or deny access based on user or group access as well as other criteria such as scheduling proxy servers to be enabled at certain periods of time. SquidMan is a utility developed by Tony Gray to provide Mac users with a GUI to assist with installing and configuring a precompiled version of Squid.

To install SquidMan, follow these steps:

  1. Download the installer from http://web.me.com/adg/squidman/index.html, and extract the SquidMan .dmg file.
  2. Copy SquidMan.app into your Applications folder.
  3. Open SquidMan, and enter an administrative password to install the Squid components and run the application.
  4. At the preferences screen, enter the appropriate settings for the following fields (see Figure 10-7):
    • HTTP Port: The port that client computers will use to access the proxy. This defaults to 8080.
    • Visible Hostname: The name of the proxy server as seen from client systems.
    • Cache Size: The amount of space used to store data in the cache.
    • Maximum Object Size: The size limit for files that are to be cached by the proxy.
    • Rotate Logs: When to rotate log files.
    • Start Squid on Launch After a _ Second Delay: Enabling this option will automatically start Squid when SquidMan is launched and enable you to define a delay after launching SquidMan to start the Squid services.
    • Quit Squid on Logout: This allows you to determine if you’d like to keep Squid running when the active user logs out of your proxy server.
    • Show Errors Produced by Squid: This prompts users with pop-up windows when errors occur. This is helpful if someone is sitting at the desktop of the proxy server when it’s running. However, if the desktop of the system is never looked at, then you will probably refer to logs to discover errors.

    9781484217115_Fig10-07.jpg

    Figure 10-7. SquidMan preferences: changing ports and hostnames

  5. Click the Parent tab, and use this location to choose whether you will have your Squid server use another Squid server as its proxy server. This can make troubleshooting difficult in proxy environments. Keep the defaults here unless you have multiple Squid servers.
  6. Click the Clients tab and enter the appropriate IP addresses in the Provide Proxy Services For field. When entering IP address ranges, you will need to also enter the subnet for the IP range.
  7. Use the Direct tab to configure any exclusions to the list of domains that will be proxied by the server. This will be helpful when troubleshooting a parent proxy environment.
  8. Click the Template tab, and use this location to edit the Squid configuration file manually. Here, the maximum object size and cache directories can be increased beyond the variables available in the GUI.
  9. Once the settings have been configured for SquidMan, use the Start Squid button of the main SquidMan screen to start Squid (see Figure 10-8).

    9781484217115_Fig10-08.jpg

    Figure 10-8. Starting SquidMan

Once SquidMan is running, you can stop it by clicking the Stop Squid button on the main SquidMan screen. You can get more granular control over the Squid proxy services via command-line administration by editing the settings in the squid.conf file located at /Users/<username>/Library/Preferences/squid.conf once SquidMan has been launched for the first time.

Note  The help files for SquidMan are very thorough in their explanations of these settings.

Summary

Layers of security breed resilient networks. When securing networks, layer your approach. Start from the center, the computer itself, and move outward, looking at which services should be accessible by which computers. Then, layer the security levels by grouping the computers and building policies to limit access on each of those groups. Determine your firewall policies, both internally and externally. Consider your network’s physical layer as you implement security policies based on location within the network. This kind of layered approach gives strength to your network’s security blueprint.
