Chapter 19. Apache Security

 

Probably the last man who knew how it worked had been tortured to death years before. Or as soon as it was installed. Killing the creator was a traditional method of patent protection.

 
 —Terry Pratchett, Small Gods

One of the strengths of Apache is that its developers are very security conscious. Open source projects are sometimes criticized for having too many security holes. Amazingly, the opposite appears to be true with Web servers. In September of 2001 the Gartner Group, a research organization, recommended that companies switch from Microsoft's Internet Information Server to Apache, among other Web servers, because there are fewer security risks (http://www.gartner.com/DisplayDocument?id=340962).

At times this chapter might seem excessively draconian in its recommendations. To quote Andy Grove, co-founder of Intel, “Just because you are paranoid does not mean they are not out to get you.” When it comes to security you can never be too paranoid. Implementing many of the suggestions in this chapter might cause riots among your users, and that is fine; you can use the complaints as a rough gauge of your security policy's success. The sweet spot in security is somewhere between several flaming e-mails from users and the users lined up outside your office door with voodoo dolls and pitchforks. Unfortunately, in this world of script kiddies and hacker wars, a restrictive security policy is necessary.

When discussing Apache security, there are four areas you need to think about:

  • The Apache program

  • The external security risks

  • The internal security risks

  • The vendor security issues

The source code is probably the least of your worries. The Apache source code is tested and retested for security holes and potential security holes. As of this writing, the last time a security hole was found in the source code was in 1998.

External security risks are problems that arise from someone attacking your server. These problems can range from a denial-of-service attack to an attempt to exploit a security hole in a piece of software you have installed. They are best dealt with as part of a broader security strategy, which is discussed later in this chapter.

Internal security risks are by far the biggest problem Apache administrators face. Generally, these are not attacks so much as misconfigured CGI scripts, poorly written modules, and other issues that can cause a server to crash or, worse, leave your valuable data exposed.

Vendor security issues are another big problem. When you purchase or download an operating system that includes Apache as part of the base installation, you do not know what configuration changes the vendor has made to Apache. You also do not know what type of security bugs might have been introduced during the installation process. Most vendors are good about posting updates, but it is important to stay abreast of any security holes that the vendor reports.

The focus of this chapter, and the next, is external and internal security problems. These are the problems you have the greatest control over, and the ones you can most easily prevent from turning into full-fledged crises. As with any other security issue, prevention simply requires careful planning.

A Web server presents a unique security challenge that almost no other networked server faces. A Web server needs to be accessible to anyone on the Internet, yet it needs to be protected from potential damage that can be inflicted by one of these remote users.

To better understand the challenges of securing a Web server, contrast the security policies needed for a Web server with those needed for a mail server. A mail server is similar to a Web server in that it needs to be publicly accessible; otherwise, you will not be able to receive mail. It is also a potential target of attacks, because other people can use a mail server to send Unsolicited Commercial E-mail (UCE), more commonly known as spam, to thousands or even millions of people.

So, a mail server administrator is left with this problem—keep the mail server publicly available, but don't allow anyone, except for trusted users, to send mail through it. The solution to this problem is relatively simple. A mail server administrator can create an access list of hosts that are allowed to send mail, or relay, through the server. The mail server stays public, but only trusted users can actually relay mail.

Unfortunately, the same sort of panacea does not exist for Web servers. As we will discuss in this chapter, the majority of the security problems associated with Web servers are caused by the fact that the servers have to provide access to everyone.

Developing a Security Strategy

A Web server does not operate in a vacuum. It is an integral part of your business, and your network. Therefore, a security strategy for your Web server has to include discussion of a broader network and server strategy.

The best way to develop a security strategy is from the outside in. The strategy has to include the network, the server, the operating system, Apache itself, and finally the individual Web sites. This is outlined in Figure 19.1.

Figure 19.1. A security strategy has to involve the entire network.

The first security decision you need to make is whether you want to host your server in house or colocate it with a hosting vendor. From a security perspective, hosting a server in house forces you to add a layer of complexity to your network, which we will discuss later in this chapter. Colocating your server in a remote data center, however, means that you lose a level of control, because you will not be able to administer or monitor the remote data center's networking devices. The in-house versus colocation debate is discussed in greater detail in Chapter 1, “Getting Started.”

If you decide to host the site on an internal network you will need to create a Demilitarized Zone (DMZ) for the server. A DMZ is a firewall term used to describe an area between the Internet and the protected portion of your network. The DMZ is still part of your network, but it is more open than the rest of the network, with a less restrictive ruleset, and, therefore, more vulnerable to attacks.

If you have a database-driven Web site, especially one with confidential customer information, another layer of complexity is added to your network. Obviously, your database server has to have very strict access restrictions, which means it will have to reside behind the firewall, but it still has to be reachable by the Web server. Your firewall will have to be configured to allow the Web server access. It is also a good idea, whenever possible, to encrypt the transactions between the Web server and the database server, providing your customers with the maximum amount of protection.

The next decision you need to make is what services, in addition to HTTPD, you are going to run on that server. The temptation, especially for smaller companies, is to run as many services as possible on a single server. This is something you should resist if possible. Adding services such as mail, DNS, and NNTP to a server increases the number of open ports and, therefore, the potential for security holes to be found.

You might be asking yourself why it is so important to separate services. After all, if someone were going to break into the server why would it matter if there were one service or many?

The answer is simple. The most common form of external attack is the root exploit, an attack in which a remote user takes advantage of a security hole in a program to gain root access to the server. If an attacker were to take advantage of a root exploit in Sendmail, a very common mail program, that attacker would then have the ability to deface your Web site or access your customer database. Likewise, if an attacker were to find a root exploit in Apache, she would be able to read mail stored on the server, modify DNS zone files, and more.

One of the most common internal security problems is poor programming. A runaway CGI script, or a poorly written program, can consume all the memory on a server and render it virtually useless. Obviously, you do not want a poorly written CGI script to knock your mail and DNS servers offline. Nor would you want to have a Sendmail module develop a memory leak and crash your server.

By separating your services you are providing an extra level of protection for your network: if one server or service is compromised, the others are not affected.

After deciding on a server, and where to locate it, the next step is to secure the operating system. Although securing an operating system is beyond the scope of this book, there are several excellent resources detailing ways to protect your operating systems. One title that is particularly comprehensive is Sams Maximum Security, Third Edition.

There are, however, some general guidelines for securing any Web server. To start, disable any ports that are not in use. In this case, a port is shorthand for port number, a numerical value mapped to an application on the server. The server is always listening on its open, or active, ports; when it receives a packet addressed to one of them, it passes the packet to the corresponding application. By disabling unused ports, you decrease the likelihood that someone performing a port scan against your server will discover a security hole you were not aware existed.
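As a quick check, you can list the ports your server is actually listening on and compare the list against the services you intend to run. A minimal sketch (netstat is the traditional tool; ss is its replacement on newer systems):

```shell
# List listening TCP sockets; anything you do not recognize is a
# candidate for shutting down. Add -u to include UDP as well.
netstat -tln 2>/dev/null || ss -tln
```

Any entry in the output that you cannot map to a service you deliberately run deserves investigation.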

It is also useful to remove any unnecessary user accounts from your server. For instance, many flavors of *nix include a games account. The account is used to record high scores for games on a shared user system. Hopefully you will not be using your Web server to play Unreal Tournament, so it can be deleted.

It is important to be as restrictive as possible with system files. Any file that could cause irreparable damage to the server, or, worse, someone else's server, should have the tightest security permissions, and only the root or Administrator account should have access to it.

Finally, don't use telnet. This was touched on briefly in Chapter 1, but it bears repeating. Sending passwords over the Internet in clear text is a very bad thing, especially if it is your root password. Both telnet and FTP send passwords in clear text, and should be avoided. SSH provides you all the benefits that telnet does, and it encrypts your connection.

This chapter has moved from the general to the specific, starting with the location of the server and moving to the server configuration. The rest of the chapter focuses on operating system and Apache security. These are the aspects of your Web server over which you will exert the most control and that deserve the most attention.

Understanding *nix File Permissions

To fully understand how to secure the Apache server, and individual sites, it is necessary to understand how *nix file permissions work.

The best way to visualize *nix file permissions is as a matrix. Along the side of the matrix are the three levels of access to the file:

  • Read—able to view the file

  • Write—able to make changes to the file

  • Execute—able to run the file as a program

At the top of the matrix are the three users or groups that might want access to the file:

  • Owner—the user who owns the file

  • Group—the group to which the owner of the file belongs

  • World—everyone else

*nix file permissions are set by assigning a value of read, write, or execute to each of the three groups. This is displayed from the command line by requesting a long file list:

[allan@ns1 conf]$ ls -l
-rw-r--r--    1 root     root          348 Oct 18  2000 access.conf
-rw-r--r--    1 root     root        46467 Sep 29 23:56 httpd.conf
-rw-r--r--    1 root     root          357 Oct 18  2000 srm.conf

Setting file permissions is a matter of arithmetic. Each permission type is assigned a numerical value: read permission has a value of 4, write permission a value of 2, and execute permission a value of 1; 0 indicates no permission to access the file. To combine permissions, add the values together. For example, to give the file owner read and write permission on the access.conf file, you would add 4 and 2 together for a permission value of 6.

Permissions are set using the chmod command. So, to give the file owner read and write permission, and the group and all other users read-only permission, on the access.conf file, you would type the command

[allan@ns1 conf]$ chmod 644 access.conf

Similarly, if you wanted to set the permissions so that all users had read, write, and execute permissions (never a good idea) you would use the command

[allan@ns1 conf]$ chmod 777 access.conf
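Either way, you can check the arithmetic yourself on a scratch file. This sketch assumes GNU stat, whose %a format prints the octal mode:

```shell
# Owner rw (4+2=6), group r (4), world r (4) gives mode 644
touch /tmp/demo.conf
chmod 644 /tmp/demo.conf
stat -c '%a' /tmp/demo.conf    # prints 644
```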

As an administrator of a Web server you should strive to enforce 644 permission settings for all text (non-CGI) files. There is no reason for these files to have execute permission, and you do not want to risk having them overwritten by someone who should not have write access.

The only exception to this rule is files updated by multiple users in the same group (we will discuss groups further in the next section). In cases where multiple users need to update a file, make sure all the users are in the same group and set the permissions to 664.

As with any security concern, you should always err on the side of caution when setting file permissions. If you need to loosen the permissions to make the site usable, it is a simple matter to adjust them. It is better to be cautious than to have to recover from an attack that was made easier by inappropriate file permissions.

Users and Groups

*nix and Windows both use “users” and “groups” to track file ownership, and limit the capability to access critical operating system files. A user is an individual login, whereas a group is one or several users that have the same level of access to the server.

Let's take a look at the long file list we used earlier:

[allan@ns1 conf]$ ls -l
-rw-r--r--    1 root     root          348 Oct 18  2000 access.conf
-rw-r--r--    1 root     root        46467 Sep 29 23:56 httpd.conf
-rw-r--r--    1 root     root          357 Oct 18  2000 srm.conf

The third column is the user that owns the file, and the fourth column is the group that owns the file. In the previous listing, the access.conf file is owned by the user root and the group root.

For some flavors of *nix the default setting for a new user is to create a group with the same name. This provides some level of protection because it prevents users from one group from editing files owned by someone in another group, assuming good permission settings are practiced. However, if you have multiple users that need to have access to the same set of files, it is better to put those users in the same group. This allows all users in the group to edit the file and upload it to the server.

Another way to enhance user and group security is to restrict which users can become root.

Unless your version of *nix ships with enhanced security settings, any user who finds the root password can become root and make changes.

You can prevent this by allowing only certain users to become root. The settings for the su command are stored in the file /etc/pam.d/su. You can edit the file so that only users in the “wheel” group are allowed to become root. Depending on your operating system, there are several ways to do this. For example, if you were running Red Hat 7.1, you would add the following lines to /etc/pam.d/su:

auth       sufficient   /lib/security/pam_wheel.so trust use_uid
auth       required     /lib/security/pam_wheel.so use_uid

Wheel is a special *nix group designed specifically for administrative users.

Superuser do (sudo) is another option for granting limited privileges to certain users. Sudo is maintained by Todd Miller and can be downloaded from the sudo Web site at http://www.courtesan.com/sudo/.

In a traditional su environment, a user who becomes root has all of root's privileges, and every command is logged as if root performed it. Sudo allows a server administrator to restrict access to certain commands. Instead of opening a normal root shell, sudo executes each permitted command individually with root privileges, and it logs every command along with the user who ran it. If something goes wrong, the server administrator can quickly determine what happened and, hopefully, how to fix it.
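For example, a sudoers entry like the following sketch (the username and host are hypothetical, and the file should always be edited with visudo) would let one user restart Apache and nothing else:

```
# /etc/sudoers -- allow user "allan" to run only apachectl as root
# on the host ns1
allan   ns1 = /usr/sbin/apachectl
```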

Sudo also limits the idle time in a session: by default, after five minutes of inactivity a user's credentials expire, and the user has to re-authenticate before running further commands.

The Apache User

Apache is shipped with fairly tight security measures. However, if you are using a preinstalled binary, it is a good idea to double-check some of the security permissions.

It is especially important to know which user is running the Apache process. There are several ways to check. You can look in the httpd.conf file for the User and Group directives; another way is to run the top command. If the Apache httpd executable is running, you should see an entry similar to this:

PID     USER      PRI  NI SIZE RSS   SHARE STAT  %CPU %MEM  TIME COMMAND
22071 apache    14   0  9680 9680  7812  S     3.5  1.8   0:01 httpd

As you can see, on this system Apache is running as the apache user. Apache should always run as an unprivileged user, usually apache or nobody.

If you notice that your server is running Apache as root, you should definitely change to a different user, one that has no other function on the server.

Although the Apache process should run as an unprivileged user, the Apache configuration files should be well protected. As in the previous example, the files should be owned by root and have permissions as restrictive as possible.

User Permissions

Depending on the type of server you are running you will either have multiple users updating files on the same Web site, or multiple Web sites, each with a different user.

There are many levels of access you can assign to users, depending on what role they play in updating the Web site. Of course, before determining that role, the first thing to establish is the level of access a user has to the server.

In Chapter 1, the four main types of access to a Web server were discussed: Telnet, FTP, SSH, and SCP. Telnet and SSH serve the same purpose, as do FTP and SCP. In an ideal world you would not use Telnet or FTP, because both programs send passwords in clear text. Unfortunately, using SCP can be confusing to some people. If you do have control over how users access the servers, such as in an enterprise environment, you should consider permitting primarily SCP access, enabling SSH only when necessary, and never enabling Telnet or FTP access.

You might be wondering how you can grant access to SCP and not SSH, because SCP runs over an SSH tunnel. There are a couple of ways to do this, but the most common is to change the login shell of the SCP-only user so that the only program it will run is SCP. The user still connects using SSH, but the only thing she can do is transfer files with SCP.
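One way to do this (assuming a third-party restricted shell such as scponly is installed and listed in /etc/shells) is to make that restricted shell the user's login shell. The /etc/passwd entry for a hypothetical SCP-only user might look like this:

```
# "carol" can authenticate over SSH, but her shell permits only SCP
carol:x:1002:1002::/home/carol:/usr/local/bin/scponly
```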

Limiting a user to an SCP-only connection enables you to limit the number of people making shell connections to the server and changing files directly on it. It also limits the number of mistakenly deleted or accidentally overwritten files that you will have to restore from backup. Presumably, if a user is forced to upload files, that user will have local copies of all the files being changed.

If you do allow files to be changed and modified directly on the server, you should consider using a version control system, such as RCS (the Revision Control System). A version control system requires users to check out files, edit them, summarize the changes made, and then check them back in. This type of version control has two advantages. The first is that it keeps track of who edited a file, so if a mistake is made you will know who made it. The second is that an archive of the file is kept; if something goes horribly wrong with the file, it can quickly be reverted to an earlier version.

In addition to access and version control, you also have to worry about user permissions. If you are not going to personally monitor all the files on a server, you should be as restrictive as possible with file permissions.

The best permissions for non-CGI files on a Web server are 644. Again, this lets the owner read and write the file while everyone else can only read it. CGI scripts, which generally have permissions of 755, should all be quartered in a special directory. Although it is convenient to have executable files in all directories, it is not very secure.

If you are very paranoid about security permissions, you can create a cron job that checks file permissions nightly and saves a list of any files that violate your policy. You can then either investigate the files or change the permissions yourself.

A cron job is a task that an administrator schedules the system to perform at regular intervals, ranging from once a minute to once a month.

The cron job would use the find command to search for bad permissions in the root directory of the Web site or sites. Assume the content of your Web site is stored in /home/website/httpd/. The find command would look something like this:

find /home/website/httpd -type f -perm -ga=wx > /root/badfiles.txt

This command looks for all files in the /home/website/httpd directory, and its subdirectories, that are writable and executable by the group and by everyone else. Each filename it finds is written to a file called badfiles.txt.
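To run the check nightly, you could add an entry like this to root's crontab (the 2:30 a.m. schedule is, of course, arbitrary):

```
# Every night at 2:30 a.m., record files with group/world write or
# execute permission (fields: minute hour day-of-month month weekday)
30 2 * * * find /home/website/httpd -type f -perm -ga=wx > /root/badfiles.txt
```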

You can make your life even easier by making the change automatically, like this:

find /home/website/httpd -type f -perm -ga=wx  | xargs chmod ga-wx

This will automatically remove write and execute permissions from the files that are found in violation of your permissions policy. Be warned, this might cause angry users to call you and tell you their site no longer works.

Another solution is to install a program that will monitor these changes for you. Tripwire (http://www.tripwire.com and http://www.tripwire.org) is a program that runs on *nix systems and monitors file system integrity. When you initialize Tripwire, you tell it which files or directories to watch and which changes you want to generate alerts for. For example, you might want to check the file permissions of every file in your root Web. Tripwire stores this information in an encrypted database and checks the specified files against it. If it notices a problem, it alerts you so that you can take appropriate action.
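A Tripwire policy rule for the Web root might look something like this sketch (the exact syntax varies between Tripwire versions, and the path is carried over from the earlier examples):

```
# Alert on any change to permissions, ownership, or content
/home/website/httpd -> $(ReadOnly) ;
```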

Limit Modules

The modular nature of Apache is one of the reasons that it is so popular. You can install a myriad of modules, or even write your own.

From a security perspective, however, each module you install presents an additional risk.

This risk is exacerbated by precompiled binaries of Apache. These binaries often include modules, beyond the standard Apache set, that the packager thinks you will need; most often these are modules that incorporate Perl (mod_perl), PHP, and MySQL support into Apache. That is great if you are going to use these features, but if you are not, they present a security risk, however slight it might be, to your server.

Before installing Apache, you should consider what Web site enhancements you intend to use right away, what you intend to use within six months, and what you plan to use in a year. If you are unsure, ask your developers what enhancements they would like to add.

Even if you do not intend to add any enhancements, you will want to review the standard modules and decide whether you need them. You can view a list of the modules compiled into the Apache binary by appending the -l flag to the httpd command, like so:

[root@ns1 allan]# /usr/sbin/httpd -l

You can also review the dynamically loaded modules directly in the httpd.conf file.

To prevent a module from being loaded when Apache starts, simply comment it out in the httpd.conf file. Make sure that you comment it out both in the Dynamic Shared Object (DSO) Support section and in the AddModule section of the httpd.conf file.
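For example, if you have no use for the server-status page, you might comment out mod_status in both places. The exact paths vary by installation, but in a typical Apache 1.3 httpd.conf the two lines would look like this:

```
# In the Dynamic Shared Object (DSO) Support section:
#LoadModule status_module     modules/mod_status.so

# And in the AddModule section:
#AddModule mod_status.c
```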

If you are not sure whether you need a module, you can review what it does on the Apache Modules Web site at http://httpd.apache.org/docs/mod/. If, after reading the description, you are still not sure, leave it in; rest assured that the standard modules have been repeatedly tested and used by, literally, millions of administrators, so they are fairly secure.

Do You Really Need FrontPage Extensions?

Chances are, if you run a large Web site with a lot of users, or a Web server with a lot of sites, someone will tell you they need to have FrontPage Extensions installed.

If at all possible, try to avoid doing this. FrontPage is notorious for its security holes, and although each version of the software has gotten more secure, there are still fundamental security problems with it.

The biggest problem, one integral to the way FrontPage works, is that when FrontPage publishes files to your Web site, it does so over an HTTP connection. As with telnet and FTP, your password is sent as clear text, so it is readable by anyone with a sniffer.

Older versions of the FrontPage software stored the usernames and passwords for the site and the sub webs as plain text files within the main directory. This meant anyone who knew where to look would be able to find the password.

A FrontPage-enabled Web site also uses significantly more storage space than a site that does not use FrontPage. FrontPage makes copies of all the files that are part of the Web site and stores them in private subdirectories. It also installs executables that handle the file transfers, along with the FrontPage bots (CGI programs that are built into FrontPage). These bots sit on the server even if they are not being used. Again, this means that anyone who knows where the executables are kept might be able to exploit them and break into your server.

There are many other programs, such as Dreamweaver (for more information check out Sams How to Use Dreamweaver and Fireworks 4), that provide the same type of functionality as FrontPage without the associated security holes.

Cautious Server-Side Includes Usage

Server-Side Includes (SSI) are a great way to enhance a Web site. They enable users to embed CGI scripts, other files, and Unix commands into a regular HTML file. More information about SSI can be found in Chapter 16, “Server-Side Includes.”

Unfortunately, SSI can also be a nightmare for a server administrator. The problems inherent in CGI script security are exacerbated by the fact that the scripts can be called from any otherwise non-executable document, which means that even documents you have secured with 644 permissions can still be used to cause problems on the server.

If you are going to use SSI, there are things you can do to lessen the potential security risks.

One of the best things you can do is enable SSI only on a per-directory basis. If a directory does not need SSI, there is no reason to enable the feature there. You can do this by disabling SSI for the server as a whole and selectively enabling it for certain directories.
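For example, you could leave Includes out of the Options for the site as a whole and switch it on only for the directory that needs it (the paths here are illustrative):

```
# Server-wide: no includes
<Directory "/home/website/httpd">
    Options None
</Directory>

# SSI allowed only in this subdirectory
<Directory "/home/website/httpd/ssi">
    Options +Includes
</Directory>
```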

When you do enable SSI for a directory, make sure the server-parsed documents end in .shtml, or some suffix other than the standard .htm or .html extensions. If the standard extensions are mapped to SSI, Apache has to parse every file in that directory each time it is requested; if your site consists mostly of non-SSI files, forcing Apache to parse them places an unnecessary burden on the server.
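With Apache 1.3, mapping only the .shtml extension to the SSI handler is typically done with a pair of directives like these:

```
AddType text/html .shtml
AddHandler server-parsed .shtml
```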

In addition to limiting the location and type of files that can contain SSI executables, it is a good idea to limit what can be executed in the file.

You can prevent SSI documents from executing programs by using the IncludesNOEXEC option in place of Includes on the Options directive. It would look something like this:

<Directory "/home/website/httpd">
    AllowOverride None
    Options IncludesNOEXEC
</Directory>

This will disable the exec command within SSI documents. You can still execute CGI scripts using the “include virtual” tag. Any scripts referenced by the “include virtual” tag will need to be in a directory defined by the ScriptAlias directive (discussed in Chapters 9, “URL Mapping,” and 15, “CGI Programs”).

Cautious .htaccess Usage

There are two different reasons for using .htaccess files. The first, and most common, is to set up password protection; the second is to override the Apache system-wide settings.

Although password protection is, technically, a way of overriding Apache system-wide settings, it is the most popular use of .htaccess and deserves special attention.

As with any optional Apache setting, you do not have to enable the use of .htaccess files.

If you do decide to enable the use of .htaccess, you should do it selectively. Start with a very restrictive level of user control:

<Directory />
AllowOverride None
Options None
Allow from all
</Directory>

This creates a default setting that is very restrictive; you can then pick individual directories that will have system override enabled.

In addition to selecting which directories will have override capabilities, you will need to determine which directives you will allow to be overridden.

There is a temptation, when enabling the use of .htaccess, to simply allow all directives to be overridden:

<Directory />
AllowOverride All
Options None
Allow from all
</Directory>

For some directives, such as ErrorDocument, which lets a user change the default error document, this is no problem. However, allowing others, such as Options (which can be used to turn on Includes), can cause problems.

For example, if you have disabled SSI for a directory and you leave the AllowOverride directive set to All, a user will be able to re-enable SSI without letting you know.

Deciding which directives to enable is a matter of choice. However, it is something you should consider carefully and enable selectively.

Password Protection

As mentioned earlier, .htaccess is most commonly used for password protection. It is a good way of handling password protection, and it is secure.

Unfortunately, there are some fairly common practices, implemented by users, which diminish the effectiveness of this tool.

One common mistake users make is to put the file that contains the usernames and passwords in the same directory as the .htaccess file, or in the root Web directory. This file should not be in a publicly accessible directory. Keep it in the user's home directory, or in some other directory the user can write to but that is not accessible through the Web site.
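A typical .htaccess file that follows this advice keeps the password file in the user's home directory, outside the document tree. The paths and realm name below are examples; the password file itself is created with the htpasswd utility:

```
AuthType Basic
AuthName "Members Only"
AuthUserFile /home/carol/.htpasswd
require valid-user
```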

Another common request is to use the system password file to handle authentication. This is a bad idea. One of the nice features of .htaccess is that it creates virtual users, with no permissions on the server. If you query the system password file, you take that advantage away; and by creating more system users, some of whom will undoubtedly have very simple passwords, you increase the security risk to the server.

Finally, make sure that your httpd.conf file has the following lines:

<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
</Files>

This will prevent site visitors from reading the contents of your .htaccess files. It also prevents them from reading .htpasswd files, which are commonly used to store the username/password combinations (of course, you won't have to worry about that, because your users will not keep their .htpasswd files in Web-accessible directories).

Using a Staging Server

So far we have discussed ways you can tighten the security of your server, and of Apache in general. Although these suggestions are great in a perfect world controlled by system administrators, they are not always practical in a corporate environment.

One way to combine the security you, as an administrator, desire with the enhancements that your users demand is to set up a staging server.

A staging server is a server that is a mirror of your Web server. It has the same operating system, the same version of Apache, with all the same modules, the same version of PHP, Perl, and so on. New content is published to the staging server, tested, and then published to the live server. The staging server enables users to test code in a near production environment before making it live.

There are several advantages to running a staging server.

At a minimum, forcing users to publish to a staging server enables you to close off unnecessary ports on your production server. As an example, although it might not be practical to force users to use SCP instead of FTP, you can force all connections from the staging server to the production server to be SCP, which means you can disable FTP on the production server entirely.
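A staging-to-production publish step over SSH might be sketched like this (the hostname, paths, and the publish helper are assumptions for illustration; the command is echoed rather than executed so the sketch is self-contained):

```shell
# Sketch of a publish script run on the staging server: push the
# tested document tree to the production host over SSH, so FTP can
# stay disabled on the production server.
publish() {
    src="$1"
    dest="$2"
    # -a preserves permissions and timestamps; --delete mirrors
    # removals; -e ssh forces the SSH transport.
    echo "rsync -a --delete -e ssh $src $dest"
}

publish /var/www/staging/ www.domain.com:/var/www/htdocs/
```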

A staging server provides you with a testing ground for software upgrades. If you are worried that an upgrade might cause problems on your server, you can perform the upgrade on the staging server, giving yourself some room for error. If the installation is successful, perform it on the live server. Otherwise, determine what went wrong and try the installation on the staging server.

Having a staging server also enables you to lock down your server and tightly control which addresses can connect to the live server. On most variants of the *nix operating system, this type of control is handled through the hosts.deny and hosts.allow files. As the names suggest, the hosts.allow file determines which hosts are allowed access to selected services on the server, while the hosts.deny file lists addresses and services that are denied access. Each file can contain multiple lines, each one representing a different rule, and the rules in hosts.allow take precedence over the rules in hosts.deny.

The hosts.allow and hosts.deny files are formatted in the same manner: service, hosts, and command. The service column lists the service or services (as a comma-delimited list), expressed as the name of the service's daemon, to which a given rule applies. The hosts column is also a comma-delimited list, naming the hosts to which the rule applies. The optional command column is a special command executed when a connection matches the rule.

In this case, you are trying to create as restrictive an environment as possible, so you would want to create a hosts.deny file that looks like this:

# /etc/hosts.deny
#
# Disallow all hosts.
ALL: ALL

This blocks all traffic, on all ports, to every TCP wrappers-controlled service on the server. Very secure, but not very practical. To complement this restrictive hosts.deny file, you can create a hosts.allow file that looks something like this:

# /etc/hosts.allow
#
# Allow all traffic from the server.
ALL: LOCAL
#
# SSH traffic is allowed from the staging server and the
# local network
sshd: staging.domain.com, 10.10.100.

The LOCAL entry enables the server itself to access all its own services, for sending mail and other local interfaces to services on the machine. Incoming traffic to the Web server is also unaffected, because a standalone Apache does not consult the TCP wrappers files; SSH traffic, however, is blocked from all hosts except the staging server and the local network.

Notice the local network is listed as 10.10.100. (with a trailing dot). The server interprets this prefix to mean that any address from 10.10.100.1 through 10.10.100.254 is allowed to make an SSH connection. If you wanted to be more restrictive, you could specify the individual workstations that are allowed SSH access to the server.
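Restricting SSH to individual workstations is just a matter of listing specific addresses instead of the network prefix; a sketch (the addresses are assumptions for illustration) might look like this:

```
# /etc/hosts.allow
#
# SSH only from the staging server and two admin workstations
sshd: staging.domain.com, 10.10.100.15, 10.10.100.23
```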

Of course, this does not prevent a nefarious user from using a CGI exploit, or another security hole that can be reached through the httpd server itself, but it does prevent them from exploiting a service on any ports you might have forgotten to close.

An important consideration with a staging server is the level of control you are going to allow the users of the server to have. There are two different models of staging server security. The first model enables users to upload their content to the staging server, test it, and then push it to the live server themselves, using staging tools you install. The second method is to have users upload content to the staging server, test it, and then send the server administrator, or group of administrators, an e-mail asking to have the documents published to the live server.

The second method obviously provides tighter security control; however, it creates a lot more work for the server administrator. It is important to determine how much time you are able to devote to managing content before deciding on a staging server strategy.

The placement of the staging server is an important consideration. Because you are, essentially, leaving this server wide open, it is a good idea to place it behind a firewall. This allows users on your network to access it, while blocking anyone from the outside. The live server can then be placed in a DMZ, between the firewall and the Internet, as shown in Figure 19.2, or placed in a collocation facility.

Figure 19.2. A Web server residing in a firewall DMZ.

Special Issues for Virtual Hosts

If you administer an Apache Web server for a hosting company, especially a shared server, you might have read this chapter and thought, “There is no way I am going to be able to implement and track these security guidelines for 200 different sites on a server.”

The truth is, many of the more restrictive suggestions in this book are difficult to implement across the board for small hosts. Inevitably a customer will demand to have a feature enabled that you know is a bad idea. The quandary then becomes whether you risk losing the revenue from that customer to maintain tighter security, or make an exception to your security policy.

Generally speaking, the same rules that apply to a multiuser single site also apply to servers with multiple Web sites:

  • Restrict shell access to the server.

  • Only allow SSI and .htaccess when necessary.

  • Don't enable modules that you are not going to use.

  • Make file permissions as tight as possible.
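Applied to an individual virtual host, those rules might look something like this sketch (the server name and paths are assumptions for illustration):

```apache
<VirtualHost *:80>
    ServerName customer1.domain.com
    DocumentRoot /var/www/customer1
    <Directory "/var/www/customer1">
        # No SSI, no CGI execution, no .htaccess overrides
        Options -Includes -ExecCGI
        AllowOverride None
    </Directory>
</VirtualHost>
```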

Fortunately, there are many control panels on the market that aid in maintaining site and system settings, while still giving you full control over Apache.

Control panels come in two varieties: open source, such as Webmin (http://www.webmin.com), and commercial, such as Plesk's Server Administrator (http://www.plesk.com). These control panels let you manage far more than just Apache, but their Apache control is excellent, letting you set default directives that are enabled for each account, as well as make exceptions for individual accounts.

Special Issues for Windows and Apache

Repeated security issues with Microsoft's Internet Information Server have caused many Windows-based Web server administrators to abandon it and switch to Apache.

Apache is certainly an excellent choice on any platform, but there are some special security issues to be aware of when running Apache on the Windows platform.

If possible, run Apache on Windows 2000 rather than Windows NT. Windows 2000 has more secure default system settings, which means your server will be more secure right out of the box.

As with a *nix installation, you should be as restrictive as possible with file and directory permissions, providing write and execute capabilities only when necessary.

It is also important to maintain a secure “user” and “group” structure so users cannot access the files of other users or groups.

Disabling ports and services that you are not using is also important. Windows 2000 does a good job of not running unnecessary services, or leaving unnecessary ports open, but you might have to manually make these changes in Windows NT.

Unlike on a *nix system, Microsoft recommends putting a Web site's root directory on a separate partition or on a different disk. Segmenting the site from the system files prevents a user from accessing the system files by changing directories up out of the server root.

You might have requests from users to access the Web server through a GUI interface such as PCAnywhere or Terminal Services. Windows does not have the same type of access control that *nix does, so if a user has access to the desktop, she can access all files on the server. In other words, this is not a good idea on a shared server.

If you do have a dedicated server, and don't mind giving users access to it, use Terminal Services instead of PCAnywhere whenever possible. Terminal Services connections are encrypted; everything in PCAnywhere is sent in plain text.

Finally, as with any operating system, it is important to keep up with the latest security patches and service packs. Microsoft does a good job of notifying users when a new patch is released, but it is up to you to review and install each patch.

Summary

Hopefully reading through this chapter made you feel more paranoid about the security of your Apache installation.

You should not live in constant panic that your site is being attacked, but you should be aware of security issues, and take reasonable steps to ensure the security of your server.

The basic steps that can be taken to help secure Apache on any server are

  1. Limit access to the server.

  2. Only allow the root user to access the configuration files.

  3. Maintain strict user and group rules.

  4. Restrict the Apache features you enable to only those you need.

  5. Keep up with operating system and Apache security patches.

If you follow these guidelines, you should have no trouble maintaining a secure Apache installation.
