NFS v3 versus NFS v4

Another consideration regarding NFS is the version you'll be using. Nowadays, most (if not all) Linux distributions default to NFS v4. However, there are some cases where you may have older servers on your network, and you'll need to be able to connect to their shares. While NFS v4 is definitely the preferred version going forward, you might need to connect to a node using the older protocol.

In both cases, directories on a file server can be shared via NFS by editing the /etc/exports file, which is where you'll list your shares (exports), one per line. We'll go over this file in more detail in the next section. But for now, keep in mind that the /etc/exports file is where you declare which directories on your filesystem are available for use with NFS. Different versions of NFS handle file locking differently, and they also differ in areas such as the introduction of idmapd, performance, and security. There are other differences as well, such as NFS v4 moving to TCP only (previous versions of the protocol allowed either UDP or TCP) and the fact that NFS v4 is stateful, while previous versions were stateless.

By being stateful, NFS v4 includes file locking as part of the protocol itself, rather than relying on the Network Lock Manager (NLM) to provide that function as NFS v3 did. If an NFS server were to crash or become unavailable, one or more nodes connected to it may have had open files, which would have been locked to those nodes. When the NFS server comes back up, it re-establishes these locks and tries to recover from the crash. Although NFS servers do a fairly good job of recovering, they aren't perfect, and at times file locking can become a nightmare for administrators to deal with. With NFS v4, NLM is no longer needed and file locking is a part of the protocol itself, so locks are dealt with much more efficiently. However, it's still not perfect.

So, which version should you use? It's recommended to always use NFS v4 on all of your nodes and servers, unless you're dealing with an older server with older protocols that you still need to support.
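
If you ever do need to mount an export from a legacy server, the NFS mount options let you request a specific protocol version explicitly. Here's a quick example (the server address and paths here are hypothetical); we'll cover mounting exports in more detail later in this chapter:

# mount -t nfs -o vers=3 192.168.1.50:/srv/share /mnt/legacy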

Setting up an NFS server

Configuring an NFS server is relatively straightforward. Essentially, all you need to do is install the required packages, create your /etc/exports file, and ensure the required daemons (services) are running. In this activity, we'll set up an NFS server and also connect to it from a different node. In order to do so, it's recommended that you have at least two Linux machines to work with. It doesn't matter if these machines are physical or virtual machines, or any combination of those. If you've already followed through with Chapter 1, Setting up Your Environment, you should already have several nodes to work with; hopefully, a mix of Debian and CentOS, since this procedure differs a bit between them.

First, let's set up our NFS server. Pick a machine to act as the NFS server and install the required packages. It doesn't matter which distribution you choose as your server and which you choose as your client; I'll go over the configuration process for both CentOS and Debian. Since quite a few distributions are either based on Debian or use the same configuration as CentOS, this should work for most distributions out there. If you're using a distribution that doesn't follow either package naming convention, all you have to do is look up which package or meta-package to install on your server for your specific distribution. The rest of the configuration should be the same, since NFS is fairly standard.

To install the required packages on a CentOS system, we would execute the following command:

# yum install nfs-utils

And for Debian, we install nfs-kernel-server:

# apt-get install nfs-kernel-server

Note

During installation of these packages, you may receive an error that NFS hasn't been started, due to /etc/exports not being present on the file system. When you install the required NFS packages on some distributions, this file may not be automatically created. Even if it does get created automatically, the file will just be a skeleton. If you do receive such an error, ignore it. We'll create this file shortly.

Next, we'll want to make sure that the services related to NFS are enabled so that they will start as soon as the server starts up. For CentOS systems, we'll use the following command:

# systemctl enable nfs-server

And for Debian, we can enable NFS via:

# systemctl enable nfs-kernel-server

Keep in mind that we simply enabled the NFS daemon on our server, which means that when the system is restarted, NFS will also be started (provided we configured it properly). However, we don't have to restart our entire server in order to start NFS; we can start it any time after we create our configuration files. Since we haven't actually configured NFS yet, there's no need to start the daemon; we'll do that later. In fact, until we actually create our configuration, your distribution probably won't let you start NFS anyway.

The next step is to determine which directories on our server we wish to make available on our network. Which directories you share is pretty much up to you. Anything on the Linux filesystem is a candidate for an NFS export. However, some directories, such as /etc (which contains your system's configuration) or any other system directory, are probably best left private. While you can share any directory on your system, it's actually a common practice to create a single directory to house all of your shares, and then create subdirectories underneath it that you then share to your clients.

For example, perhaps you would create a directory called exports at the root of your filesystem (mkdir /exports) and then create directories such as docs and images that would be accessible to others. The beauty of this is that your shares could be managed from one place (the /exports directory) and NFS itself has the ability to classify this directory as your export root (we'll discuss this later). Before moving on, create some directories on your filesystem that you'll use to share, as we'll be placing these directories in a configuration file in the next section.
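
For example, assuming you'd like to follow along with the directory names used throughout this chapter, you could create the export root and a few subdirectories in one go:

# mkdir -p /exports/docs /exports/images /exports/downloads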

Once you've determined which directories in the file system you'd like to share and created them, you're ready to begin the actual configuration. Each NFS share, referred to as an export, is configured by adding one line per directory we wish to share in the /etc/exports file. Since you've already installed the required packages in order to get NFS on your system, this file may or may not already exist. In my experience, CentOS doesn't create this file during installation while Debian does. But even if you did get a default exports file, it would only contain commented-out example lines that serve no practical purpose. In fact, you may have even received a warning or error during installation that the NFS daemon wasn't started because /etc/exports was not found. That's fine, because we'll create this file soon.

While the default exports file differs from distribution to distribution (if it even gets created by default at all), the format for creating new exports is the same regardless of your chosen distribution, as NFS is fairly standard. The process for adding an export is to open the /etc/exports file in your favorite text editor and add each export on its own line. Any text editor will do, as long as it's a true text editor and not a word processor. For example, if you're a fan of vim, you can execute the following command:

# vim /etc/exports

If you prefer nano, you can execute the following command:

# nano /etc/exports

In fact, you can even use graphical text editors such as Gedit, Kate, Pluma, or Geany if you would prefer to use GUI tools. These packages are available in the repositories of most distributions.

Note

It probably goes without saying, but to edit files within the /etc directory, or any others that are owned by root, you'll need to prefix such commands with sudo if you aren't logged in as root. As a best practice, it's recommended not to log in as root unless you absolutely have to. If you're logged in as a normal user, execute the following command:

sudo vim /etc/exports

In Debian, you'll see that the default /etc/exports file contains a list of comments, which may be helpful to you in viewing how exports are formatted. We can create new exports by simply adding them to the end of the file, preserving the contents. If you'd prefer to start off with a blank file, you may want to back up the original in case you want to refer to it later.

# mv /etc/exports /etc/exports.default

Once you have the file open in your favorite text editor, you should be ready to go. All of the directories you wish to share or export should be placed in this file, one on each line. Then, you append parameters to the share to control how it can be accessed and by whom. Here's an example exports file with some example directories and some basic configuration parameters for each:

/exports/docs 10.10.10.0/24(ro,no_subtree_check)
/exports/images 10.10.10.0/24(rw,no_subtree_check)
/exports/downloads 10.10.10.0/24(rw,no_subtree_check)

As you can see with those example exports, the format of each basically includes the directory we'd like to export and a network address we'd like to grant access to, followed by some additional options in parentheses. There are many options you can append here, and we'll go over some of them later in this chapter. But if you would like to view all of the options you can set here, refer to the following man command:

man exports

Let's discuss each section of the example exports file that was used previously:

  • /exports/docs: The first section contains the directory we're exporting to other nodes on the network. As mentioned before, you can share pretty much any directory you'd like. But just because you can share a directory doesn't mean you should. Share only the directories that you wouldn't mind others having access to.
  • 10.10.10.0/24: Here, we're limiting access to nodes within the 10.10.10.0/24 network. A node outside of that network will not be able to mount any of these exports. In this example, we could have used 10.10.10.0/255.255.255.0 and we would have achieved the same result. Instead, /24 was used, which is Classless Inter-Domain Routing (CIDR) notation, a shorthand for typing out the subnet mask. Of course, there is much more to CIDR than that, but for now, just keep in mind that the CIDR notation was used instead of the subnet mask to keep the example shorter (plus, it looks cooler).
  • ro: In the first export (docs), I've set it to read-only for no reason other than to show you that you can. This is probably self-explanatory, but a directory exported as read-only would allow others to mount the export and access the files within it, but not make any changes to anything.
  • rw: A read-write export allows nodes that mount it to create new files and modify existing ones (as long as the user has the required permissions set on the files themselves).
  • no_subtree_check: Although this option is the default and we don't actually need to set it explicitly, leaving it out may cause NFS to print a warning when it restarts. It is the opposite of subtree_check, which is largely avoided nowadays. Subtree checking makes the server verify that each requested file lies within the exported subtree, which can increase security a bit but lowers reliability. Since disabling the check is known to improve reliability, no_subtree_check has been made the default in recent versions of NFS.

Although I didn't use it in any of my examples, a common export option you'll see set in /etc/exports is no_root_squash. Setting this option allows the root user on end-user devices to have root access to the files contained within the export. In most cases, this is a bad idea, but you will see this from time to time in the wild. It is the opposite of root_squash, which maps the root user to nobody instead. Unless you have a very good reason to do otherwise, root_squash (the default behavior) is what you want.
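
For illustration, here's what a hypothetical export (the /exports/backups directory is made up for this example) might look like with no_root_squash explicitly set; in most cases you'll want to leave this option off and let the default root_squash behavior apply:

/exports/backups 10.10.10.0/24(rw,no_subtree_check,no_root_squash)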

In addition to setting options for a single network, you can make your exports available to additional networks by adding them to the same line. Here's an example of our docs export shared with an additional network:

/exports/docs 10.10.10.0/24(ro,no_subtree_check) 192.168.1.0/24(ro,no_subtree_check)

With this example, we're exporting /exports/docs so that it can be accessed by nodes within the 10.10.10.0/24 network and the 192.168.1.0/24 network. While I used the same options for both, you don't have to. You could even configure the export to be read-only for one network and read-write for another if you so desired.
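
For example, the following (hypothetical) line would give the 10.10.10.0/24 network read-only access to the docs export while granting read-write access to 192.168.1.0/24; note that the client entries on a line are simply separated by a space:

/exports/docs 10.10.10.0/24(ro,no_subtree_check) 192.168.1.0/24(rw,no_subtree_check)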

So far, we've been sharing our exports with entire networks. This is done by making the last octet of the allowed IP address a 0. With the last example, any node with an IP address of 10.10.10.x or 192.168.1.x and a subnet mask of 255.255.255.0 would qualify for access to the export. However, you may not always want to give access to an entire network. Perhaps you may want to allow access to a single node instead. You can specify an individual node just as easily:

/exports/docs 10.10.10.191(ro,no_subtree_check)

In the previous example, we allowed a node with an IP address of 10.10.10.191 access to our export. Specifying an IP address or network enhances security, though it is not a 100 percent catch-all. However, limiting access to only the hosts that absolutely need it is a very good place to start when building your security policy. We'll cover security in greater detail in Chapter 9, Securing Your Network. But for now, keep in mind that you can limit access to the export by specific networks or individual IPs.

Earlier, we touched on the fact that starting with Version 4, NFS can use a directory to serve as its export root, also known as the NFS pseudo filesystem. In the /etc/exports file, this is identified by placing either fsid=0 or fsid=root as an option while exporting this directory. In this chapter, we've been using /exports to serve as the base of our NFS exports. If we wanted to identify this directory as our export root, we would change the /etc/exports file like this:

/exports *(ro,fsid=0)
/exports/docs 10.10.10.0/24(ro,no_subtree_check)
/exports/images 10.10.10.0/24(rw,no_subtree_check)
/exports/downloads 10.10.10.0/24(rw,no_subtree_check)

At first, this concept might be a bit confusing, so let's break it down. In the first line, we identify our export root:

/exports *(ro,fsid=0)

Here, we declare /exports as our export root. This is now the root of the NFS filesystem. Sure, you have a complete filesystem beginning with / in terms of Linux itself, but as far as NFS is concerned, its filesystem now begins at /exports. In this line, we also declared /exports as read-only; we don't want anyone making changes to the directory that serves as the NFS root. It's also shared with everyone (notice the *), but that shouldn't matter, as we set more granular permissions for each individual export. With the NFS root in place, clients can now mount these exports without needing to know the full path to get to them.

For example, a user might type the following to mount our downloads export to his or her local filesystem:

# mount 10.10.10.100:/exports/downloads /mnt/downloads

This is how you would mount an NFS export from a file server (10.10.10.100 in this case) without making use of an NFS root. It requires the user to know that the directory is located at /exports/downloads on that server. But with the NFS root in place, the user can simplify the mount command as follows:

# mount 10.10.10.100:/downloads /mnt/downloads

Notice that we left out /exports in the previous command. While this may not seem like much, we're basically asking the server to give us the downloads export, wherever it may be on the file system. It doesn't matter if the downloads directory is located at /exports/downloads, /srv/nfs/downloads, or wherever else. We simply ask for the downloads export and the server knows where it is, because we set the NFS root.

Now that we've configured our /etc/exports file, it's a good idea to edit the /etc/idmapd.conf configuration file to set some additional options. This isn't absolutely required, but it's definitely recommended. The default idmapd.conf file differs from distribution to distribution, but each contains the options we'll need to configure in this section. First, look for a line such as the following (or very similar):

# Domain = local.domain

First, we'll need to uncomment that line. Remove the # symbol (and the space that follows it) so that the line begins with Domain. Then, set your domain so that it is the same as other nodes on your network. This domain would most likely have been chosen during installation. If you don't remember what yours is, running hostname -f should print your fully qualified hostname, with the domain name appearing immediately after the hostname itself (hostname -d prints just the domain). Do this for every node you'd like to be able to access NFS exports.
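
For example, if your network's domain were mynetwork.local (a made-up name; substitute your own), the finished line in the [General] section of /etc/idmapd.conf would look like this:

Domain = mynetwork.local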

You might be wondering why this is necessary. When user and group accounts are created on a Linux system, they're assigned a UID (User ID) and GID (Group ID). Unless you created your user accounts on all of your systems in the exact same order, the UIDs and GIDs will most likely be different on each node. Even if you did create your user and group accounts in the same order, they could still differ. The idmapd service helps us by mapping these IDs from one system to another. In order for idmapd to work, the idmapd daemon must be running on each node, and the idmapd.conf file should be configured with the same domain name on each. On both CentOS and Debian, this daemon runs as /usr/sbin/rpc.idmapd and is started along with the NFS server.
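
If you'd like to verify that the daemon is actually running on a node, a quick process check is usually enough (the exact service name that wraps rpc.idmapd varies a bit between distributions):

# ps aux | grep rpc.idmapd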

So, you might be wondering: what's the purpose of Nobody-User and Nobody-Group? The nobody user runs scripts or commands that would be dangerous if run by a privileged user. Typically, the nobody user cannot log in to the system and does not have a home directory. If you run a process as nobody, its scope is limited if the account should ever be compromised. In the case of NFS, the nobody user and nobody group serve a special purpose. If files are owned by a user that exists on one system but not on another, their permissions will be displayed as being owned by the nobody user and group. This is also true of files accessed by the root user when no_root_squash is not set. Depending on which distribution you're using, these accounts may have different names. In Debian, Nobody-User defaults to nobody and Nobody-Group defaults to nogroup, while in CentOS both default to nobody. You can see in your idmapd.conf file which account is used for the nobody user and nobody group. You shouldn't need to rename these accounts, but if for some reason you do, you'll need to ensure that the idmapd.conf file has the correct names for them.
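
For reference, the relevant portion of /etc/idmapd.conf (shown here with Debian's defaults) looks something like the following:

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup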

Now that we have NFS configured and ready to go, how do we start using it? If you've been following along, you may have caught the fact that we enabled the NFS daemon but have yet to start it. Now that the configuration is in place, nothing is stopping us from doing so.

On Debian we can start the NFS daemons by executing the following command:

# systemctl start nfs-kernel-server

On CentOS, we can execute the following command:

# systemctl start nfs-server

From this point onwards, our NFS exports should be shared and ready to go. Later on in this chapter, I'll explain how to mount these exports (as well as Samba shares) on other systems.
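
As a quick sanity check from another machine, the showmount utility (included with the NFS client packages on most distributions) can usually list what a server is exporting, although it relies on the older mount protocol and may not respond if the server is configured for NFS v4 only:

# showmount -e 10.10.10.100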

There is one more thing in NFS that is worth mentioning. The /etc/exports file is read whenever the NFS daemon starts, which means you can activate new exports after you add them by restarting the server or the NFS daemon. However, in production, it's not practical to restart NFS or the server itself. This would interrupt users that are currently using it and possibly cause stale mounts, which are invalidated connections to network shares (not a good situation to be in). Thankfully, activating new exports without restarting NFS itself is easy. Simply execute the following command and you'll be good to go:

# exportfs -a
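
To confirm which exports are currently active (and with which options), you can run exportfs with no arguments, or with -v for more detail. If you later change the options on an export that's already active, exportfs -r re-synchronizes the list of active exports with the contents of /etc/exports.

# exportfs -v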