Appendix B. Setting up a persistent storage source

The purpose of this appendix is to configure NFS and export a volume to use as the backend for persistent volumes in your OpenShift cluster. In the examples, you’ll set up the OpenShift master as the NFS server. If you want to use a different server, the setup is similar. The main thing you need to be sure of is that your NFS server has connectivity to your OpenShift cluster. In the following sections, you’ll install and configure your OpenShift master as an NFS server.

B.1. Installing the NFS server software

The NFS server software is provided by the nfs-utils package. The first step is to confirm whether this package is installed on the master, which you can do with the rpm command. If the package is installed, rpm prints its full name and version; if it isn't, rpm reports that the package is not installed. In a terminal window, run the following command at the prompt to see whether the nfs-utils package is installed on your master server:

# rpm -q nfs-utils
nfs-utils-1.3.0-0.33.el7_3.x86_64

If you need to install nfs-utils, running the following yum command in the same terminal will install the package along with everything required for the host to act as an NFS server:

yum -y install nfs-utils

When you have nfs-utils installed on your master server, you need to configure the filesystem that NFS will use for storage. This is detailed in the next section.

B.2. Configuring storage for NFS

In appendix A, you created your master node with two disks. In the example in appendix A, which used VMs on a Linux laptop, the second disk device’s name is /dev/vdb. If you created your VMs using a different platform, or if you’re using physical machines for your cluster, the device name of this disk may be different. If you don’t know the device name for your second disk, you can use the lsblk command on your master server to see all the block devices on your server:

# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                           11:0    1 1024M  0 rom
vda                          252:0    0   10G  0 disk
├─vda1                       252:1    0    1G  0 part /boot
└─vda2                       252:2    0    9G  0 part
  ├─cl-root                  253:0    0    8G  0 lvm  /
  └─cl-swap                  253:1    0    1G  0 lvm  [SWAP]
vdb                          252:16   0   20G  0 disk

B.2.1. Creating a filesystem on your storage disk

In appendix A, when you selected your disk configuration options, you unchecked the second disk on the system. That instructed the installer to ignore that disk when it installed the OS. Now you need to create a filesystem on the second disk. An ext4 filesystem will do everything you need here. (The ext4 filesystem is a standard filesystem format for Linux servers.) To create the filesystem, you can use the mkfs.ext4 command:

# mkfs.ext4 /dev/vdb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Note

If you’d like more information about the ext4 filesystem and what makes it work, check out the article “An Introduction to Linux’s EXT4 Filesystem” (David Both, opensource.com, https://opensource.com/article/17/5/introduction-ext4-filesystem).

The next step is to configure your master server so that this new filesystem is mounted automatically when the server starts up.

B.3. Mounting your storage disk at startup

The NFS shared volume you’re creating needs to be available all the time. That means you need to configure your NFS server to mount your newly created filesystem when the host boots up.

B.3.1. Creating a mountpoint directory

In Linux, every mounted filesystem needs a directory to act as a mountpoint. For your NFS server, you'll create that directory under /var. The following command creates the /var/nfs-data directory to serve as the mountpoint for the NFS filesystem:

# mkdir /var/nfs-data/

After the directory is created, you need to gather some information about the filesystem you created to hold your NFS volumes. You'll use this information to edit the server's configuration so that it mounts this filesystem correctly when it boots up.

B.3.2. Getting your storage drive’s block ID

Each block device has a unique identifier (UUID) in Linux. You can view these UUIDs using the blkid command-line tool. Here’s an example of the output:

# blkid
/dev/vda1: UUID="bdda3896-5dbc-4822-b008-78bba4898341" TYPE="xfs"
/dev/vda2: UUID="KsWi8Z-PNi0-Hdgt-akAP-RWfF-9Myp-oL0eKr" TYPE="LVM2_member"
/dev/vdb: UUID="607b9d47-9280-433d-a233-0f40f060ec51" TYPE="ext4"
/dev/mapper/cl-root: UUID="88a37ff5-eaba-4358-80a7-119edf6d30a7" TYPE="xfs"
/dev/mapper/cl-swap: UUID="4a2d0c5c-33f9-46d3-b8e7-4e8c53d562ce" TYPE="swap"
/dev/loop0: UUID="e7a6c25e-d482-4082-bc7d-a845fd2aef17" TYPE="xfs"
/dev/mapper/docker-253:0-12995325-pool: UUID="e7a6c25e-d482-4082-bc7d-
 a845fd2aef17" TYPE="xfs"

In this example, we used the /dev/vdb block device to create the NFS storage filesystem. You can see in the output that our UUID is “607b9d47-9280-433d-a233-0f40f060ec51”. Make a note of the UUID for your device; you’ll need it in the next section.
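
If you want to grab just the UUID without scanning the full blkid output, blkid can print that single field directly. This is a small convenience sketch; it assumes your second disk is /dev/vdb, as in this example, so substitute your own device name if it differs:

# blkid -s UUID -o value /dev/vdb
607b9d47-9280-433d-a233-0f40f060ec51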

The next step is to configure your server to automatically mount the volume correctly when it boots up.

B.3.3. Editing /etc/fstab to include your volume

On a Linux server, /etc/fstab is the configuration file that lists all the filesystems and partitions that should be mounted automatically when the server boots up, along with their mount options. The following listing shows an example /etc/fstab file; the file on your system should look similar.

Listing B.1. An example /etc/fstab configuration file
#
# /etc/fstab
# Created by anaconda on Fri May 12 19:39:58 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=bdda3896-5dbc-4822-b008-78bba4898341 /boot                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0

Each mountpoint in /etc/fstab has several parameters. They’re as follows, from left to right:

  • Device to be mounted—In this case, you'll use the UUID that you noted earlier.
  • Mount point for the block device—This is the /var/nfs-data directory that you created earlier.
  • Type of filesystem—This is ext4 for your new line in /etc/fstab.

The rest of the options are beyond the scope of this appendix. You can use defaults 0 0.

In this example, the following line was added to the end of /etc/fstab:

UUID=607b9d47-9280-433d-a233-0f40f060ec51 /var/nfs-data    ext4     defaults 0 0
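
If you prefer to build that line from the shell instead of opening an editor, the following sketch combines the blkid query shown earlier with a redirect that appends to /etc/fstab. It assumes /dev/vdb is your storage disk; whichever way you add the line, double-check it afterward, because a bad entry in /etc/fstab can interfere with booting:

# UUID=$(blkid -s UUID -o value /dev/vdb)
# echo "UUID=${UUID} /var/nfs-data    ext4     defaults 0 0" >> /etc/fstab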

B.3.4. Activating your new mount point

After adding your new line to /etc/fstab, you can use the mount -a command to have the server re-read /etc/fstab and mount anything that isn't already mounted. After that completes, you can make sure the new filesystem is mounted properly by running the mount command with no additional parameters. Following are examples of these commands and their output:

# mount -a
# mount
...
/dev/mapper/cl-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs
 (rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/vda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/0 type tmpfs
 (rw,nosuid,nodev,relatime,seclabel,size=388192k,mode=700)
/dev/vdb on /var/nfs-data type ext4
 (rw,relatime,seclabel,data=ordered)                        1

  • 1 /dev/vdb is mounted at /var/nfs-data, just like we want.

At this point, the filesystem is ready to go. The next step is to configure NFS to share /var/nfs-data over the network.

B.4. Configuring NFS

Because several examples in this book require NFS storage, you’ll need to export five different NFS volumes. In NFS, an exported volume is a unique directory specified in the /etc/exports configuration file. You need to create these directories in /var/nfs-data. You can create them all with a single command, as follows:

# mkdir -p /var/nfs-data/{pv01,pv02,pv03,pv04,pv05}

After creating your export directories, the next step is to add them to your NFS server’s configuration.

By default, the /etc/exports configuration file is empty. You’ll edit this file to add all the volumes you want to export, along with their permissions, as shown in the following listing.

Listing B.2. Configuration to add to /etc/exports for your cluster
/var/nfs-data/pv01 *(rw,root_squash)
/var/nfs-data/pv02 *(rw,root_squash)
/var/nfs-data/pv03 *(rw,root_squash)
/var/nfs-data/pv04 *(rw,root_squash)
/var/nfs-data/pv05 *(rw,root_squash)

Looking at each line in this file from left to right, let’s break down what the configuration means for each export:

  • Directory to be exported by NFS—One entry for each of the directories you just created.
  • Servers allowed to connect to this NFS share—The asterisk allows any server to access these shares (a more restrictive example follows this list).
  • Mount permissions—These options are in parentheses. For these exports, you'll allow read-write (rw) access and squash root access, meaning that requests from a client's root user are mapped to the anonymous nfsnobody user (root_squash).
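
The asterisk keeps things simple for a lab cluster, but the client field in /etc/exports also accepts hostnames, IP addresses, and subnets. As an illustration only (the subnet shown is hypothetical and not part of this setup), a more restrictive entry for the first volume could look like this:

/var/nfs-data/pv01 192.168.122.0/24(rw,root_squash)

For the examples in this book, the wide-open entries in listing B.2 work fine.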

Because you aren’t allowing the root user to mount any of these NFS volumes, you need to make sure the permissions on the directories are correct.

B.4.1. Setting ownership of the mountpoint

OpenShift will connect to the NFS shares as the nfsnobody user: a special account that NFS servers map client requests to when root access is squashed. You can use the chown and chmod commands to set the ownership of /var/nfs-data so that only the nfsnobody user and group can access the directory. After setting the proper ownership and permissions, you can confirm them:

# chown -R nfsnobody.nfsnobody /var/nfs-data/                 1
# chmod -R 0770 /var/nfs-data/                                2
# ls -al /var/nfs-data/                                       3
total 24
drwxrwx---.  7 nfsnobody nfsnobody 4096 Jun 17 21:27 .
drwxr-xr-x. 20 root      root       283 Jun 17 01:13 ..
drwxrwx---.  2 nfsnobody nfsnobody 4096 Jun 17 21:16 pv01
drwxrwx---.  2 nfsnobody nfsnobody 4096 Jun 17 21:16 pv02
drwxrwx---.  2 nfsnobody nfsnobody 4096 Jun 17 21:27 pv03
drwxrwx---.  2 nfsnobody nfsnobody 4096 Jun 17 21:27 pv04
drwxrwx---.  2 nfsnobody nfsnobody 4096 Jun 17 21:27 pv05

  • 1 Sets ownership to nfsnobody, using the -R option to act on the directory recursively
  • 2 Sets the mode so that only the nfsnobody user and group can access the directory, using the -R option again to act recursively
  • 3 Confirms that the ownership and permissions for /var/nfs-data are correct

Because NFS is a filesystem served over a network, you need to make sure the network firewall on your master server will allow the NFS traffic through. This is covered in the next section.

B.5. Setting firewall rules to allow NFS traffic

You’ll be using NFS version 4 (NFSv4) to connect to these exported volumes. This version of the NFS protocol requires TCP port 2049 to be open. You can check that status using the following command:

# iptables -L -v -n | grep 2049
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0
     0.0.0.0/0            state NEW tcp dpt:2049

If you don’t get any output from this command, you can add a rule to your firewall using the following iptables command:

# iptables -I INPUT -p tcp --dport 2049 -j ACCEPT

After running this command, you can rerun the previous iptables -L check, and you should see the rule in the output. If you do, then you've configured your firewall correctly.

The last thing you need to do for your network configuration is to save your new settings. You do so using the following service command in Linux:

# service iptables save
Note

The default firewall utility for CentOS and RHEL 7 is firewalld. OpenShift is still working to integrate completely with this tool. Currently, the OpenShift installer disables firewalld. For our example, because we’re using the OpenShift master as our NFS server, we’re using the older iptables commands and the service command to save our firewall rules. If you’re using a different server, you can set up NFS using firewalld.
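
If your NFS server is a separate host where firewalld is active, the equivalent change is to open firewalld's predefined nfs service, which covers TCP port 2049. The following is a minimal sketch for that standalone-server case, not something to run on the OpenShift master:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --reload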

With this completed, the last thing to do is to enable and start the NFS services.

B.6. Enabling and starting NFS

What we call NFS is actually a collection of four services that you need to enable and start:

  • rpcbind—NFS uses the RPC protocol to transfer data.
  • nfs-server—The NFS server service.
  • nfs-lock—Handles file locking for NFS volumes.
  • nfs-idmap—Handles user and group mapping for NFS volumes.
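
If you aren't sure which of these services are already enabled on your server, you can check them with systemctl before deciding which of the commands in the next section you need. This quick check is optional; each service reports a state such as enabled, disabled, or static:

# for i in rpcbind nfs-server nfs-lock nfs-idmap;do systemctl is-enabled $i;done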

B.6.1. Starting NFS services

If you’re using the OpenShift master as your NFS server, these services are already enabled and turned on. In that case, you need to restart the services, using the following command that loops through all the services that make NFS work properly:

for i in rpcbind nfs-server nfs-lock nfs-idmap;do systemctl restart $i;done

If you’re using another server to host your NFS server, enable these services and start them using the following command:

for i in rpcbind nfs-server nfs-lock nfs-idmap;do systemctl enable $i;systemctl start $i;done

Now you can check your system to make sure your new volume is exported.

B.6.2. Confirming that your NFS volume is exported and ready to use

To see all the volumes exported by NFS in Linux, you can use the exportfs command-line tool. On the OpenShift master, you’ll see several exported volumes, similar to the following example. On an independent server, you’ll see only the volumes you exported in the /var/nfs-data directory:

# exportfs
/var/nfs-data/pv01          1
        <world>             1
/var/nfs-data/pv02          1
        <world>             1
/var/nfs-data/pv03          1
        <world>             1
/var/nfs-data/pv04          1
        <world>             1
/var/nfs-data/pv05          1
        <world>
/exports/registry
        <world>
/exports/metrics
        <world>
/exports/logging-es
        <world>
/exports/logging-es-ops
        <world>

  • 1 Indicates that the volume is exported and ready for use as an NFS volume. The <world> notation means any host can access the volume, just as you configured in /etc/exports.
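
As one final cross-check, the showmount tool (also part of nfs-utils) queries the export list through the MOUNT protocol. Run on the NFS server itself, it should list the same directories that exportfs reported; querying it from another host would also require the rpcbind and mountd ports to be open, which this appendix doesn't cover:

# showmount -e

The output lists each exported directory along with the clients allowed to mount it (an asterisk, in this case).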

And that’s it! You now have an NFS volume that’s ready to be used by OpenShift to provide persistent storage to your containerized applications.
