Chapter 16
Looking at Access and Authentication Methods

  • Objective 3.2: Given a scenario, configure and implement appropriate access and authentication methods.

Part of properly securing a system and its data involves providing appropriate access and authentication methods. There are many tools available to provide these services. However, it is crucial to understand how they work and how to configure them appropriately.

We’ll take a look at the various authentication and access methods, where their configuration files are stored, and how to properly configure them. We’ll cover some important encryption and authentication topics as well.

Getting to Know PAM

Pluggable Authentication Modules (PAMs) provide centralized authentication services for Linux and applications. PAM was originally developed by Sun Microsystems, and the Linux-PAM project started in 1997. Today, PAM is used on virtually all Linux distributions.

Programs that wish to use PAM services are compiled with the PAM library, libpam.so, and have an associated PAM configuration file. Applications that use PAM are called “PAM-aware.” You can quickly determine if a program is PAM-aware via the ldd command. A snipped example is shown in Listing 16.1.

Listing 16.1: Using ldd to determine if an application is PAM-aware

# ldd /bin/login | grep libpam.so
        libpam.so.0 => /lib64/libpam.so.0 (0x00007fbf2ce71000)
#

In Listing 16.1, the ldd utility is employed to display all the program’s shared library dependencies. The display output is piped into grep to search for only the PAM libpam.so library. In this case, the application is compiled with the PAM library. Besides being compiled with the PAM libpam.so library, the application needs to have a configuration file to use PAM.

Exploring PAM Configuration Files

PAM configuration files are located in the /etc/pam.d/ directory. Listing 16.2 shows this directory’s files on a CentOS distribution.

Listing 16.2: Viewing the /etc/pam.d/ directory’s contents

$ ls /etc/pam.d/
atd                     gdm-pin           postlogin-ac       su
chfn                    gdm-smartcard     ppp                sudo
chsh                    ksu               remote             sudo-i
config-util             liveinst          runuser            su-l
crond                   login             runuser-l          system-auth
cups                    other             setup              system-auth-ac
fingerprint-auth        passwd            smartcard-auth     systemd-user
fingerprint-auth-ac     password-auth     smartcard-auth-ac  vlock
gdm-autologin           password-auth-ac  smtp               vmtoolsd
gdm-fingerprint         pluto             smtp.postfix       xrdp-sesman
gdm-launch-environment  polkit-1          sshd               xserver
gdm-password            postlogin         sssd-shadowutils
$

Notice in Listing 16.2 that there is a login configuration file. This file is displayed snipped in Listing 16.3.

Listing 16.3: Viewing the /etc/pam.d/login file’s contents

$ cat /etc/pam.d/login
#%PAM-1.0
[…]
auth       include      postlogin
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
[…]
session    optional     pam_keyinit.so force revoke
[…]
$

The records in a PAM configuration file have a specific syntax. That syntax is as follows:

TYPE CONTROL-FLAG PAM-MODULE [MODULE-OPTIONS]

The TYPE, sometimes called a context or module interface, designates a particular PAM service type. The four PAM service types are shown in Table 16.1.

Table 16.1 The TYPE in /etc/pam.d/ configuration file records

Interface Service Description
account Implements account validation services, such as enforcing time of day restrictions as well as determining if the account has expired
auth Provides account authentication management services, such as asking for a password and verifying that the password is correct
password Manages account passwords, such as enforcing minimum password lengths and limiting incorrect password entry attempts
session Provides authenticated account session management for session start and session end, such as logging when the session began and ended as well as mounting the account’s home directory, if needed

The PAM-MODULE portion of the /etc/pam.d/ configuration file record is simply the file name of the module that will be doing the work. For example, pam_nologin.so is shown in the /etc/pam.d/login configuration file, back in Listing 16.3. Additional module options can be included after the module’s file name.

The designated PAM-MODULEs are called in the order they are listed within the PAM configuration file; this ordering is called the module stack. Each PAM-MODULE returns a status code, which is handled via the record’s CONTROL-FLAG setting. Together these status codes and settings create a final status, which is sent to the application. Table 16.2 lists the various control flags and their responses or actions.

Table 16.2 The CONTROL-FLAG settings for /etc/pam.d/ configuration file records

Control Flag Description
include Adds status codes and response ratings from the designated PAM configuration files into the final status.
optional Conditionally adds the module’s status code to the final status. If this is the only record for the PAM service type, it is included. If not, the status code is ignored.
requisite If the module returns a fail status code, a final fail status is immediately returned to the application without running the rest of the modules within the configuration file.
required If the module returns a fail status code, a final fail status will be returned to the application, but only after the rest of the modules within the configuration file run.
substack Forces the included configuration files of a particular type to act together returning a single status code to the main module stack.
sufficient If the module returns a success status code and no preceding stack modules have returned a fail status code, a final success status is immediately returned to the application without running the rest of the modules within the configuration file. If the module returns a fail status code, it is ignored.

The /etc/pam.d/ configuration files’ module stack process of providing a final status is a little confusing. A simplification to help you understand the progression is depicted in Figure 16.1.


Figure 16.1 The PAM module stack process

Using Figure 16.1 as a guide, imagine the application subject (user) needs authentication to access the system. The appropriate /etc/pam.d/ configuration file is employed. Going through the authentication module stack, the user passes through the various security checkpoints. At each checkpoint, a guard (PAM module) checks a different requirement, determines whether or not the user has the required authentication, and issues a fail or success card. The final guard reviews the status cards along with their control flags listed on his clipboard. This guard determines whether or not the subject may proceed through the “System Access” doorway. Of course, keep in mind that if any of the checkpoints are listed as requisite, and the user fails that checkpoint, he would be immediately tossed out.

Enforcing Strong Passwords

When a password is modified via the passwd command, PAM is employed. The following PAM modules can help to enforce strong passwords:

  • pam_unix.so
  • pam_pwhistory.so
  • pam_pwquality.so

Typically you’ll find the pam_pwquality.so module installed by default. However, for Ubuntu, you will need to manually install it. Use an account with super user privileges and type sudo apt-get install libpam-pwquality at the command line.

The pam_unix.so module performs authentication using account and password data stored in the /etc/passwd and /etc/shadow files.

The pam_pwhistory.so module checks a user’s newly entered password against a history database to prevent a user from reusing an old password. The password history file, /etc/security/opasswd, is locked down. Passwords are also stored salted and hashed, using the same hashing algorithm employed for passwords stored in the /etc/shadow file.

To use the pam_pwhistory.so module, you must modify one of the /etc/pam.d/ configuration files. Along with specifying the password type and the module name, you can set one or more of the MODULE-OPTIONS listed in Table 16.3.

Table 16.3 The MODULE-OPTIONS for password reuse prevention

Module Option Description
enforce_for_root If this option is used, the root account must have its password checked for reuse when resetting its password.
remember=N Designates that N passwords will be remembered. The default is 10, and the maximum is 400.
retry=N Prompts the user up to N times for a new password before returning with an error. The default is 1.
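
For reference, a password history record combining a couple of these options might look like the following sketch (the option values here are purely illustrative, not recommendations):

password    required     pam_pwhistory.so remember=5 enforce_for_root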

For Ubuntu, you need to put this configuration information in the /etc/pam.d/common-password and /etc/pam.d/common-auth files. For other distributions, you put this configuration in the system’s default /etc/pam.d/ files, password-auth and system-auth.

If you directly modify the /etc/pam.d/password-auth and system-auth files, they can be overwritten by the authconfig utility. You can avoid this by creating a local file instead, such as password-auth-local. Red Hat has an excellent description of how to accomplish this task. Just use your favorite search engine and type in Hardening Your System with Tools and Services Red Hat to find this information.

A snipped example of the newly modified CentOS /etc/pam.d/password-auth file is shown in Listing 16.4.

Listing 16.4: Viewing the modified /etc/pam.d/password-auth file

# grep password /etc/pam.d/password-auth
[…]
password    required      pam_pwhistory.so
password    sufficient    pam_unix.so […] use_authtok
[…]
#

In Listing 16.4, the grep command is employed to search for PAM password type records. The newly added pam_pwhistory.so module record uses a required control flag and no options. Note that the next record is for the pam_unix.so module and it uses the use_authtok option, which tells the module to use the password already entered instead of prompting for it again. Typically, it is best to place the password history record directly above this pam_unix.so record.

The pam_pwhistory.so module is not compatible with Kerberos and LDAP. Before employing it, be sure to review its man pages.

Now that password history is being enforced, you can test it by trying to reset your password to the current password. A snipped example is shown in Listing 16.5.

Listing 16.5: Trying to reuse an old password after password history is enforced

$ passwd
Changing password for user Christine.
Changing password for Christine.
(current) UNIX password:
New password:
BAD PASSWORD: The password is the same as the old one
[…]
passwd: Have exhausted maximum number of retries for service
$

Using pam_pwquality.so, you can enforce rules for new passwords, such as setting a minimum password length. You can configure needed directives within the /etc/security/pwquality.conf file or pass them as module options. A snipped example of the file is shown in Listing 16.6.

Listing 16.6: Viewing the /etc/security/pwquality.conf file’s contents

$ cat /etc/security/pwquality.conf
# Configuration for systemwide password quality limits
[…]
# difok = 5
[…]
# minlen = 9
[…]
# dcredit = 1
[…]
$

The pam_pwquality.so module replaces the older, deprecated pam_cracklib.so module. The two modules act similarly, so if you are familiar with the deprecated pam_cracklib.so, the pam_pwquality.so configuration will look familiar.

There are several password quality directives you can set within the pwquality.conf file. Table 16.4 describes the more common ones.

Table 16.4 Common password quality directives in the pwquality.conf file

Directive Description
minlen = N Enforces the minimum number N of characters for a new password. (Default is 9 and minimum allowed is 6.) The *credit settings affect this directive as well.
dcredit = N If N is positive, adds N credits to password’s minlen setting for any included digits. If N is negative, N digits must be in the password. (Default is 1.)
ucredit = N If N is positive, adds N credits to password’s minlen setting for any included uppercase characters. If N is negative, N uppercase characters must be in the password. (Default is 1.)
lcredit = N If N is positive, adds N credits to password’s minlen setting for any included lowercase characters. If N is negative, N lowercase characters must be in the password. (Default is 1.)
ocredit = N If N is positive, adds N credits to password’s minlen setting for any included other characters (not letters or numbers). If N is negative, N other characters must be in the password. (Default is 1.)
difok = N Enforces the number N of characters that must be different in new password.

To help you understand Table 16.4’s credit directives, let’s focus on the dcredit setting. If you set dcredit = -3, three digits must be in the new password. If you set dcredit = 3, then having three digits in the new password reduces the required password length (minlen) by three.
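
As a quick sketch (the values are arbitrary examples, not defaults), a pwquality.conf that enforces a 12-character minimum, requires at least one digit and one uppercase character, and demands four changed characters could contain the following directives:

minlen = 12
dcredit = -1
ucredit = -1
difok = 4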

Once you have the pwquality.conf file directives completed, you’ll need to enable the pam_pwquality.so module within the proper /etc/pam.d/ configuration file. This is similar to how the pam_pwhistory.so module is handled.

Locking Out Accounts

A brute-force attack occurs when a malicious user attempts to gain system access by repeatedly trying different passwords for a particular system account. To prevent these attacks, you can lock out a user account after a certain number of failed attempts.

Be very careful when modifying PAM configuration files for user account lockout. If they are configured incorrectly, you could lock out all accounts, including your own and/or the root account.

The pam_tally2.so and pam_faillock.so modules allow you to implement account lockout. Which one you choose depends upon your distribution (for example, pam_faillock is not installed by default on Ubuntu) as well as the various module options you wish to employ.

The two modules share three key module options. They are as described in Table 16.5.

Table 16.5 Key pam_tally2.so and pam_faillock.so module options

Module Option Description
deny = N Locks account after N failed password entries. (Default is 3.)
silent Displays no informational messages to user.
unlock_time = N Unlocks a locked account after being locked for N seconds. If this option is not set, an administrator must manually unlock the account.

On a current Ubuntu distribution, it is typically better to use the pam_tally2.so module; keep in mind that on a current CentOS distro it may not work well. Listing 16.7 shows a snipped display of a modified /etc/pam.d/common-auth file that includes this module.

Listing 16.7: Viewing an Ubuntu /etc/pam.d/common-auth file’s contents

$ cat /etc/pam.d/common-auth
auth    required     pam_tally2.so  deny=2 silent
[…]
auth    […]          pam_unix.so nullok_secure
[…]
$

The pam_tally2.so configuration in Listing 16.7 allows only two failed login attempts prior to locking the account. Also, it does not automatically unlock the account after a certain time period.
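
If you would rather have locked accounts unlock themselves after a period of time, you can add the unlock_time option from Table 16.5. A hedged example record follows; the 10-minute lockout value is shown only for illustration:

auth    required     pam_tally2.so  deny=2 unlock_time=600 silent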

On Ubuntu systems, the pam-auth-update utility is involved in managing PAM modules. Before you modify PAM configuration files on an Ubuntu system, it is a good idea to understand how this utility works. Review its man pages for details.

The pam_tally2 command allows you to view failed login attempts. Listing 16.8 shows an example of this on an Ubuntu distribution.

Listing 16.8: Employing the pam_tally2 utility to view login failures

$ sudo pam_tally2
Login   Failures Latest failure     From
user1       4    11/08/19 16:28:14  /dev/pts/1
$

In Listing 16.8, the user1 account has four login attempt failures. Since the pam_tally2.so module’s deny option is set to 2, the account is now locked. You cannot unlock an account that has been locked by PAM via the usermod or passwd utility. Instead, you have to employ the pam_tally2 command and add the -r (or --reset) and -u (or --user) options, as shown in Listing 16.9. This wipes out the login failure tally so that the account is no longer locked out.

Listing 16.9: Using the pam_tally2 utility to reset login failure tallies

$ sudo pam_tally2 -r -u user1
Login           Failures Latest failure     From
user1               4    11/08/19 16:28:14  /dev/pts/1
$
$ sudo pam_tally2
$

The pam_tally2.so module has useful module options in addition to those shown in Table 16.5. Also, the pam_tally2 command has some further helpful switches. These items share a man page. You can review it by typing in man pam_tally2 at the command line.

On a current CentOS distribution, it is typically better to use the pam_faillock.so module. Listing 16.10 shows a snipped display of a modified /etc/pam.d/system-auth file that includes this module.

Listing 16.10: Viewing a CentOS /etc/pam.d/system-auth file’s contents

# cat /etc/pam.d/system-auth
[…]
auth        required      pam_env.so
auth        required      pam_faillock.so preauth silent audit deny=2
auth        required      pam_faildelay.so delay=2000000
auth        sufficient    pam_unix.so nullok try_first_pass
auth        [default=die] pam_faillock.so authfail audit deny=2
auth        sufficient    pam_faillock.so authsucc audit deny=2
[…]
account     required      pam_faillock.so
account     required      pam_unix.so
[…]
#

Notice in Listing 16.10 that there are four pam_faillock.so module records. Within these records are a few options and one control flag that have not yet been covered:

  • preauth: If there have been a large number of failed consecutive authentication attempts, block the user’s access.
  • audit: If a nonexistent user account is entered, log the attempted account name.
  • [default=die]: Returned code treated as a failure. Return to the application immediately.
  • authfail: Record authentication failure into the appropriate user tally file.
  • authsucc: On successful authentication, clear the user’s recorded failures so that only consecutive failed attempts count toward a lockout.

To have pam_faillock.so work correctly, you need to modify the password-auth file as well, adding the exact same records that were added in Listing 16.10. That file is also located in the /etc/pam.d/ directory.

The faillock command allows you to view failed login attempts. Listing 16.11 shows an example of this on a CentOS distribution.

Listing 16.11: Using the faillock utility to view and reset login failure tallies

# faillock
user1:
When                Type  Source   Valid
2018-11-08 17:47:23 TTY   tty2         V
2018-11-08 17:47:31 TTY   tty2         V
#
# ls -F /var/run/faillock
user1
#
# faillock --reset --user user1
#
# faillock
user1:
When                Type  Source   Valid
#

Notice in Listing 16.11 that the faillock utility displays records for each failed login attempt. In this case, since deny=2 is set, the user1 account is locked out. To unlock the account, the faillock command is used again with the appropriate options. Another item to note within Listing 16.11 is the /var/run/faillock directory. When the pam_faillock.so module is configured, each user receives a failed login attempt tally file within this directory. However, the file is not created until a login failure first occurs.


PAM Integration with LDAP

To allow multiple servers to share the same authentication database, many companies use a network authentication system. Microsoft Active Directory is the most popular one used today. However, in the open-source world, Lightweight Directory Access Protocol (LDAP) provides this service, with the favored implementation being the OpenLDAP package. Most Linux distributions include both client and server packages for implementing LDAP in a Linux network environment.

If you are using LDAP on your system, you can integrate it with PAM. The pam_ldap.so module is the primary module for this purpose. It provides authentication and authorization services as well as managing password changes for LDAP. The pam_ldap.so module’s fundamental configuration file is the /etc/ldap.conf file.

You will need to modify the appropriate PAM configuration file(s). This may include manually editing the /etc/pam.d/system-auth file on a CentOS system or using the pam-auth-update utility on an Ubuntu distribution to modify the /etc/pam.d/common-* files. Depending on the system’s distribution, there may be additional configuration activities for integrating PAM with LDAP. See your distribution-specific documentation and/or man pages for more details.
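
The exact records vary by distribution and by the tool used to generate them, but as a rough sketch, pam_ldap.so entries added to a PAM configuration file often look similar to the following (the control flags and module options shown are illustrative, not a drop-in configuration):

auth        sufficient    pam_ldap.so use_first_pass
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
password    sufficient    pam_ldap.so use_authtok
session     optional      pam_ldap.so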

Limiting Root Access

It is best to employ the sudo command (see Chapter 15) to gain super user privileges as opposed to logging into the root user account. Even better is to have the root account disabled for login via its /etc/shadow file record. However, if you absolutely must log in to the root account, you can limit the locations where this is done.

If properly configured, the pam_securetty.so PAM module and the /etc/securetty file are used to restrict root account logins. They do so by limiting root logins only to devices listed in the secure TTY file. A snipped listing of this file on an Ubuntu distro is shown in Listing 16.12.

Listing 16.12: Viewing the /etc/securetty file

$ cat /etc/securetty
# /etc/securetty: list of terminals on which root is allowed to login.
# See securetty(5) and login(1).
[…]
console

# Local X displays […]
:0
:0.0
:0.1
[…]
# Virtual consoles
tty1
tty2
tty3
tty4
[…]
$

To understand the /etc/securetty file records, you need to understand how TTY terminals are represented. When you log in to a virtual console, typically reached by pressing a Ctrl+Alt+Fn key sequence, you are logging into a terminal that is represented by a /dev/tty* file. For example, if you press Ctrl+Alt+F2 and log into the tty2 terminal, that terminal is represented by the /dev/tty2 file. Notice that the /etc/securetty file records in Listing 16.12 only show the virtual console terminal name (e.g., tty4) and not its device file.

If you are in a terminal emulator or logged into a console terminal, you can view your own process’s current terminal by entering tty at the command line.

When you log into the system via its graphical interface, a who or w command’s output will show something similar to :0 in your process’s TTY column. In Listing 16.12, you can find records for those logins as well.

If you then open a terminal emulator program, you are opening a TTY terminal, called a pseudo-TTY, that is represented by a /dev/pts/* file, such as /dev/pts/0. These TTY terminals are not listed within the /etc/securetty file because the user has already logged into the graphical environment.
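
As an illustration (not a distribution default), a locked-down /etc/securetty that permits root logins only at the console and the first virtual console could be trimmed to just these entries:

console
tty1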

If your system employs the pam_securetty.so module but there is no /etc/securetty file, the root user can access the system via any device, such as a console terminal or network interface. This is considered an insecure environment.

The pam_securetty.so module is typically placed within either the /etc/pam.d/login and/or the /etc/pam.d/remote configuration files. An example of this is shown snipped on an Ubuntu distribution in Listing 16.13.

Listing 16.13: Finding the files that use the pam_securetty.so module

$ grep pam_securetty /etc/pam.d/*
/etc/pam.d/login:auth […] pam_securetty.so
$

While this configuration will disable root account logins at tty* and :0 devices, it does not disable all root logins. The root account can still be accessed via SSH utilities, such as ssh and scp. (SSH is covered later in this chapter.) In addition, the su and sudo commands (covered in Chapter 15) are not hampered from accessing the root account by this PAM configuration.

On this Ubuntu distribution, only the login PAM configuration file includes the pam_securetty.so module. Notice in Listing 16.13 that the PAM service type used for this module is auth.

Exploring PKI Concepts

The primary purpose of cryptography is to encode data in order to hide it or keep it private. In cryptography, plain text (text that can be read by humans or machines) is turned into ciphertext (text that cannot be read by humans or machines) via cryptographic algorithms. Turning plain text into ciphertext is called encryption. Converting text from ciphertext back into plain text is called decryption.

Cryptographic algorithms use special data called keys for encrypting and decrypting; they are also called cipher keys. When encrypted data is shared with others, some of these keys must also be shared. Problems ensue if a key from a trustworthy source is snatched and replaced with a key from a nefarious source. The public key infrastructure (PKI) helps to protect key integrity. The PKI is a structure built from a team of components that work together to prove authenticity and validation of keys as well as the people or devices that use them.

Getting Certificates

A few members of the PKI team are the certificate authority (CA) structure and CA-issued digital certificates. After verifying a person’s identity, a CA issues a digital certificate to the requesting person. The digital certificate provides identification proof along with an embedded key, which now belongs to the requester. The certificate holder can now use the certificate’s key to encrypt data and sign it using the certificate. This provides authenticity and validation for those that will decrypt the data, especially if it is transmitted over a network.

Digital certificates issued by a CA take both effort and money to obtain. If you are simply developing a new application, are in its testing phase, or are practicing for a certification exam, you can generate and sign your own certificate. This type of certificate is called a self-signed digital certificate. While self-signed certificates are useful in certain situations, they should never be used in a production environment.

Discovering Key Concepts

It is critical to understand cipher keys and their role in the encryption/decryption process. Cipher keys come in two flavors—private and public/private.

Private Keys Symmetric keys, also called private or secret keys, encrypt data using a cryptographic algorithm and a single key. Plain text is both encrypted and decrypted using the same key, and it is typically protected by a password called a passphrase. Symmetric key cryptography is very fast. Unfortunately, if you need others to decrypt the data, you have to share the private key, which is its primary disadvantage.

Public/Private Key Pairs Asymmetric keys, also called public/private key pairs, encrypt data using a cryptographic algorithm and two keys. Typically the public key is used to encrypt the data and the private key decrypts the data. The private key can be protected with a passphrase and is kept secret. The public key of the pair is meant to be shared.
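
If you want to experiment with a public/private key pair outside of any particular application, the openssl utility can generate one and encrypt a small file with the public key. The following is a minimal sketch; the file names are made up for the example:

$ openssl genpkey -algorithm RSA -out private.pem
$ openssl rsa -in private.pem -pubout -out public.pem
$ openssl pkeyutl -encrypt -pubin -inkey public.pem -in note.txt -out note.enc
$ openssl pkeyutl -decrypt -inkey private.pem -in note.enc -out note-decrypted.txt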

Asymmetric keys are used by system users as well as many applications, such as SSH. Figure 16.2 provides a scenario of using a public/private key pair between two people.


Figure 16.2 Asymmetric encryption example

Notice in Figure 16.2 that in order for Bob to encrypt data (a message in this case) for Helen, he must use her public key. Helen in turn uses her private key to decrypt the data. However, problems occur if Bob is not sure that he is really getting Helen’s public key. He may be getting a public key from a nefarious user named Evelyn and accidentally send his encrypted message to her. This is a man-in-the-middle attack. Digital signatures, which are covered later, help in this situation.

Securing Data

An important concept in PKI and cryptography is hashing. Hashing uses a one-way mathematical algorithm that turns plain text into a fixed-length ciphertext. Because it is one way, you cannot “de-hash” a hashed ciphertext. The ciphertext created by hashing is called a message digest, hash, hash value, fingerprint, or signature.

The beauty of a cryptographic message digest is that it can be used in data comparison. For example, if hashing produces the exact same message digest for plain-text FileA and for plain-text FileB, then both files contain the exact same data. This type of hash is often used in cyber-forensics.

Hashing is useful for things like making sure a large downloaded file was not corrupted when it was being transferred. However, cryptographic hashing must use an algorithm that is collision free. In other words, the hashing algorithm cannot create the same message digest for two different inputs. Some older hash algorithms, such as MD5, are not collision free.
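
For instance, many projects publish a checksum file next to a large download so you can verify it after the transfer. A hedged example using sha256sum (the file names are hypothetical):

$ sha256sum someimage.iso
$ sha256sum -c someimage.iso.sha256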

Be aware that simple message digests, called non-salted and non-keyed message digests, are created only using the plaintext file as input. This hash can be strengthened by adding salt, which is random data added along with the input file to protect the hash from certain malicious attacks. A salted hash is used in the /etc/shadow file to protect passwords.

A keyed message digest is created using the plaintext file along with a private key. This cryptographic hash type is strong against multiple malicious attacks and often employed in Linux applications, such as SSH.

Signing Transmissions

Another practical implementation of hashing is in digital signatures. A digital signature is a cryptographic token that provides authentication and data verification. It is simply a message digest of the original plain-text data, which is then encrypted with a user’s private key and sent along with the ciphertext.

The ciphertext receiver decrypts the digital signature with the sender’s public key so the original message digest is available. The receiver also decrypts the ciphertext and then hashes its plain-text data. Once the new message digest is created, the data receiver can compare the new message digest to the sent message digest. If they match, the digital signature is authenticated, which means the encrypted data did come from the sender. Also, it indicates the data was not modified in transmission.
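
One way to see this signing-and-verifying round trip at the command line is with the openssl dgst command, reusing a key pair like the one sketched earlier (file names again are only illustrative):

$ openssl dgst -sha256 -sign private.pem -out note.sig note.txt
$ openssl dgst -sha256 -verify public.pem -signature note.sig note.txt
Verified OK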

A malicious individual can intercept a signed transmission, replace the ciphertext with new ciphertext, and add a new digital signature for that data. Signing transmissions alone does not protect from a man-in-the-middle attack. It is best to employ this method along with digital certificates and other security layers.

Using SSH

When you connect over a network to a remote server, if it is not via an encrypted method, network sniffers can view the data being sent and received. Secure Shell (SSH) has resolved this problem by providing an encrypted means for communication. It is the de facto standard software used by those wishing to send data securely to/from remote systems.

SSH employs public/private key pairs (asymmetric) for its encryption. When an SSH connection is being established, the remote server sends its public host key to the client so the client can verify the server’s identity, and the two systems then negotiate the keys used to encrypt the session.

Exploring Basic SSH Concepts

You’ll typically find OpenSSH (www.openssh.com) installed by default on most distributions. However, if for some reason you are unable to use basic SSH services, you may want to check whether the needed OpenSSH packages are installed (managing packages was covered in Chapter 13). Table 16.6 shows the distributions used by this book and their basic OpenSSH service package names.

Table 16.6 Various distros’ OpenSSH package names

Distribution The OpenSSH Package Names
CentOS openssh, openssh-clients, openssh-server
Fedora openssh, openssh-clients, openssh-server
openSUSE openssh
Ubuntu openssh-server, openssh-client

To create a secure OpenSSH connection between two systems, use the ssh command. The basic syntax is as follows:

ssh [options] username@hostname

 

If you attempt to use the ssh command and get a no route to host message, first check if the sshd daemon is running. On a systemd system, the command to use with super user privileges is systemctl status sshd. If the daemon is running, check your firewall settings, which are covered in Chapter 18.

For a successful encrypted connection, both systems (client and remote) must have the OpenSSH software installed, and the remote system must have the sshd daemon running. A snipped example is shown in Listing 16.14, connecting from a CentOS system to a remote openSUSE Linux server.

Listing 16.14: Using ssh to connect to a remote system

$ ssh [email protected]
The authenticity of host '192.168.0.105 (192.168.0.105)' can't be established.
ECDSA key fingerprint is SHA256:BnaCbm+ensyrkflKk1rRSVwxHi4NrBWOOSOdU+14m7w.
ECDSA key fingerprint is MD5:25:36:60:b7:99:44:d7:74:1c:95:d5:84:55:6a:62:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.105' (ECDSA) to the list of known hosts.
Password:
[…]
Have a lot of fun...
Christine@linux-1yd3:~> ip addr show | grep 192.168.0.105
    inet 192.168.0.105/24 […] dynamic eth1
Christine@linux-1yd3:~>
Christine@linux-1yd3:~> exit
logout
Connection to 192.168.0.105 closed.
$
$ ls .ssh
known_hosts
$

In Listing 16.14, the ssh command uses no options, includes the remote system account username, and uses the remote system’s IPv4 address instead of its hostname. Note that you do not have to use the remote system account username if the local account name is identical. However, in this case, you do have to enter the remote account’s password to gain access to the remote system.

The OpenSSH application keeps track of any previously connected hosts in the ~/.ssh/known_hosts file. This data contains the remote servers’ public keys.

images The ~/ symbol combination represents a user’s home directory. You may also see in documentation $HOME as the representation. Therefore, to generically represent any user’s home directory that contains a hidden subdirectory .ssh/ and the known_hosts file, it is written as ~/.ssh/known_hosts or $HOME/.ssh/known_hosts.

If you have not used ssh to log in to a particular remote host in the past, you’ll get a scary looking message like the one shown in Listing 16.14. The message just lets you know that this particular remote host is not in the known_hosts file. When you type yes at the message’s prompt, it is added to the collective.

images If you have previously connected to the remote server and you get a warning message that says WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED, pay attention. It’s possible that the remote server’s public key has changed. However, it may also indicate that the remote system is being spoofed or has been compromised by a malicious user.

The rsync utility, which was covered in Chapter 3, can employ SSH to quickly copy files to a remote system over an encrypted tunnel. To use OpenSSH with the rsync command, add the username@hostname before the destination file’s location. An example is shown in Listing 16.15.

Listing 16.15: Using rsync to securely transfer a file over SSH

$ ls -sh Project4x.tar
40K Project4x.tar
$
$ rsync Project4x.tar [email protected]:~
Password:
$

In Listing 16.15, the Project4x.tar file is sent to a remote system using the rsync command and OpenSSH. Notice that the remote system’s username and IP address are followed by a colon (:). The colon designates that the file is being transferred to a remote system. If you did not add the colon, the rsync command would not transfer the file; it would simply copy the file locally to a new file whose name begins with Christine@ and includes the IP address.

After the colon, the file’s directory destination is designated. The ~ symbol indicates to place the file in the user’s home directory. You could also give the file a new name, if desired.

You can also use the ssh command to send commands to a remote system. Just add the command, between quotation marks, to the ssh command’s end. An example is shown in Listing 16.16.

Listing 16.16: Using ssh to send a command to a remote system

$ ssh [email protected] "ls -sh Project4x.tar"
Password:
40K Project4x.tar
$

In Listing 16.16, the command checks if our file was properly transferred to the remote system. The Project4x.tar file was successfully moved.

Configuring SSH

It’s a good idea to review the various OpenSSH configuration files and their directives. Ensuring that your encrypted connection is properly configured is critical for securing remote system communications. Table 16.7 lists the primary OpenSSH configuration files.

Table 16.7 Primary OpenSSH configuration files

Configuration File Description
~/.ssh/config Contains OpenSSH client configurations. May be overridden by ssh command options.
/etc/ssh/ssh_config Contains OpenSSH client configurations. May be overridden by ssh command options or settings in the ~/.ssh/config file.
/etc/ssh/sshd_config Contains the OpenSSH daemon (sshd) configurations.

If you need to make SSH configuration changes, it is essential to know which configuration file(s) to modify. The following guidelines can help:

  • For an individual user’s connections to a remote system, create and/or modify the client side’s ~/.ssh/config file.
  • For every user’s connection to a remote system, create and modify the client side’s /etc/ssh/ssh_config file.
  • For incoming SSH connection requests, modify the /etc/ssh/sshd_config file on the server side.

Keep in mind that in order for an SSH client connection to be successful, besides proper authentication, the client and remote server’s SSH configuration must be compatible.

There are several OpenSSH configuration directives. You can peruse them all via the man pages for the ssh_config and sshd_config files. However, there are a few vital directives for the sshd_config file:

  • AllowTcpForwarding: Permits SSH port forwarding. (See Chapter 8.)
  • X11Forwarding: Permits X11 forwarding (the corresponding client-side directive is ForwardX11). (See Chapter 8.)
  • PermitRootLogin: Permits the root user to log in through an SSH connection. Typically, should be set to no.
  • Port: Sets the port number the OpenSSH daemon (sshd) listens on for incoming connection requests. (Default is 22.)

An example of why you might change the client’s ssh_config or ~/.ssh/config file is when the remote system’s SSH port is modified in the sshd_config file. In this case, if the client-side configuration files were not changed to match this new port, the remote user would have to modify their ssh command’s options. An example of this is shown snipped in Listing 16.17. In this listing, the remote Ubuntu server has OpenSSH listening on port 1138, instead of the default port 22, and the user must use the -p option with the ssh command to reach the remote server.

Listing 16.17: Using ssh to connect to a non-default port on a remote system

$ ssh -p 1138 192.168.0.104
[…]
[email protected]'s password:
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
[…]
Christine@Ubuntu1804:~$
Christine@Ubuntu1804:~$ ip addr show | grep 192.168.0.104
    inet 192.168.0.104/24 […]
Christine@Ubuntu1804:~$
Christine@Ubuntu1804:~$ exit
logout
Connection to 192.168.0.104 closed.
$

To relieve the OpenSSH client users of this trouble, create or modify the ~/.ssh/config file for individual users, or for all client users, modify the /etc/ssh/ssh_config file. Set Port to 1138 within the configuration file. This makes it easier on both the remote users and the system administrator.
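
For instance, a ~/.ssh/config entry matching the Listing 16.17 scenario might look like the following; the Host block syntax is standard ssh_config format, and the address and port come from that listing:

Host 192.168.0.104
    Port 1138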

Often system admins will change the OpenSSH default port from port 22 to another port. On public-facing servers, this port is often targeted by malicious attackers. However, if you change the OpenSSH port on a system using SELinux, you’ll need to let SELinux know about the change. The needed change is often documented at the top of the /etc/ssh/sshd_config file on SELinux systems.

Generating SSH Keys

Typically, OpenSSH will search for its system’s public/private key pairs. If they are not found, OpenSSH automatically generates them. These key pairs, also called host keys, are stored in files within the /etc/ssh/ directory. Listing 16.18 shows key files on a Fedora distribution.

Listing 16.18: Looking at OpenSSH key files on a Fedora system

$ ls -1 /etc/ssh/*key*
/etc/ssh/ssh_host_ecdsa_key
/etc/ssh/ssh_host_ecdsa_key.pub
/etc/ssh/ssh_host_ed25519_key
/etc/ssh/ssh_host_ed25519_key.pub
/etc/ssh/ssh_host_rsa_key
/etc/ssh/ssh_host_rsa_key.pub
$

In Listing 16.18, both private and public key files are shown. The public key files end in the .pub file name extension, while the private keys have no file name extension. The file names follow this standard:

ssh_host_KeyType_key

The key file name’s KeyType corresponds to the digital signature algorithm used in the key’s creation. The different types you may see on your system are as follows:

  • dsa
  • rsa
  • ecdsa
  • ed25519

It is critical that the private key files are properly protected. Private key files should have a 0640 or 0600 (octal) permission setting and be root owned. However, public key files need to be world readable. File permissions were covered in Chapter 15.

There may be times you need to manually generate these keys or create new ones. In order to do so, the ssh-keygen utility is employed. In Listing 16.19, a snipped example of using this utility is shown on a Fedora system.

Listing 16.19: Using ssh-keygen to create new public/private key pair

$ sudo ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
Generating public/private rsa key pair.
/etc/ssh/ssh_host_rsa_key already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
[…]
$

The ssh-keygen utility has several options. For the command in Listing 16.19, only two are employed. The -t option sets the KeyType, which is rsa in this example. The -f switch designates the private key file in which to store the key. The public key is stored in a file with the same name, but with the .pub file extension added. Notice that this command asks for a passphrase, which is associated with the private key.

Authenticating with SSH Keys

Entering the password for every command employing SSH can be tiresome. However, you can use keys instead of a password to authenticate. A few steps are needed to set up this authentication method:

  1. Log into the SSH client system.
  2. Generate an SSH ID key pair.
  3. Securely transfer the public SSH ID key to the SSH server computer.
  4. Log into the SSH server system.
  5. Add the public SSH ID key to the ~/.ssh/authorized_keys file on the server system.

Let’s look at these steps in a little more detail. First, you should log into the client system via the account you will be using as the SSH client. On that system, generate the SSH ID key pair via the ssh-keygen utility. You must designate the correct key pair file name, which is id_TYPE, where TYPE is dsa, rsa, or ecdsa. An example of creating an SSH ID key pair on a client system is shown snipped in Listing 16.20.

Listing 16.20: Using ssh-keygen to create an SSH ID key pair

$ ssh-keygen -t rsa -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/Christine/.ssh/id_rsa.
Your public key has been saved in /home/Christine/.ssh/id_rsa.pub.
[…]
$
$ ls .ssh/
id_rsa  id_rsa.pub  known_hosts
$

Notice in Listing 16.20 the key file’s name. The ssh-keygen command in this case generates a private key, stored in the ~/.ssh/id_rsa file, and a public key, stored in the ~/.ssh/id_rsa.pub file. You may enter a passphrase if desired. In this case, no passphrase was entered.

Once these keys are generated on the client system, the public key must be copied to the server system. Using a secure method is best, and the ssh-copy-id utility allows you to do this. Not only does it copy over your public key, it also stores it in the server system’s ~/.ssh/authorized_keys file for you. In essence, it completes steps 3 through 5 in a single command. A snipped example of using this utility is shown in Listing 16.21.

Listing 16.21: Using ssh-copy-id to copy the SSH public ID key

$ ssh-copy-id -n [email protected]
[…]
Would have added the following key(s):

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsP[…]
8WJVE5RWAXN[…]
=-=-=-=-=-=-=-=
$ ssh-copy-id [email protected]
[…]Source of key(s) to be installed: "/home/Christine/.ssh/id_rsa.pub"
[…]
[email protected]'s password:

Number of key(s) added: 1
[…]
$

Notice in Listing 16.21 that the ssh-copy-id -n command is employed first. The -n option allows you to see what keys would be copied and installed on the remote system without actually doing the work (a dry run).

The next time the command is issued in Listing 16.21, the -n switch is removed. Thus, the id_rsa.pub key file is securely copied to the server system, and the key is installed in the ~/.ssh/authorized_keys file. Notice that when using the ssh-copy-id command, the user must enter their password to allow the public ID key to be copied over to the server.

Now that the public ID key has been copied over to the SSH server system, the ssh command can be used to connect from the client system to the server system with no need to enter a password. This is shown along with using the scp command in Listing 16.22. Note that at the IP address’s end, you must add a colon (:) when using the scp command to copy over files.

Listing 16.22: Testing out password-less SSH connections

$ ssh [email protected]
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
[…]
Christine@Ubuntu1804:~$ ls .ssh
authorized_keys  known_hosts
Christine@Ubuntu1804:~$
Christine@Ubuntu1804:~$ exit
logout
Connection to 192.168.0.104 closed.
$
$ scp Project4x.tar [email protected]:~
Project4x.tar      100%   40KB   6.3MB/s   00:00
$
$ ssh [email protected]
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
[…]
Christine@Ubuntu1804:~$ ls
Desktop    Downloads         Music     Project4x.tar  Templates
Documents  examples.desktop  Pictures  Public         Videos
Christine@Ubuntu1804:~$ exit
logout
Connection to 192.168.0.104 closed.
$

 

If your Linux distribution does not have the ssh-copy-id command, you can employ the scp command to copy over the public ID key. In this case you would have to manually add the key to the bottom of the ~/.ssh/authorized_keys file. To do this you can use the cat command and the >> symbols to redirect and append the public ID key’s standard output to the authorized keys file.
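
A rough sketch of that manual approach might look like this (the host, account, and key file names follow the earlier examples and are only illustrative):

$ scp ~/.ssh/id_rsa.pub [email protected]:~
$ ssh [email protected]
Christine@Ubuntu1804:~$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
Christine@Ubuntu1804:~$ chmod 600 ~/.ssh/authorized_keys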

Authenticating with the Authentication Agent

Another method to connect to a remote system with SSH is via the authentication agent. Using the agent, you only need to enter your password to initiate the connection. After that, the agent remembers the password during the agent session. A few steps are needed to set up this authentication method:

  1. Log into the SSH client system.
  2. Generate an SSH ID key pair and set up a passphrase.
  3. Securely transfer the public SSH ID key to the SSH server computer.
  4. Log into the SSH server system.
  5. Add the public SSH ID key to the ~/.ssh/authorized_keys file on the server system.
  6. Start an agent session.
  7. Add the SSH ID key to the agent session.

Steps 1 through 5 are nearly the same steps performed for setting up authenticating with SSH ID keys instead of a password. One exception to note is that a passphrase must be created when generating the SSH ID key pair for use with an agent. An example of setting up an ECDSA key to use with an SSH agent is shown snipped in Listing 16.23.

Listing 16.23: Generating and setting up an ID key to use with the SSH agent

$ ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa
Generating public/private ecdsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/Christine/.ssh/id_ecdsa.
[…]
$ ssh-copy-id -i ~/.ssh/id_ecdsa [email protected]
[…]
Number of key(s) added: 1
[…]
$

Once you have the key pair properly created with a passphrase on the client system, securely transmitted, and installed in the server’s authorized keys file, you can employ the ssh-agent utility to start an SSH agent session. After the session is started, add the private ID key to the session via the ssh-add command. A snipped example of this is shown in Listing 16.24.

Listing 16.24: Starting an SSH agent session and adding an ID key

$ ssh-agent /bin/bash
[Christine@localhost ~]$
[Christine@localhost ~]$ ssh-add ~/.ssh/id_ecdsa
Enter passphrase for /home/Christine/.ssh/id_ecdsa:
Identity added: /home/Christine/.ssh/id_ecdsa (/home/Christine/.ssh/id_ecdsa)
[Christine@localhost ~]$
[Christine@localhost ~]$ ssh [email protected]
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-36-generic x86_64)
[…]
Christine@Ubuntu1804:~$ exit
logout
Connection to 192.168.0.104 closed.
[Christine@localhost ~]$
[Christine@localhost ~]$ exit
exit
$

Notice in Listing 16.24 that the ssh-agent command is followed by /bin/bash, which is the Bash shell. This command starts a new session, an agent session, with the Bash shell running. Once the private SSH ID key is added using the ssh-add command and entering the private passphrase, you can connect to remote systems without entering a password or passphrase again. However, if you exit the agent session and start it up again, you must re-add the key and reenter the passphrase.

The ssh-add command allows you to remove an identity from an agent session, if so desired. Include the -d option to do so.

An SSH agent session allows you to enter the session one time and add the key, then connect as often as needed to remote systems via encrypted SSH methods without entering a password or passphrase over and over again. Not only does this provide security, it provides convenience, which is a rare combination.

Using SSH Securely

There are a few things you can do to enhance SSH’s security on your systems:

  • Use a different port for SSH than the default port 22.
  • Disable root logins via SSH.
  • Manage TCP Wrappers.

One item touched upon earlier in this chapter is not using port 22 as the SSH port for any public-facing systems. You change this by modifying the Port directive in the /etc/ssh/sshd_config file to another port number. Keep in mind that there are advantages and disadvantages to doing this. It may be a better alternative to beef up your firewall as opposed to changing the default SSH port.

Another critical item is disabling root login via SSH. By default, any system that allows the root account to log in and has OpenSSH enabled permits root logins via SSH. Because root is a standard username, malicious attackers can use it in brute-force attacks. Since root is a super user account, it needs extra protection.

To disable root login via SSH, edit the /etc/ssh/sshd_config file. Set the PermitRootLogin directive to no, and either restart the OpenSSH service or reload its configuration file.
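
After editing the file, you can confirm the directive and reload the daemon. On a systemd-based system that might look like the following (the service name can vary by distribution; for example, it is ssh rather than sshd on Ubuntu):

# grep -i permitrootlogin /etc/ssh/sshd_config
PermitRootLogin no
# systemctl reload sshd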

TCP Wrappers are an older method for controlling access to network-based services. If a service can employ TCP Wrappers, it will have the libwrap library compiled with it. You can check for support by using the ldd command as shown snipped in Listing 16.25. In this listing on an Ubuntu system, you can see that TCP Wrappers can be used by the SSH service.

Listing 16.25: Using the ldd command to check for TCP Wrappers support

$ which sshd
/usr/sbin/sshd
$
$ ldd /usr/sbin/sshd | grep libwrap
        libwrap.so.0 […]
$

TCP Wrappers employ two files to determine who can access a particular service. These files are /etc/hosts.allow and /etc/hosts.deny. As you can tell by their names, the hosts.allow file typically allows access to the designated service, while the hosts.deny file commonly blocks access. These files have simple record syntax:

service: IPaddress…

The search order of these files is critical. For an incoming service request, the following takes place:

  • The hosts.allow file is checked for the remote IP address.
    • If found, access is allowed, and no further checks are made.
  • The hosts.deny file is checked for the remote IP address.
    • If found, access is denied.
    • If not found, access is allowed.

Because access is allowed if the remote system’s address is not found in either file, it is best to employ the ALL wildcard in the /etc/hosts.deny file:

ALL: ALL

This disables all access to all services for any IP address not listed in the /etc/hosts.allow file. Be aware that some distributions use PARANOID instead of ALL for the address wildcard.

The record’s IPaddress can be either IPv4 or IPv6. To list individual IP addresses in the hosts.allow file, you specify them separated by commas as such:

sshd: 172.243.24.15, 172.243.24.16, 172.243.24.17

Typing in every single IP address that is allowed to access the OpenSSH service is not necessary. You can specify entire subnets. For example, if you needed to allow all the IPv4 addresses in a Class C network access on a server, you specify only the first three address octets followed by a trailing dot as such:

sshd: 172.243.24.

 

TCP Wrappers were created prior to the time administrators used firewalls. While they are still used by some, their usefulness is limited, and they are considered deprecated by many distributions. It is best to move this protection to your firewall.

Using VPN as a Client

While SSH is great for securely connecting from a client to a server on the same local network, it is not as useful for accessing a remote system over a public network. Fortunately virtual private networks (VPNs) work well in this situation. A VPN establishes a secure encrypted connection between two systems on separate networks with a public network between them. The encrypted connection acts as a separate private network, allowing you to pass any type of data between the two systems securely. There are many different VPN packages available on Linux, such as OpenVPN.

When choosing software that will provide VPN as a client, it is vital to understand what security methods a package employs. Making good VPN choices is critical for keeping your virtual network private. In addition, you should consider the data packet transportation method. When using a VPN, UDP-based implementations often offer better performance than TCP-based ones.

SSL/TLS SSL/TLS is actually a single secure communication protocol. Originally it was called SSL (Secure Sockets Layer). As the protocol advanced and improved over time, the name was changed to TLS (Transport Layer Security). As long as you are using a current version, this protocol provides secure data encryption over a network between systems. Your VPN client application should use TLS 1.2 at a minimum. Earlier versions of the protocol have known problems.

TLS is a stream-oriented protocol that prevents man-in-the-middle attacks. It employs symmetric encryption for the data and a public key for confirming the system’s identity. Data includes a message authentication code to prevent alteration during transmission. In addition, TLS has restrictions that curb captured data from being replayed at a later time, called a replay attack.

Point-to-Point Tunneling Protocol (PPTP) is an older protocol that has many documented weaknesses. It is vulnerable to man-in-the-middle attacks, and therefore any VPN client using this protocol should not be implemented on your system.

DTLS Datagram Transport Layer Security (DTLS) is also a secure communication protocol, but it is designed to employ only UDP packets. Thus, it is sometimes known as the UDP TLS. With TCP, which is a connection-based protocol, additional communication takes place to establish the connection. Because UDP is a connectionless protocol, DTLS is faster, and it does not suffer the performance problems of other stream-based protocols.

DTLS is based upon SSL/TLS, and it provides similar security protections. Thus, it is a favorable choice for VPN software.

IPSec Internet Protocol Security (IPSec) is not a cryptographic protocol but a framework that operates at the Network layer. By itself, it does not enforce a particular key method or encryption algorithm. It is typically at a VPN application’s core.

It employs the Authentication Header (AH) protocol for authentication. IPSec also uses the Encapsulating Security Payload (ESP) for authentication, data encryption, data integrity, and so on. For key management, typically the Internet Security Association and Key Management Protocol (ISAKMP) is employed, but it’s not required.

IPSec has two modes, which are tunnel mode and transport mode. In tunnel mode, all the data and its associated headers added for transportation purposes (called a datagram) are protected. Thus, no one can see any data or routing information because the entire connection is secured. In transport mode, only the data is protected, and it is secured by ESP.

images The OpenVPN package uses a custom protocol, sometimes called the OpenVPN protocol. It does, however, use SSL/TLS for its key exchange. This software product is multiplatform and does not have the problems establishing VPNs through firewalls and NATs that IPSec has been known to suffer. Therefore, the OpenVPN package is very popular.
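
Assuming the OpenVPN package is installed and your VPN provider has supplied a client configuration file (the file name here is a placeholder), starting the client from the command line is typically as simple as the following:

$ sudo openvpn --config client.ovpn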

There are many good choices for secure VPN clients. Creating a checklist of your environment’s required features is a good place to start.

Summary

Assessing your system’s and users’ needs for appropriate access and authentication methods is vital for securing your system. Using the correct products and configuring them correctly not only helps to keep systems secure, it provides less frustration for your users. It makes your job easier as well.

Exam Essentials

Summarize various PAM modules and features. PAM is a one-stop shop for various applications to implement authentication services. For an application to use PAM, it must be compiled with the libpam.so module and have an associated PAM configuration file. The configuration files are located in the /etc/pam.d/ directory. Applications can enforce strong passwords employing any of the three PAM modules—pam_unix.so, pam_pwhistory.so, and pam_pwquality.so (the latter of which was formerly called pam_cracklib.so). PAM can also provide account lockouts to protect against brute-force attacks. This is accomplished via the pam_tally.so or pam_faillock.so module, depending on the system’s distribution. If your environment incorporates LDAP, it also can be integrated with PAM. The PAM module to do so is the pam_ldap.so module.

Describe PKI and its components. Public key infrastructure (PKI) protects the integrity of cipher keys. This framework includes the CA structure, which validates a person’s or device’s identity and provides a signed digital certificate. The certificate includes a public key and can be sent to others so they can verify that the public key is valid and truly comes from the certificate holder. Self-signed certificates are available but should be used only for testing purposes. Symmetric key encryption uses a single private key for both encrypting and decrypting data. Asymmetric key encryption uses a public/private key pair, where commonly the public key is used for encryption and the private key is used for decryption. Hashing data prior to encryption and then encrypting the resulting message digest allows you to add a digital signature to your transmitted encrypted data, which provides a means of verifying data integrity.

Explain the various SSH features and utilities. The OpenSSH application provides SSH services via the ssh command and sshd daemon. To configure SSH client connections, you can either use ssh command-line options or employ the ~/.ssh/config or /etc/ssh/ssh_config file. For the server side, the configuration file is /etc/ssh/sshd_config. When you initially establish an SSH connection from a client to a remote SSH server, the server’s key information is stored in the ~/.ssh/known_hosts file. If keys need to be regenerated or you are setting up a password-less login, you can employ the ssh-keygen utility to create the needed keys. When you are setting up a password-less login, two files should be created, which are located in the ~/.ssh/ directory and named id_rsa and id_rsa.pub. The public key is copied to the SSH server system and placed in the ~/.ssh/authorized_keys file via the ssh-copy-id command. An alternative is to use the ssh-agent and add the needed key via the ssh-add command.

Compare the various VPN client security implementations. Typically used when traffic must traverse a public network, VPN software establishes a secure encrypted connection between two systems. The protocols involved may include SSL/TLS, DTLS, and IPSec. The SSL/TLS protocol is stream-oriented and protects against man-in-the-middle attacks. DTLS uses only UDP packets, which makes it faster than TCP-based protocols. IPSec operates at the Network layer and provides two modes: tunnel mode and transport mode. OpenVPN is the most popular VPN software; it uses its own custom protocol but employs SSL/TLS for the key exchange.

Review Questions

  1. For an application to use PAM, it needs to be compiled with which PAM library?

    1. ldd
    2. pam_nologin.so
    3. pam_unix.so
    4. libpam
    5. pam_cracklib
  2. Which of the following are PAM control flags? (Choose all that apply.)

    1. requisite
    2. required
    3. allowed
    4. sufficient
    5. optional
  3. Which of the following will display failed login attempts? (Choose all that apply.)

    1. tally2
    2. pam_tally2
    3. pam_tally2.so
    4. pam_faillock
    5. faillock
  4. Leigh encrypts a message with Luke’s public key and then sends the message to Luke. After receiving the message, Luke decrypts the message with his private key. What does this describe? (Choose all that apply.)

    1. Symmetric key encryption
    2. Asymmetric key encryption
    3. Public/private key encryption
    4. Secret key encryption
    5. Private key encryption
  5. Which of the following best describes a digital signature?

    1. Plain text that has been turned into ciphertext
    2. Ciphertext that has been turned into plain text
    3. A framework that proves authenticity and validation of keys as well as the people or devices that use them
    4. A digital certificate that is not signed by a CA but by an end user
    5. An original plaintext hash, which is encrypted with a private key and sent along with the cipher text
  6. The OpenSSH application keeps track of any previously connected hosts and their public keys in what file?

    1. ~/.ssh/known_hosts
    2. ~/.ssh/authorized_keys
    3. /etc/ssh/known_hosts
    4. /etc/ssh/authorized_keys
    5. /etc/ssh/ssh_host_rsa_key.pub
  7. Which of the following are OpenSSH configuration files? (Choose all that apply.)

    1. ~/.ssh/config
    2. /etc/ssh/ssh_config
    3. /etc/ssh/sshd_config
    4. /etc/sshd/ssh_config
    5. /etc/sshd/sshd_config
  8. Which of the following files may be involved in authenticating with SSH keys?

    1. /etc/ssh/ssh_host_rsa_key
    2. /etc/ssh/ssh_host_rsa_key.pub
    3. ~/.ssh/id_rsa_key
    4. ~/.ssh/id_rsa_key.pub
    5. ~/.ssh/id_rsa
  9. Which of the following is true concerning TCP Wrappers? (Choose all that apply.)

    1. The /etc/hosts.allow file is consulted first.
    2. The /etc/hosts.allow file should contain ALL: ALL to provide the best security.
    3. If an application is compiled with the libwrap library, it can employ TCP Wrappers.
    4. IP addresses of remote systems can be listed individually or as entire subnets.
    5. TCP Wrappers are considered to be deprecated by many distributions and firewalls should be used instead.
  10. Which of the following protocols or frameworks might be involved in using VPN software as a client? (Choose all that apply.)

    1. Tunnel
    2. SSL/TLS
    3. Transport
    4. IPSec
    5. DTLS