Resource limits

A common attack is the denial-of-service (DoS) attack, and its distributed variant, the DDoS attack. Here, the malicious attacker attempts to consume, indeed overload, resources on the target system to such an extent that the system either crashes or, at the very least, becomes completely unresponsive (hangs).

Interestingly, on an untuned system, mounting this type of attack is quite easy; as an example, let's imagine we have shell access (not as root, of course, but as a regular user) on a server. We could quite easily attempt to make it run out of disk space (or at least run short) by misusing the ubiquitous dd(1) (disk dump) command. One use of dd is to create files of arbitrary length.

For example, to create a 1 GB file filled with random content, we could do the following:

$ dd if=/dev/urandom of=tst count=1024 bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.2602 s, 70.4 MB/s
$ ls -lh tst
-rw-rw-r-- 1 kai kai 1.0G Jan 4 12:19 tst
$

What if we bump the blocksize (bs) value to 1G, like this:

dd if=/dev/urandom of=tst count=1024 bs=1G

dd will now attempt to create a file that is 1,024 GB (a terabyte) in size! What if we run this line (in a script) in a loop? You get the idea.
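A minimal sketch of such a loop follows (purely illustrative, and the filename tst.$i is our own invention; needless to say, do not run this on a machine you care about, as it will rapidly fill the disk):

$ # DANGER: illustrative only; keeps writing huge files until the disk fills up
$ i=1; while true; do dd if=/dev/urandom of=tst.$i count=1024 bs=1G; i=$((i+1)); done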

To control resource usage, Unix (including Linux) provides resource limits; a resource limit is an artificial limit imposed upon a resource by the OS.

A point to be clear on from the very beginning: these resource limits are on a per-process basis and not system-wide globals; more on this in the next section.

Before diving into more detail, let's continue with our hack example of eating up a system's disk space, but this time with a resource limit on the maximum file size put in place beforehand.

The frontend command to view and set resource limits is ulimit, a built-in shell command (these commands are called bash builtins). To query the maximum possible size of files written to by the shell process (and its children), we pass the -f option switch to ulimit:

$ ulimit -f
unlimited
$

Okay, it's unlimited. Really? No, unlimited only implies that there is no particular limit imposed by the OS. Of course it's finite, limited by the actual available disk space on the box.
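If you're curious, df(1) reports that actual upper bound, the free space on the filesystem; for example (illustrative output from a hypothetical system; yours will certainly differ):

$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       466G  103G  340G  24% /
$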

Let's set a limit on the maximum file size, simply by passing the -f option switch along with the actual limit. But what's the unit of the size: bytes, KB, MB? Let's look it up in the man page; by the way, the man page for ulimit is that of bash(1) itself. This is logical, as ulimit is a built-in shell command. Once in the bash(1) man page, search for ulimit; the manual informs us that the unit is (by default) 1,024-byte increments. Thus, a value of 2 implies 1,024*2 = 2,048 bytes. Alternatively, to get some help on ulimit, just type help ulimit in the shell.

So, let's try this: reduce the file size resource limit to just 2,048 bytes and then test with dd:

Figure 1: A simple test case with ulimit -f

As can be seen from the preceding screenshot, we reduce the file size resource limit to 2, implying 2,048 bytes, and then test with dd. As long as we create a file at or below 2,048 bytes, it works; the moment we attempt to go beyond the limit, it fails.
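For reference, a session along these lines would look like the following (a reconstruction of what the screenshot depicts; the exact byte counts, timings, and messages will vary on your system):

$ ulimit -f 2
$ dd if=/dev/urandom of=tst count=2 bs=1k      # exactly 2,048 bytes: works
2+0 records in
2+0 records out
2048 bytes (2.0 kB, 2.0 KiB) copied, 0.000542 s, 3.8 MB/s
$ dd if=/dev/urandom of=tst count=3 bs=1k      # 3,072 bytes: over the limit
File size limit exceeded (core dumped)
$

(Under the hood, the kernel delivers the SIGXFSZ signal to a process that attempts to write beyond its file size resource limit; the signal's default action is to terminate the process.)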

As an aside, note that dd does not use any clever logic to check the resource limit upfront and print an error if the file would exceed it. No, it just fails. Recall from Chapter 1, Linux System Architecture, the Unix philosophy principle: provide mechanisms, not policies!
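Coming back to the earlier point that these resource limits are per-process and not system-wide: you can verify this quite easily (a sketch; the defaults on your system may differ). With the lowered limit still in effect in our first shell:

$ ulimit -f
2
$

In another terminal window, a brand new shell process remains at the default:

$ ulimit -f
unlimited
$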