Section 9: Security

Back in the evil dark days of the WWW, it was believed that a Web server should bend over backwards to accommodate any client written by anyone at any time, no matter what the quality. For this reason most Web servers will allow nearly any malicious payload you send at them to pass right on through to your applications and network. Typically, the only way to protect yourself is to create a blacklist[17] of known attacks and block them with special security tools.

Mongrel takes a more proactive and strict approach to Web server security. As an example, Mongrel’s HTTP processing is handled by a parser that has strict grammar specifications and exact size limits on all elements. It turns out that if you simply reject anything that doesn’t follow the HTTP 1.1 grammar, you reject most security attacks without effort.[18]

In addition to using a parser, Mongrel has extensive testing (at least 90% coverage), a full security policy, and a frequent audit process that involves some advanced auditing and validation techniques such as “fuzzing.” By following a consistent security policy and using the most advanced testing tools available, the Mongrel team is able to keep the number of potential security defects down, and be proactive about it rather than reactive. While it’s impossible to say Mongrel is completely secure, you can at least verify what policies are in place and make your own judgments.

9.1. Mongrel’s Security Design

9.1.1. Strict HTTP 1.1 Parsing

Today’s Web servers use hand-written parsers that are incredibly lax about the input they accept, despite HTTP 1.1 having a fully specified grammar. These Web servers are in effect using a blacklist to determine what’s a valid request: “We accept everything, oh, except that, oh, and that, oh, that too.”

Mongrel uses a strict HTTP 1.1 parser generated by the fantastic Ragel generator. This parser is not so strict that slightly wrong clients are rejected, so don’t worry about losing customers. It is strict, however, about the allowed size of each element and the grammar of the important parts, and its grammar is directly comparable with the HTTP 1.1 specification.[19]

Mongrel’s use of a parser changes the Web server input security policy from a blacklist to a whitelist. Mongrel is telling the world, “I reject everything except this exact grammar specification.” Not only is Mongrel’s parser able to reject large numbers of malicious requests without any special “security lists,” it is also very fast thanks to Ragel.[20]
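To make the whitelist idea concrete, here is a minimal sketch of strict request-line validation in plain Ruby. This is only an illustration of the policy, not Mongrel’s code: the real parser is generated from a Ragel grammar, and the constant and method names below are hypothetical.

    # Illustrative whitelist parsing: accept only what the grammar allows,
    # within a hard size limit, and reject everything else outright.
    MAX_REQUEST_LINE = 2 * 1024  # hypothetical cap, not a Mongrel constant

    # METHOD SP Request-URI SP HTTP/1.x CRLF
    REQUEST_LINE = %r{\A([A-Z]+) (\S+) HTTP/1\.[01]\r\n\z}

    def parse_request_line(line)
      raise "request line too long" if line.length > MAX_REQUEST_LINE
      m = REQUEST_LINE.match(line) or raise "malformed request line"
      { :method => m[1], :uri => m[2] }
    end

    parse_request_line("GET /index.html HTTP/1.1\r\n")
    # => {:method => "GET", :uri => "/index.html"}
    parse_request_line("GET /weird stuff HTTP/9.9\r\n") rescue puts "rejected"

The point is the shape of the policy: one rule says exactly what is accepted, and everything else dies at the front door instead of reaching your application.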


Zed Sez

All right, pretty people, listen up. You went to school one day (or maybe you didn’t, lucky you) and some professor told you that a parser written by hand is faster than what a parser generator (like Ragel) creates. You assumed he was right and never ever, ever questioned him again and will probably die still thinking, “Gee, parsers written by hand are super fast.” Parser generators have been under twenty or thirty years of continuous development and research, and can now beat the pants off nearly anything some dip at an Apache group can write by hand. Why? Not because they are faster but because they are correct. Mongrel has shown that the speed is only marginally different, that the parser is not the bottleneck (it’s IO, dumbass), and that having the ability to explicitly kill badly behaving clients is the best way to protect against malicious attacks. Most important, though, every hand-written parser (and HTTP client or server) starts off really simple, then blossoms into a giant morass of horribly twisted crap that houses all the major security holes. So, take your panties out of their tightly coiled bunch and move on.


9.1.2. Request Size Limitations

The HTTP 1.1 standard doesn’t set any limits on anything except maybe the size of allowed cookies (and even that is very loose). You could put a full-length DVD in a header and still technically be correct (the best kind of correct). For a little server running in an interpreted language like Ruby, this is a murderous path to destruction. Without limits on requests, a malicious attacker could easily craft requests that exceed the server’s available memory and potentially cause buffer overflows.

In addition to strictly processing all input in exactly sized elements, Mongrel also has a set of hardcoded size limits. These limits are fairly large for practical purposes, but some people really like to stretch the limits of the standard. The limitations as of 0.3.x are:

• field names: 256 bytes

• field values: 80k bytes

• request URI: 2k bytes

• request path: 12k bytes

• query string: 10k bytes

• total header: 112k bytes

Since Mongrel’s creation nobody has complained about these limits. It’s quite possible someone may hit them in the future, but simply not using these elements to store their data is much easier than trying to get Mongrel’s limits changed.

When you do exceed one of these limits, Mongrel reports an error message to the console or mongrel.log telling you what limit was triggered.
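For the curious, here is a rough Ruby sketch of what hitting a limit looks like. The LIMITS table and check_limit method are hypothetical stand-ins; Mongrel enforces the real limits inside its generated parser.

    # Hypothetical stand-ins for a few of the hardcoded limits above.
    LIMITS = {
      :field_name   => 256,
      :field_value  => 80 * 1024,
      :query_string => 10 * 1024,
    }

    def check_limit(element, value)
      max = LIMITS[element]
      if value.length > max
        # Report which limit was triggered, the way Mongrel logs it.
        STDERR.puts "HTTP element #{element} is longer than #{max} bytes"
        raise "client sent an oversized #{element}"
      end
      value
    end

    check_limit(:field_name, "X-Custom-Header")              # fine
    check_limit(:field_value, "a" * (81 * 1024)) rescue puts "rejected"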

9.1.3. Limiting Concurrent Processors

Mongrel places a limit on the number of concurrently connected clients, due mostly to a limitation in Ruby’s ability to handle open files. If a client connects when too many clients are already connected, Mongrel closes the new connection and starts trying to kill off any currently running threads that are too old. It also logs a message saying this was necessary so that you can update your deployment to handle the new load factor.

The alternative to doing this is to simply let Mongrel die. By rejecting some clients and telling you that there’s an overload problem, Mongrel lets you keep servicing a portion of your user base until you can scale your deployment up.
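As a rough sketch of that behavior, the Ruby fragment below turns away new clients over a cap and reaps old worker threads. All of the names here (MAX_CLIENTS, MAX_AGE, handle_request) are hypothetical and only approximate what Mongrel does internally.

    require 'socket'
    require 'thread'

    MAX_CLIENTS = 950  # stay safely below Ruby 1.8's 1,024 open-file ceiling
    MAX_AGE     = 60   # seconds before a worker thread is considered too old

    def handle_request(socket)  # stand-in for real request processing
      socket.close
    end

    def reap_old_workers(workers)
      now = Time.now
      workers.list.each do |t|
        t.kill if t[:started] && now - t[:started] > MAX_AGE
      end
    end

    def accept_client(socket, workers)
      if workers.list.length >= MAX_CLIENTS
        STDERR.puts "Too many clients; rejecting and reaping old threads."
        socket.close                # turn the new client away...
        reap_old_workers(workers)   # ...and try to free up capacity
        return
      end
      t = Thread.new { handle_request(socket) }
      t[:started] = Time.now
      workers.add(t)
    end

A caller would hold the workers in a ThreadGroup and feed accept_client from its accept loop.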

9.1.4. No HTTP Pipelining or Keep-Alives

In HTTP 1.1 a feature called “pipelining” was introduced, and the request/response model was changed to a “persistent connection” model using keep-alives. This was potentially useful back when opening a connection over a phone line was expensive, but in modern times these features actually place more load on the HTTP server, sucking up precious concurrent processors so other people can’t enjoy them.

Mongrel is also at the mercy of Ruby’s file I/O limitations. There are plans to improve how Ruby handles files, but right now it can only keep 1,024 files open at a time on most systems (even fewer on some). Allowing clients to hold connections open indefinitely means that a malicious attacker can simply make a series of “trickled connects” until your Mongrel processes run out of available files. It literally takes seconds to do this.

Another questionable point about HTTP pipelining and keep-alives is that there seem to be no statistically significant performance benefits with modern networking equipment or localhost configurations (which Mongrel is almost always deployed in). The added complexity of implementing this part of the standard simply isn’t justified by the almost nonexistent benefit in today’s deployment scenarios.

Rather than implement more and more complexity, Mongrel simply uses the special "Connection: close" response header to indicate to the client and any proxy servers that it’s done. This reverts Mongrel back to the HTTP 1.0 behavior of one request per connection, and actually does improve its performance under heavy load for the above reasons. It’s possible that this might change in the future, but for now it works out great and only HTTP RFC Nazis seem to care.[21]
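A minimal sketch of that one-request-per-connection behavior, assuming a raw socket and a prebuilt response body (this shows the behavior described above, not Mongrel’s actual response code):

    require 'socket'

    def respond_and_close(socket, body)
      socket.write("HTTP/1.1 200 OK\r\n")
      socket.write("Content-Length: #{body.length}\r\n")
      socket.write("Connection: close\r\n")  # tell clients and proxies we're done
      socket.write("\r\n")
      socket.write(body)
    ensure
      socket.close  # no keep-alive: one request, one connection
    end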


Zed Sez

Your application is slow as dirt and you are convinced that you need keep-alives and pipelining to make it go faster. You’ve got to have it or nothing will work, the world will crumble to dust, and we’ll all be replaced by intelligent squids after the human race is long gone. It’s that serious.

Look, you don’t need any of that simply because this isn’t 1996. Back in the day keep-alives and pipelining were added so that people on phone lines didn’t have to wait, but the assumption that this is necessary in all networking situations has never been tested heavily. These parts of the RFC only add complexity, are ambiguous, and don’t really make things fast enough to justify the development and maintenance overhead for a server that’s run on localhost or controlled networks.

Most important, though, is that allowing them makes it so that a malicious attacker just has to start up more than 500 keep-alive or pipelined requests and your whole Ruby application eats it. Why? Because Ruby 1.8.x uses the select() function to do its IO and that only supports 1,024 open files on most systems.

Another problem, though, is that people use keep-alives and pipelining as a Band-Aid for horribly designed systems. I ran into one fellow who desperately needed keep-alives so he could send three characters per request to a client at 2,000 requests/second. Yes, just three characters. Here’s a clue: batched processing. In almost every instance that people have claimed to need these features, a simple design change to batched processing or an improved network design removed the problem and simplified everything.


9.1.5. No SSL

Mongrel does not support SSL. Typically you will be running Mongrel behind a static Web server (see Section 4), and you’ll get SSL support from there. Just make sure the front-end server sends the proper headers to Mongrel so that your application knows to respond with HTTPS URLs instead of HTTP ones.
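One common convention is for the front-end server to set an X-Forwarded-Proto header when it terminates SSL. Here is a hedged sketch of checking it from Ruby; the header and the CGI-style HTTP_X_FORWARDED_PROTO key are an assumption about your front-end’s configuration, not a Mongrel API.

    # Decide the scheme from a proxy-supplied header, if present.
    def request_scheme(params)
      params['HTTP_X_FORWARDED_PROTO'] == 'https' ? 'https' : 'http'
    end

    request_scheme('HTTP_X_FORWARDED_PROTO' => 'https')  # => "https"
    request_scheme({})                                   # => "http"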


Zed Sez

You don’t need SSL either. Well, you need it at some point in your deployment configuration, but why would you want a slow language like Ruby to do the SSL when there are already really great, fast Web servers? Ruby is a precious, slow commodity that you should use only when you absolutely must.


9.1.6. No [Name Your Must-Have Feature]

As you probably see, Mongrel says “No” in many places where most Web servers say “Yes, OK.” Sometimes this is because no one using Mongrel has needed the feature yet; sometimes it’s because there’s a better, simpler way to accomplish the same goal. Mongrel is a different kind of Web server, and frequently you can solve your problem with a different solution.

Ready for another catchphrase? Constraints are liberating. When you are given a billion different options, you can become paralyzed by choice. When you are forced to work within reasonable constraints, you can focus on getting your task done rather than on deciding how to configure your Web server. Mongrel’s feature set and limitations work perfectly for about 95% of its current users. Those who have additional requirements can easily extend Mongrel using plugins, commands, and handlers. The rest should find a tool better suited to the job or change their requirements.
