Chapter 2. Current Software Development Methods Fail to Produce Secure Software

In this chapter:

  • “Given enough eyeballs, all bugs are shallow”

  • Proprietary Software Development Methods

  • CMMI, TSP, and PSP

  • Agile Development Methods

  • Common Criteria

Software engineering companies, and companies building their own line-of-business software, have long searched for the classic “silver bullet” that will deliver high-quality software on time and under budget. As Fred Brooks explains in the classic text The Mythical Man-Month, there is no such thing as a software silver bullet (Brooks 1995). The same lack of an easy solution applies to software security. In fact, we’re going to go one step further and say that present software engineering practice in the industry does not lead to secure software at all. If any of the current state-of-the-art processes did create secure software, we’d simply see fewer security errata and bulletins from software vendors. But the industry suffers a huge security problem: everyone has security bugs, often very bad security bugs. This leads us to conclude that present development practices don’t create secure code.

In this chapter, we’ll look at a number of software development and certification processes to outline why they do not create code that is secure from attack. We’ll look at the following:

  • The open source mantra, “Given enough eyeballs, all bugs are shallow”

  • Proprietary software development methods, including CMMI, TSP, and PSP

  • Agile development methods

  • The Common Criteria (CC)

Let’s look at these in detail, outlining why each method does not produce more secure software.

“Given enough eyeballs, all bugs are shallow”

First discussed by Eric Raymond in his well-known paper “The Cathedral and the Bazaar,” this is the battle cry of the open source movement (Raymond 1997). The more formal definition of the slogan, as expressed in the paper, is as follows:

Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

Now before we start a religious debate, we want to explain something. Both authors have a deep respect for the open source software community; we both believe that making source code available is of value to some customers and users, and that the ability to change code also has benefits for a few customers. But the software produced by the open source community is not secure from attack, and it most certainly is not secure simply because the code can be reviewed by many people. The concept of “Given enough eyeballs, all bugs are shallow” is wrong on many fronts: it assumes that the people reviewing the code are motivated to do so, that they know what security bugs look like, and that there is a critical mass of informed and motivated reviewers. More important, as we’ll see, it misses the point altogether. Let’s look at each aspect in detail.

Incentive to Review Code

The author of this chapter (Michael) has worked with thousands of developers, teaching them how to review code and designs for security bugs. He has also reviewed more code than he cares to remember. And if he’s learned one thing from all that experience, it’s that most people don’t enjoy reviewing code for bugs and vulnerabilities. Given the choice of reviewing code for bugs—including security bugs—or working on the newest feature in an upcoming software product, developers will choose writing new code. Developers are creative, and creating new features is the epitome of inventiveness. Another reason for not wanting to review code is that the task is slow, tiresome, and boring.

Note

Based on analysis at Microsoft, an average developer can review at most about 1,500 lines of C code, or about 1,100 lines of C++ code, per day when looking for deep security bugs. It is, of course, possible to review code more quickly than this, but the quality of the review might suffer.
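
At the rate quoted above, a hypothetical one-million-line C code base would take roughly 667 reviewer-days for a single thorough pass, which goes a long way toward explaining why deep reviews of large projects are so rarely completed.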

We have seen evidence of the distinct lack of will to review code in the open source community. For example:

The promise of open source is that it enables many eyes to look at the code, but in reality, that doesn’t happen.

Crispin Cowan (Cowan 2002)

Also, do not lose sight of a very simple maxim: the quality of a code review—in other words, the ability to find real bugs rather than miss them or flag false positives—drops as the amount of code under review grows. More code to review means you must have even more knowledgeable and motivated people reviewing the code.

Understanding Security Bugs

Understanding security vulnerabilities is critically important and is covered in detail in Chapter 5. If your engineers do not know what constitutes a security bug, they will find none when reviewing the design of a component or the code underlying the design. As an example, unless you know what an HTTP Response Splitting attack is (Watchfire 2005), you won’t see the security bug in the following code:

<% @ LANGUAGE=VBSCRIPT CODEPAGE = 1252 %>
<!--#include file="constant.inc"-->
<!--#include file="lib/session.inc"-->
<% SendHeader 0, 1 %>
<!--#include file="lib/getrend.inc"-->
<!--#include file="lib/pageutil.inc"-->

<%
'<!-- Microsoft Outlook Web Access-->
'<!-- Root.asp : Frameset for the Inbox window -->
'<!-- Copyright (c) Microsoft Corporation 1993-1997. All rights reserved.-->
On Error Resume Next
If Request.QueryString("mode") <> "" Then
     Response.Redirect bstrVirtRoot + _
         "/inbox/Main_fr.asp?" + Request.QueryString()
End If

This coding bug in the Microsoft Outlook Web Access component of Microsoft Exchange Server 5.5 is what led Microsoft to release a security bulletin, MS04-026 (Microsoft 2004). This kind of bug can lead to numerous security issues.

By the way, the coding bug is in the line that starts Response.Redirect.
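
One possible mitigation, shown here as a minimal sketch rather than the fix Microsoft actually shipped for MS04-026, is to forward only the expected mode parameter and pass its value through Server.URLEncode so that the carriage-return and line-feed characters an attacker needs to split the response are encoded rather than copied into the Location header. (The variable strMode is introduced for this example; bstrVirtRoot comes from the original include files.)

If Request.QueryString("mode") <> "" Then
     Dim strMode
     ' Encode the attacker-controlled value: CR and LF become %0D and %0A,
     ' so they can no longer end the header and start a forged response.
     strMode = Server.URLEncode(Request.QueryString("mode"))
     Response.Redirect bstrVirtRoot + _
         "/inbox/Main_fr.asp?mode=" + strMode
End If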

Critical Mass

Next, the issue of critical mass: there must be enough knowledgeable people reviewing enough of the code often enough. Yes, there might very well be many people with security expertise working on some of the larger projects such as Apache and the Linux kernel, but that’s an incredibly small number of people compared to the sheer volume of software being created that needs reviewing.

But let’s assume for a moment that there is a critical mass of people who understand security bugs and are willing to audit code for security errors. You might think it appropriate to change the “many eyes” mantra to “A critical mass of experienced and willing eyes makes all bugs shallow,” but even that misses the point of software engineering processes, our next topic.

“Many Eyeballs” Misses the Point Altogether

The goal of a good development process that leads to quality software is to reduce the chance that a designer, an architect, or a software developer will introduce a bug in the first place. A bug that is never introduced during the development process is a bug that does not need removing and cannot adversely affect customers. Make no mistake, there is absolutely a critical need for code review, but simply “looking for bugs” does not lead to a sustainable stream of secure software. Throwing the code “over the wall” for others to review for security bugs is wasted effort. A goal of the Security Development Lifecycle (SDL) is to reduce the chance that security bugs are introduced at all.

In late 2004, the author of this chapter made a point in his blog about the number of security bugs in Microsoft Internet Information Services (IIS) 5 and IIS 6 and Apache 1.3.x and Apache 2.0.x, noting that because of the SDL, IIS 6 has had substantially fewer security bugs than IIS 5, but Apache 2.0.x had more than Apache 1.3.x (Howard 2004). A comment entered by an open source advocate named “Richard” (on blog page http://blogs.msdn.com/michael_howard/archive/2004/10/15/242966.aspx) was, “Apache 2 is new. It is an immature product and is less secure because of it.”

Apache 2.0.35 was the first “General Availability” release of Apache 2.0.x and was made available in April 2002, more than two years before that comment was posted. It may be new relative to Apache 1.3.x, but it most certainly is not new. The belief that open source code quality will improve over time is a pretty typical view in the open source community. It may be true, but it is a naïve viewpoint: customers don’t want code that will be of good quality in due course; they want more secure products from the outset.

Finally, many security professionals agree that the concept of “many eyeballs” leading to secure code is incorrect. Following are some representative quotes from security experts and industry analysts, including people well known in the open source community.

“Experience shows this simply isn’t true,” the research firm states, calling it “the myth of more eyes,” citing case after case where no one spotted critical flaws in open source code.

Network World, citing a Burton Group report (Burton 2005)

Now, I’m not going to throw any of that “many eyeballs” nonsense at you—much of the code we use never gets audited.

Jay Beale, Bastille Linux (Beale 2002)

Unless there’s a great deal of discipline underlying the development, there’s no difference in the security. Open source is not inherently more secure.

Peter Neumann, principal scientist, SRI International (eWeek 2002)

In short, there is no empirical evidence whatsoever that “many eyes” lead to secure software. There is a great deal of opinion—but no hard facts—to back up the claim. In fact, a great deal of evidence exists to show that the “many eyes” concept does not lead to secure software. Take the preceding scenario—Apache 2.0 versus Apache 1.3, and IIS 5 versus IIS 6—as an example. A lack of motivation to review old code (as opposed to developing new code), a lack of systematic security training for developers and testers, and a lack of discipline in the profession to apply lessons learned from previously discovered vulnerabilities have all helped create this reality.

Finally, numerous security bugs have existed in open source software for years, such as the following:

  • 15 years. Sendmail e-mail server (CVE-2003-0161)

  • 10 years. MIT’s Kerberos authentication protocol (CVE-2003-0060)

  • 7 years. Samba file and print server (CVE-2003-0085)

  • 5 years. MIT’s Kerberos authentication protocol (CVE-2005-1689)

  • 5 ½ years. Eric Raymond’s Fetchmail e-mail retrieval utility (CVE-2002-0146)

Important

Each bug in the preceding list is identified using a unique value assigned by MITRE Corporation. Some IDs start with CVE and some with CAN, so if you can’t find, for example, CVE-2002-0146, try CAN-2002-0146. A link to each of these bugs is given at the end of this chapter in the "References" section.

Admittedly, closed source software security bugs can linger unseen for years. But it’s not the closed source developers making the “many eyes” claim.

Again, we want to stress that this is not a slam against the open source community; “many eyes” is simply a myth that needs dispelling so that the open source community can move on to producing better, more secure products. Why? Again, the skills aren’t there, the motivation isn’t there, and there is little sign of process improvement. Until development processes in the open source community improve, there will be no major decrease in the staggering number of security bugs. And that’s simply not good for customers.

Proprietary Software Development Methods

Each commercial software company has its own development method: some follow a classic waterfall model (Wikipedia 2002a), some use a spiral model (Wikipedia 2002b), some use the Capability Maturity Model, now referred to as Capability Maturity Model Integration (CMMI) (Carnegie Mellon 2000), some use the Team Software Process (TSP) and the Personal Software Process (PSP) (Carnegie Mellon 2003), and others use Agile methods. Judging by the number of security bugs that commercial software companies such as IBM, Oracle, Sun, and Symantec fix each year, fixes that require customers to apply patches or change configurations, there is no evidence whatsoever that any of these methods creates more secure software than another. In fact, many of these software development methods make no mention of the word “security” in their documentation. Some don’t even mention the word “quality” very often, either.

CMMI, TSP, and PSP

The key difference between SDL and the CMMI/TSP/PSP processes is that SDL focuses solely on security and privacy, whereas CMMI/TSP/PSP are primarily concerned with improving the quality and consistency of development processes in general, with no specific provisions or accommodations for security. Although that is certainly a worthy goal, it implicitly adopts the logic of “if the bar is raised on quality overall, the bar is raised on security quality accordingly.” This may or may not be true; we don’t feel that sufficient commercial development case-study evidence exists to confirm or refute it. Our collective experience with SDL is that adopting processes and tools specifically focused on demonstrably reducing security and privacy vulnerabilities has consistently produced case-study evidence of improved security quality. Although we feel the verdict is still out on how effective CMMI/TSP/PSP are at improving security quality in software as compared to SDL, we would assert that SDL is, at a minimum, a more optimized approach to improving security quality.

There is information about TSP and security (Over 2002), but it lacks specifics and offers no hard data showing software is more secure because of TSP.

Agile Development Methods

Agile development methods (Wikipedia 2006) such as Extreme Programming attempt to reduce the overall risk of a software development project by building software in very rapid iterations, often called timeboxes or sprints. These short turnarounds potentially allow for better customer feedback and interaction, time management, and schedule prediction.

The Microsoft Solutions Framework (MSF) for Agile Software Development (MSF 2006) adds some security checklists and threat modeling, and the latest version of Extreme Programming adds some security best practices, but the coverage is shallow and weak, focusing only on a handful of programming practices for security. Having a list of security best practices and secure-coding checklists is certainly better than nothing and will reduce the chance that some security bugs enter the design and the code, but it is not deep enough and will catch only shallow security bugs. With all that said, there is no reason why SDL cannot be adopted by Agile methods, and we’ll discuss this in Chapter 18.

Common Criteria

The Common Criteria (CC), also referred to as ISO/IEC 15408 (Common Criteria 2006), is an international computer security standard used to specify and evaluate the security features of products and the assurance behind them. Its goal is to allow users to define their security requirements, have developers specify the security attributes of their products, and, finally, allow third-party evaluators to determine whether the products meet the stated claims. As we’ll see, with limited exceptions at the highest assurance levels, the Common Criteria does not define standards for design or code quality.

CC defines sets of assurance requirements, called Evaluation Assurance Levels (EALs), numbered from one (EAL1) to seven (EAL7). Higher numbers mean more evaluation effort, time, and money. The CC, at EAL4 and below, does not define standards for code or design quality. EAL5 and EAL6 do specify standards for design but not for code. The highest assurance level, EAL7, specifies both. A higher EAL does not necessarily mean that a product is more secure—it just means that the product under evaluation (called the Target of Evaluation, or TOE) has been more extensively analyzed and evaluated.

Important

A higher evaluation level, for example, EAL4 versus EAL3, does not necessarily imply “more secure.”

Many people mistakenly associate CC certification with quality and therefore assume that certified software is resilient to attack. This is not true. Indeed, many products with CC certifications have suffered numerous successful attacks, including the following:

  • Microsoft Windows 2000 (EAL4) (Microsoft 2000)

  • Red Hat Enterprise Linux 4 (EAL3, in evaluation for EAL4) (Red Hat 2005)

  • Oracle9i Release 9.2.0.1.0 (EAL4) (Oracle 2005)

  • Trend Micro InterScan VirusWall (EAL4)

What CC does provide is evidence that security-related features perform as expected. For example, if a product provides an access control mechanism for the objects under its control, a CC evaluation gives assurance that the access control monitor satisfies the documented claims describing how those objects are protected. The monitor might still contain implementation security bugs, however, that could lead to a compromised system. No goal within CC ensures that the monitor is free of all implementation security bugs. And that’s a problem, because code quality does matter when it comes to the security of a system.
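
The Outlook Web Access redirect bug shown earlier in this chapter illustrates the gap well: a design document that says “redirect the user to the inbox frameset” gives no hint that unvalidated carriage-return and line-feed characters in the query string can be used to forge a second HTTP response. The flaw is visible only in the code.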

Important

Design specifications miss important security details that appear only in code.

Summary

Present software development methods lack in-depth security awareness, discipline, best practices, and rigor, as evidenced by the sheer quantity of security patches issued each year by all software vendors. To remedy this, the industry must change its present engineering methods to build more secure software.

References
