Chapter 11. Stage 6: Secure Coding Policies

In this chapter:

  • Use the Latest Compiler and Supporting Tool Versions

  • Use Defenses Added by the Compiler

  • Use Source-Code Analysis Tools

  • Do Not Use Banned Functions

  • Reduce Potentially Exploitable Coding Constructs or Designs

  • Use a Secure Coding Checklist

As we mentioned in Chapter 7, the software industry is replete with secure-coding best practices—of which very few are followed. The Security Development Lifecycle (SDL) mandates specific coding practices and backs up many of the practices with tests to verify that the policies are adhered to. This chapter outlines the high-level policy and best practices for secure coding. The chapter is purposefully high level because the low-level specifics are covered in Chapter 19, Chapter 20, and Chapter 21.

The following coding best practices must be adhered to for new code and actively analyzed for legacy code:

  • Use the latest compiler and supporting tool versions.

  • Use defenses added by the compiler.

  • Use source-code analysis tools.

  • Do not use banned functions.

  • Reduce potentially exploitable coding constructs or designs.

  • Use a secure coding checklist.

Let’s look at each of these best practices in detail.

Use the Latest Compiler and Supporting Tool Versions

Ultimately, code written by a developer is compiled to a format that is executed by the computer, and the generated code can include defenses added by the compiler. We’ll cover this process in more detail in the next section. You should also define which compiler and tool flags you’ll use, including optimization flags, linker options, and so on. For example, it is advised that for new code, you compile with the highest possible warning level (/W4 in Microsoft Visual C++, -Wall in GNU C Compiler [GCC], and -w in Borland C++) and compile “cleanly,” with no warnings or errors. At a minimum, you must compile cleanly with /W3 if you are using Visual C++.
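To see what a high warning level buys you, consider a comparison that compiles silently at the lowest warning levels (the snippet is our illustration, not an example from the SDL itself). Visual C++ flags it as warning C4018 and GCC flags it with -Wsign-compare, and the warning is worth heeding: the signed operand is silently converted to unsigned.

```c
/* Hypothetical example of a bug class that high warning levels catch
   (C4018 in Visual C++, -Wsign-compare in GCC): in the comparison
   i < u, the usual arithmetic conversions turn i into an unsigned
   value, so -1 becomes a huge number and the "obvious" result is wrong. */
int signed_less_than_unsigned(int i, unsigned int u) {
    return i < u;   /* the comparison a clean /W4 build forces you to fix */
}
```

Here, signed_less_than_unsigned(-1, 1) returns 0 because -1 converts to UINT_MAX; compiling cleanly at the highest warning level forces the developer to confront the conversion explicitly instead of shipping it.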

Use Defenses Added by the Compiler

The newer Microsoft compilers add defenses to compiled code. This defensive code is added automatically by the compiler, not by the developer. The major defensive options are the following:

  • Buffer security check: /GS

  • Safe exception handling: /SAFESEH

  • Compatibility with Data Execution Prevention: /NXCOMPAT

Buffer Security Check: /GS

The /GS flag is a great example of defensive code added by the compiler—the compiler injects code into the application to help detect some kinds of buffer overruns at run time. The latest Microsoft implementation of this defense in Microsoft Visual Studio 2005—it was first available in Visual Studio .NET 2002—performs the following steps when compiling native Win32 C/C++ code:

  • A random “cookie” is placed on the stack before the return address. The cookie value is checked before the function returns to the caller. If the cookie has changed, the application aborts.

  • The compiler rearranges the stack frame so that stack-based buffers are placed in higher memory addresses than other potentially attackable stack-based variables such as function pointers. This process reduces the chance that these other constructs will be overwritten by a buffer overrun.

  • Code is added to protect against vulnerable parameters passed into a function. A vulnerable parameter is a pointer, C++ reference, or C structure that contains a pointer, string buffer, or C++ reference.
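The effect of the cookie can be sketched in plain C. This is a simplified illustration of the idea only: the real /GS check is injected by the compiler, uses a per-image random cookie, and terminates the process on mismatch; the layout, names, and cookie value below are ours.

```c
#include <stdint.h>
#include <string.h>

#define BUF_LEN 8
#define COOKIE  0xBB40E64Eu   /* hypothetical guard value; /GS uses a random one */

/* A guard value placed in higher memory than the buffer, mimicking the
   stack-frame layout the compiler arranges. */
struct guarded_frame {
    char     buff[BUF_LEN];
    uint32_t cookie;          /* an overrun of buff spills into this */
};

/* Returns 1 if the copy left the cookie intact, 0 if an over-long copy
   clobbered it (the real /GS check would abort the process instead). */
int copy_with_cookie(const char *src, size_t len) {
    struct guarded_frame f;
    f.cookie = COOKIE;
    memcpy(&f, src, len);     /* len > BUF_LEN overwrites the cookie */
    return f.cookie == COOKIE;
}
```

A short input leaves the cookie untouched; a 12-byte copy into the 8-byte buffer overwrites it, and the mismatch is detected before the function returns.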

Best Practices

You must compile all C/C++ code with /GS.

Safe Exception Handling: /SAFESEH

The /SAFESEH linker option records the image’s safe exception handlers in the executable. It does this by adding extra exception-handler information that is verified by the operating system at run time to make sure the code is calling a valid exception handler and not a hijacked (overwritten) exception handler.

Best Practices

You must link your code with /SAFESEH.

Compatibility with Data Execution Prevention: /NXCOMPAT

The /NXCOMPAT linker option indicates that the executable file was tested to be compatible with the Data Execution Prevention (DEP) feature in Microsoft Windows (Microsoft 2005).

Best Practices

You must test your application on a computer that uses a CPU that supports DEP, and you must link your code with /NXCOMPAT.

The Microsoft Interface Definition Language (MIDL) compiler, used for building remote procedure call (RPC) and Component Object Model (COM) code, also adds stricter argument checking to the compiled code when you use the /robust switch.

As you can see, the extra defenses are cheap because the compiler automatically adds them. Also note that the execution time and code size overhead is tiny. In our analyses, the potential code size or performance degradation is balanced out by better compiler optimizations.

Best Practices

If your compiler does not add extra defenses to the code, you should consider upgrading the compiler to one that does. This is especially true for C/C++ compilers.

Important

Defenses added by a compiler do not fix security bugs; they are added purely as a speed bump to make attackers’ work harder. Defenses are no replacement for good-quality code.

Use Source-Code Analysis Tools

You must understand that, by themselves, source-code analysis tools do not make software secure. These tools are incredibly useful, but they are no replacement for human beings performing manual code reviews. Make no mistake, we are big fans of source-code analysis tools, but people who use them can fall into the traps explained in the following section.

Source-Code Analysis Tool Traps

People fall prey to the first source-code analysis tool trap when they think of source tools as a “silver bullet.” There is no such thing as a secure-code silver bullet; you have to do many things to make code more secure, and tools are just one part of the mix. Thinking that you can run tools to find all bugs of a certain type is a false and dangerous premise.

The next trap is mistaking false positives (also called noise) for real bugs. For example, some common tools report the following code excerpt as defective because it uses the “dangerous function” strcpy:

void function(char *sz) {
     char buff[32];
     strcpy(buff,sz);
}

void main() {
     function("Hello, World!");
}

This code section is not defective in any way because the source buffer is a constant and is not controlled by an attacker. Another common tool, ITS4 (Cigital 2000; Azario 2002), reports the following:

C:\its4>its4 test.cpp
test.cpp:5:(Very Risky) strcpy
This function is high risk for buffer overflows
Use strncpy instead.

Too many false positives such as this frustrate developers because they must spend a lot of time chasing down non-bugs. The net effect of too many false positives is that developers eventually stop using the tool altogether.

The next issue is that many tools miss real bugs. To reduce the amount of noise created by a tool, developers of source-code analysis tools add heuristics to determine bug probability. The problem with this practice is that the tool might miss real but subtle code bugs.
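As an illustration (our example, not one drawn from a specific tool’s test suite), consider a length check that contains no banned function and no obviously dangerous call, yet is wrong whenever the addition wraps around:

```c
#include <stddef.h>
#include <stdint.h>

/* A subtle bug of the kind noise-reducing heuristics can miss: the
   check reads correctly, but if len1 + len2 overflows size_t, the sum
   wraps to a small value and the check wrongly passes. */
int lengths_fit(size_t len1, size_t len2, size_t cap) {
    return len1 + len2 <= cap;        /* BUG: sum can wrap around */
}

/* A correct version rules out wraparound before comparing. */
int lengths_fit_safe(size_t len1, size_t len2, size_t cap) {
    return len2 <= cap && len1 <= cap - len2;
}
```

With len1 = SIZE_MAX and len2 = 2, the buggy version computes a sum of 1 and reports the lengths as fitting; a pattern-matching tool that only hunts for strcpy-style calls sails right past it.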

Next, source-code analysis tools tend to focus on a subset of programming languages. For example, the Microsoft PREfast technology in Visual Studio 2005 analyzes C and C++ code only, and Watchfire’s AppScan is Web specific. So if your solution uses multiple languages, you may have to invest in multiple source-code analysis tools.

The final issue is that most tools find only source-code bugs, not design errors. For example, Coverity ran its source-code analysis tool on the MySQL database and claimed to have found only 97 bugs (Lemos 2005). Yet many of the security bugs in MySQL are design or installation issues, such as “MySQL ALTER TABLE/RENAME Forces Old Permission Checks” (OSVDB 2004).

Of course, source-code analysis tools do have many benefits when used correctly. Let’s look at some.

Benefits of Source-Code Analysis Tools

Source-code analysis tools offer two major benefits: first, they help scale the code review process, and second, tools can help enforce secure-coding policies. At Microsoft, when we find an “interesting” bug class, we create a tool or add capabilities to an existing tool to help find the bug. Then we use the tools to query an entire code base rapidly. Make no mistake—at this point, we don’t think the interesting bug type has been removed from the code; this is just the start. If the tools find a large number of potential bugs in the code, we update educational programs and, in some cases, the SDL process, to provide prescriptive remedies for the bug type.

Take as an example the coding bug in Windows RPC/DCOM that the Blaster worm took advantage of (Microsoft 2003). The defective code looks like this:

HRESULT GetMachineName(WCHAR *pwszPath) {
    WCHAR wszMachineName[N + 1];
    ...
    LPWSTR pwszServerName = wszMachineName;
    while (*pwszPath != L'\\')
        *pwszServerName++ = *pwszPath++;
    ...
In this code, the attacker controls the pwszPath argument so that she can overflow the wszMachineName buffer. This code bug was not picked up by any tools available within Microsoft, so a Perl script was rapidly written to search for the core construct within the RPC runtime:

use strict;
use File::Find;

my $RECURSE = 1;
my $VERBOSE = 0;

###################################################################
foreach(@ARGV) {
  next if /^-./;
  if ($RECURSE) {
      finddepth(\&processFile,$_);
  } else {
      find(\&processFile,$_);
  }
}

###################################################################
sub processFile {
  my $FILE;
  my $filename = $_;

  if (!$RECURSE && ($File::Find::topdir ne $File::Find::dir)) {
    # Recurse is not set, and we are in a different directory
    $File::Find::prune = 1;
    return;
  }

  # only accept .cxx, .cpp, .c and .cc and header extensions
  return if (!(/\.cpp$|\.c$|\.cxx$|\.cc$|\.hpp$|\.h$|\.hxx$/i));

  print "Checking $filename\n" if $VERBOSE;
  warn "$!\n" unless open FILE, "<" . $filename;

  # reset line number in case the same file is parsed twice (duh!)
  $. = 0;

  while (<FILE>) {
    # Find the core coding construct (*++p = *++q or *p++ = *q++)
    if (/\*\+\+\w+\s*=\s*\*\+\+\w+/ ||
        /\*\w+\+\+\s*=\s*\*\w+\+\+/) {

       s/^\s+//;
       s/\s+$//;

       print $File::Find::name . " (" . $. . ")\n\t" . $_ . "\n";
    }
  }
}

Because of this bug, education was also updated to include the defective code and direction on how to fix the code. Microsoft Research started working on a less noisy source-code analysis tool, which is now part of the normal round of tools run on code as it’s written. As you can see from this example, Microsoft created a “quick and dirty” tool to find potentially defective code, but the purpose was to understand how many problematic coding constructs existed in the Windows code base so that we could determine how bad the problem might be. With this information in hand, we could move resources around to get more developers hand-reviewing code.

The second use for source-code analysis tools is to enforce coding policy. Good tools are the best way to enforce policies such as a ban on certain functions or constructs. We do this at Microsoft at code check-in time. A battery of tools runs just before a developer’s check-in, and any bugs found by the tools are flagged and triaged for repairs. Again, these tools are no replacement for good developers; they simply augment the code-review and code-quality process and act as a backstop, just in case a developer makes a mistake.

The two major source-code analysis tools from Microsoft are PREfast and FxCop. In Chapter 21, you can find a list of the warnings from these tools that must be triaged and fixed.

Best Practices

You should augment your software development process with good source-code analysis tools. Used alone, source-code analysis tools will not solve your source-code security issues. They are a defensive backstop.

Do Not Use Banned Functions

The subject of banned functions is covered in great detail in Chapter 19. For our purposes here, all you need to know is that there is a population of functions that, although fine 20 years ago, is simply not secure enough in light of today’s threats. You can find banned functions by using header files, code-scanning tools, or updated compilers. An example header file named banned.h is included on the disc accompanying this book. The latest version of the Visual C/C++ compiler from Microsoft deprecates many functions, and the developer is warned during code compilation.
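As a taste of what a replacement looks like (a sketch in portable C; Chapter 19 and the banned.h header give the authoritative list and the strcpy_s-style replacements), a bounded, always-terminating copy can stand in for the banned strcpy:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Instead of the banned strcpy, which copies with no regard for the
   destination size, use a bounded copy. snprintf is shown here as a
   portable stand-in: it never writes more than dstsize bytes and
   always NUL-terminates, truncating over-long input. */
void copy_bounded(char *dst, size_t dstsize, const char *src) {
    snprintf(dst, dstsize, "%s", src);
}
```

Over-long input is truncated to fit the destination rather than overrunning it, which converts a potential buffer overrun into a (detectable) truncation.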

Reduce Potentially Exploitable Coding Constructs or Designs

This section may seem like a broader version of the two prior sections, but it’s quite different. Some commonly used coding constructs or designs are not secure. For example, in Windows, it’s possible to create an object with a NULL DACL—in other words, an object with an empty access control list (ACL), which means the object has no protection. Obviously, this is insecure. Tools such as Application Verifier—discussed in Chapter 12—can detect these weak ACLs at run time, and the PREfast source-code analysis technology built into Visual Studio 2005 will also detect this at compile time. Therefore, code such as this:

SetSecurityDescriptorDacl(&sd, TRUE, NULL, FALSE);

will result in this compiler warning:

c:\Code\testDACL\testDACL.cpp(21) : warning C6248: Setting a SECURITY_DESCRIPTOR's DACL to
NULL will result in an unprotected object

The SetSecurityDescriptorDacl function is not insecure, but it can be called in a way that would render a system insecure.

Other examples of potentially vulnerable constructs in Windows include shared writable segments and executable pages. We’re not going to explain these in detail because they are discussed in other texts (Howard and LeBlanc 2003). On *nix systems, examples of bad design constructs include symbolic-link errors (Wheeler 2002; OSVDB 2006).

In C# code, you should consider wrapping networking-facing code that performs arithmetic and array bounds lookup with the checked operator:

UInt32 i = GetFromNetwork();
try {
    checked {
        UInt32 offset = i * 2;
        // Do array lookup
    }
}
catch (OverflowException ex) {
    // Handle exception
}

Failing this, you could perform the integer arithmetic overflow check to avoid the overhead of a potential exception (Howard, LeBlanc, and Viega 2005).
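Sketched in C (our illustration; a C# version would compare against UInt32.MaxValue before multiplying), the manual check validates the operand before the arithmetic, so no exception machinery is needed:

```c
#include <stdint.h>

/* Manual equivalent of the checked block above: verify that i * 2
   cannot wrap before performing it. Returns 1 and stores the result
   on success, 0 if the multiplication would overflow. */
int double_checked(uint32_t i, uint32_t *out) {
    if (i > UINT32_MAX / 2)
        return 0;          /* would overflow: reject the input */
    *out = i * 2u;
    return 1;
}
```

The pre-check costs one comparison per call, which is cheaper than catching an exception on the (attacker-controllable) overflow path.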

Use a Secure Coding Checklist

Create a secure coding checklist that describes the minimal requirements for any code that is checked in to the software product. A checklist helps ensure that code meets a minimum security bar, although you can’t write secure code simply by following one. It is, however, a reasonable start, and it’s especially useful for new employees.

Summary

In recent years, a great deal of attention has been paid to secure-coding best practices, but although much material is available, alarmingly few developers adhere to, or are even aware of, such best practices. The SDL mandates that coding best practices be adhered to. These are taught during standard yearly education for all developers, and they are enforced through the use of source-code analysis tools. Such tools are very useful and can help find security bugs, but they are not a silver bullet; do not rely on any source-code analysis tool to replace a developer’s skills. Also, the SDL has banned certain function calls and cryptographic algorithms that have led to security vulnerabilities in the past. You must not simply ban dangerous functionality—you must also provide prescriptive replacements. In our experience, developers have no problem adhering to security requirements as long as you give them good guidance and tools to verify adherence.
