9. Allows for analysis of applications without access to the actual code
10. Identifies vulnerabilities that might have been false negatives in the
static code analysis
11. Permits validation of static code analysis findings
12. Can be conducted on any application.[9,10]
Limitations of Dynamic Code Analysis
1. Automated tools provide a false sense of security that everything is
being addressed.
2. Automated tools produce false positives and false negatives.
3. Automated tools are only as good as the rules they are using to scan
with.
4. As with static analysis, there are not enough trained personnel to thoroughly conduct dynamic code analysis.
5. It is more difficult to trace the vulnerability back to the exact location in the code, so it takes longer to fix the problem.[11,12]
If you have no access to source or binaries, are not a software developer, and don't understand software builds, or you are performing a "pen test" or other test of an operational environment, you will likely choose to use a dynamic tool; otherwise, you will likely use a static analysis tool. Ideally, you should use both when possible.
Advantages of Fuzz Testing
1. The great advantage of fuzz testing is that the test design is extremely simple and free of preconceptions about system behavior.
2. The systematic/random approach allows this method to find bugs that would often be missed by human eyes. Furthermore, when the tested system is totally closed (e.g., a SIP phone), fuzzing is one of the only means of reviewing its quality.
3. Bugs found using fuzz testing are frequently severe, exploitable bugs that could be used by a real attacker. This has become even truer as fuzz testing has become more widely known, because the same techniques and tools are now used by attackers to exploit deployed software. This is a major advantage over binary or source auditing, or even fuzzing's close cousin, fault injection, which often relies on artificial fault conditions that are difficult or impossible to exploit.
Limitations of Fuzz Testing
1. Fuzzers tend to find simple bugs; moreover, the more protocol-aware a fuzzer is, the fewer "weird" errors it will find. This is why the exhaustive/random approach is still popular.
2. Another problem is that when you do black-box testing, you usually attack a closed system, which makes it more difficult to evaluate the danger/impact of a found vulnerability (no debugging possibilities).
3. The main problem with using fuzzing to find program faults is that it generally finds only very simple faults. The problem space is exponential, and every fuzzer takes shortcuts to find something interesting in a timeframe that a human cares about. A primitive fuzzer may have poor code coverage; for example, if the input includes a checksum that is not properly updated to match other random changes, only the checksum-validation code will be exercised (see the sketch following this list). Code coverage tools are often used to estimate how "well" a fuzzer works, but these are only guidelines to fuzzer quality. Every fuzzer can be expected to find a different set of bugs.[13,14]
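To make the checksum point concrete, here is a minimal mutation-fuzzer sketch in Python. Everything in it is illustrative: the four-byte trailing CRC32, the mutate/fix_checksum split, and the parse_packet target are assumptions for this example, not a description of any particular fuzzing tool.

import random
import zlib

def mutate(data: bytes, n_flips: int = 4) -> bytearray:
    # Naive mutation: overwrite a few random bytes with random values.
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return buf

def fix_checksum(packet: bytearray) -> bytes:
    # Recompute the (assumed) trailing CRC32 so mutated packets survive
    # validation. Without this step, almost every input would be rejected
    # at the checksum check, and only that validation code gets exercised.
    body = bytes(packet[:-4])
    return body + zlib.crc32(body).to_bytes(4, "big")

def fuzz(target, seed: bytes, iterations: int = 10_000) -> None:
    # Feed mutated, checksum-corrected inputs to the parser under test.
    for i in range(iterations):
        case = fix_checksum(mutate(seed))
        try:
            target(case)
        except Exception as exc:  # any crash/exception is a finding
            print(f"iteration {i}: {type(exc).__name__}: {exc!r}")

# Example usage (parse_packet is the hypothetical system under test):
# fuzz(parse_packet, valid_sample_packet)

The fix_checksum step is the essential point: a fuzzer that skips it will report excellent-looking activity while exercising almost none of the code behind the validation check.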
Advantages of Manual Source Code Review
1. Requires no supporting technology
2. Can be applied to a variety of situations
3. Flexible
4. Promotes teamwork
5. Can be applied early in the SDLC
Limitations of Manual Source Code Review
1. Can be time-consuming
2. Supporting material not always available
3. Requires significant human thought and skill to be effective.[15]
6.4.1 Static Analysis
Static program analysis is the analysis of computer software that is per-
formed without actually executing programs. It is used predominantly to
perform analysis on a version of the source code; it is also performed on
object code. In contrast, dynamic analysis is performed by actually execut-
ing the software programs. Static analysis is performed by an automated
software tool and should not be confused with human analysis or soft-
ware security architectural reviews, which involve manual human code
reviews, including program understanding and comprehension. When
static analysis tools are used properly, they have a distinct advantage over
human static analysis in that the analysis can be performed much more
frequently and with security knowledge superior to that of many software
developers. This allows expert software security architects or engineers to be brought in only when absolutely necessary.
Static analysis (see Figure 6.3) is also known as static application secu-
rity testing (SAST). It identifies vulnerabilities during the development
or quality assurance (QA) phase of a project. SAST provides line-of-code-
level detection that enables development teams to remediate vulnerabili-
ties quickly.
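To illustrate what line-of-code-level detection means, the toy checker below walks a Python syntax tree and reports the file and line of two simplistic rule matches. It is a deliberately naive sketch of the idea, not how commercial SAST engines work; the rule set and the cursor.execute heuristic are invented for this example.

import ast

DANGEROUS_CALLS = {"eval", "exec"}  # toy rule set, not a real one

class ToyChecker(ast.NodeVisitor):
    # Reports file:line findings, mimicking line-of-code-level detection.
    def __init__(self, filename: str):
        self.filename = filename
        self.findings: list[str] = []

    def visit_Call(self, node: ast.Call) -> None:
        # Rule 1: direct call to eval()/exec().
        if isinstance(node.func, ast.Name) and node.func.id in DANGEROUS_CALLS:
            self._report(node, f"use of {node.func.id}()")
        # Rule 2: cursor.execute(...) whose query is not a string literal,
        # a crude proxy for SQL built from dynamic input.
        if (isinstance(node.func, ast.Attribute) and node.func.attr == "execute"
                and node.args and not isinstance(node.args[0], ast.Constant)):
            self._report(node, "possible SQL injection: dynamic query string")
        self.generic_visit(node)

    def _report(self, node: ast.AST, message: str) -> None:
        self.findings.append(f"{self.filename}:{node.lineno}: {message}")

def scan(path: str) -> list[str]:
    with open(path, encoding="utf-8") as src:
        tree = ast.parse(src.read(), filename=path)
    checker = ToyChecker(path)
    checker.visit(tree)
    return checker.findings

Because findings carry an exact file and line number, a developer can jump straight to the offending statement, which is what makes SAST output actionable during development.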
[Figure 6.3 Static analysis flow diagram: Input (Source Code) → Testing (Static Analysis) → Output (Potential Defects Identified). Annotations: has access to the actual instructions the software will be executing; can provide the exact location of code where the problem resides; can be executed by software developers before checking in code; defects identified include XSS, SQL injection, specific problems/errors with applications, configuration mistakes, and patch errors; limited scope of what can be found.]

The use of static analysis tools and your choice of an appropriate vendor for your environment is another technology factor that is key to success. Any technology that beneficially automates any portion of the software development process should be welcome, but this software has become "shelfware" in many organizations because the right people and/or the right process were not used in selecting the tool or tools. Not all tools are
created equal in this space: Some are better at some languages than others,
whereas others have great front-end GRC (governance, risk management,
and compliance) and metric analysis capabilities. In some cases you may
have to use up to three different tools to be effective. Some of the popular
SAST vendor products are Coverity,[16] HP Fortify Static Code Analyzer,[17] IBM Security AppScan Source,[18] klocwork,[19] Parasoft,[20] and Veracode.[21]
One of the challenges in using a static analysis tool is that false positives may be reported when analyzing an application that interacts with closed-source components or external systems, because without the source code it is impossible to trace the flow of data into the external system and hence ensure the integrity and security of the data. The use of static code analysis tools can also result in false negatives, where vulnerabilities exist but the tool does not report them. This might occur if a new vulnerability is discovered in an external component or if the analysis tool has no knowledge of the runtime environment and whether it is configured securely. A static code analysis tool will also often produce false positives, where the tool reports a possible vulnerability that in fact is not one. This often occurs because the tool cannot be sure of the integrity and security of data as it flows through the application from input to output.[22]
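The fragment below illustrates why such false positives occur. The vendor_lib module and its escape_html function are hypothetical stand-ins for a closed-source component: because the tool cannot see inside it, it cannot establish that the tainted input is sanitized before output.

# vendor_lib is a hypothetical closed-source component. A static analysis
# tool scanning only our code cannot trace data flow through it.
from vendor_lib import escape_html

def handle_request(request):
    name = request.params["name"]        # taint source: user-controlled input
    safe_name = escape_html(name)        # sanitization the tool cannot verify
    return f"<p>Hello, {safe_name}</p>"  # may be flagged as XSS: a false positive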
Michael Howard, in his 2006 IEEE Security & Privacy article titled "A Process for Performing Security Code Reviews,"[23] proposes the following heuristic as an aid in determining code review priority. The heuristic can be used as a guide for prioritizing static, dynamic, fuzzing, and manual code reviews; a simple scoring sketch follows the list.
Old code: Older code may have more vulnerabilities than new code,
because newer code often reflects a better understanding of security
issues. All “legacy” code should be reviewed in depth.
Code that runs by default: Attackers often go after installed code
that runs by default. Such code should be reviewed earlier and more
deeply than code that does not execute by default. Code running by
default increases an application's attack surface.
Code that runs in elevated context: Code that runs with elevated identities, e.g., root in *nix, also requires earlier and deeper review, because code identity is another component of the attack surface.
Anonymously accessible code: Code that anonymous users can
access should be reviewed in greater depth than code that only valid
users and administrators can access.
Code listening on a globally accessible network interface: Code
that listens by default on a network, especially uncontrolled net-
works such as the Internet, is open to substantial risk and must be
reviewed in depth for security vulnerabilities.
Code written in C/C++/assembly language: Because these lan-
guages have direct access to memory, buffer-manipulation vulner-
abilities within the code can lead to buffer overflows, which often
lead to malicious code execution. Code written in these languages
should be analyzed in depth for buffer overflow vulnerabilities.
Code with a history of vulnerabilities: Code that has shown a
number of security vulnerabilities in the past should be suspect,
unless it can be demonstrated that those vulnerabilities have been
effectively removed.
Code that handles sensitive data: Code that handles sensitive data
should be analyzed to ensure that weaknesses in the code do not
disclose such data to untrusted users.
Complex code: Complex code has a higher bug probability, is more
difficult to understand, and is likely to have more security vulnerabilities.
Code that changes frequently: Frequently changing code often
results in new bugs being introduced. Not all of these bugs will be
security vulnerabilities, but compared with a stable set of code that is
updated only infrequently, code that is less stable will probably have
more vulnerabilities.
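As one way to operationalize this heuristic, the sketch below assigns an illustrative weight to each of Howard's risk factors and ranks components by total score. The weights, factor names, and Component structure are assumptions for this example; Howard's article does not prescribe a numeric scheme.

from dataclasses import dataclass, field

# Illustrative weights for Howard's risk factors; tune for your portfolio.
WEIGHTS = {
    "legacy_code": 3,
    "runs_by_default": 3,
    "elevated_privileges": 3,
    "anonymously_accessible": 2,
    "listens_on_network": 3,
    "c_cpp_or_assembly": 2,
    "vulnerability_history": 2,
    "handles_sensitive_data": 2,
    "high_complexity": 1,
    "changes_frequently": 1,
}

@dataclass
class Component:
    name: str
    factors: set = field(default_factory=set)

    @property
    def review_priority(self) -> int:
        # Higher score = review earlier and more deeply.
        return sum(WEIGHTS[f] for f in self.factors)

def rank(components):
    return sorted(components, key=lambda c: c.review_priority, reverse=True)

inventory = [
    Component("auth_service",
              {"runs_by_default", "listens_on_network", "handles_sensitive_data"}),
    Component("legacy_parser",
              {"legacy_code", "c_cpp_or_assembly", "vulnerability_history"}),
]
for c in rank(inventory):
    print(f"{c.review_priority:2d}  {c.name}")

A scheme like this is only a triage aid; the point is to make the prioritization explicit and repeatable rather than to compute a precise risk value.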
In Michael Howard's 2004 Microsoft article titled "Mitigate Security Risks by Minimizing the Code You Expose to Untrusted Users,"[24] he also suggests a notional three-phase code analysis process that optimizes the use of static analysis tools:
Phase 1: Run all available code analysis tools.
Multiple tools should be used to offset tool biases and minimize false
positives and false negatives.
Analysts should pay attention to every warning or error.
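A minimal sketch of Phase 1 in Python follows. It assumes the open-source analyzers Bandit and Pylint are installed, standing in for whatever tools your organization licenses, and simply pools every warning from both for triage.

import json
import subprocess

def run_bandit(path):
    # Bandit is an open-source Python security linter with JSON output.
    out = subprocess.run(["bandit", "-r", path, "-f", "json"],
                         capture_output=True, text=True)
    results = json.loads(out.stdout)["results"] if out.stdout.strip() else []
    return [f"bandit:{r['filename']}:{r['line_number']}: {r['issue_text']}"
            for r in results]

def run_pylint(path):
    # Pylint is a general-purpose static analyzer; JSON output, one message each.
    out = subprocess.run(["pylint", "--output-format=json", path],
                         capture_output=True, text=True)
    messages = json.loads(out.stdout) if out.stdout.strip() else []
    return [f"pylint:{m['path']}:{m['line']}: {m['message']}" for m in messages]

def phase1(path):
    # Run every available tool and keep all warnings for triage. Overlapping
    # findings raise confidence; unique findings offset individual tool bias.
    return sorted(set(run_bandit(path) + run_pylint(path)))

for finding in phase1("src/"):
    print(finding)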