138 Core Software Security
prevent tampering or corruption of test data, testing tools, the integrated
test environment, as well as the test plan itself and both the raw and final-
ized test results. It is also important to ensure that each tool set and test
technique is appropriate for the individual software vulnerabilities that
are being tested.
Testing for both functionality and security requires execution of the
code and validation/verification of the results. It is also not always auto-
mated, because human intervention by experienced software security
architects is typically needed. Because of the complexities and interactions
of software ecosystems such as SaaS or cloud environments, the knowledge
of such experts is required so that a wider range of scenarios can be tested.
As mentioned previously, the test plan lays out what needs to be tested
for functionality, protected for security, and how the application will react
to specific attacks. The test plan is a joint effort by the project manage-
ment, development, and security teams, among others, to specify the
logistics of the test plan, including who will execute the testing and when
testing will begin and end.
The following are common steps that can be used to implement a test
plan regardless of the strategy, framework, or standard being used:
Define test scripts. Scripts are very detailed, logical steps of instruc-
tions that tell a person or tool what to do during the testing. Func-
tional testing scripts are step-by-step instructions that depict a
specific scenario or situation that the use case will encounter as well
as the expected results. Secure testing scripts are scripts created spe-
cifically to test the security of the application. The basis for these
scripts comes from the threat models that were generated during the
design phase. Misuse cases define what needs to be protected (assets)
and what types of attacks can gain access to those assets. Secure test
scripts define the acts of carrying out those attacks.
Define the user community. Defining the user community helps
testers identify acceptable levels of failures and risk.
Identify the showstoppers. The must-haves and the "what-if-
available" scenarios should be defined in the use case. If they are
not, a revisit to the requirements may be necessary so that these
specifications can be documented.
Identify internal resources. Internal resources come from the com-
pany's organization, including developers, analysts, software tools,
and sometimes project managers.
Design and Development (A3): SDL Activities and Best Practices 139
Identify external resources. External resources are tools or people
who are hired on a temporary basis to come into a project, test the
application, and report findings. External resources are best suited
for security testing because they typically come highly trained in
secure programming techniques and they are far removed from the
code and any internal politics. If external resources are needed, the
test plan needs to answer the following questions: (1) What are they
going to test? (2) To whom will they report? and (3) With whom will
they be working?2
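The relationship between a misuse case and a secure test script can be sketched in code. The following is a minimal illustration in Python; the `login` entry point, the attack inputs, and the expected results are hypothetical examples, not from any particular application.

```python
# Minimal sketch of a secure test script derived from a misuse case.
# The login() target and the attack inputs below are hypothetical.

def login(username: str, password: str) -> bool:
    """Stand-in for the application entry point under test."""
    return username == "admin" and password == "s3cret"

# Each step pairs an attack input (from the misuse case) with the
# result a secure implementation must produce.
secure_test_script = [
    {"step": "SQL injection in username", "input": ("' OR '1'='1", "x"), "expect": False},
    {"step": "empty credentials",         "input": ("", ""),             "expect": False},
    {"step": "valid credentials",         "input": ("admin", "s3cret"),  "expect": True},
]

def run_script(script):
    """Execute every step and collect (step, passed) results."""
    results = []
    for case in script:
        actual = login(*case["input"])
        results.append((case["step"], actual == case["expect"]))
    return results

if __name__ == "__main__":
    for step, passed in run_script(secure_test_script):
        print(f"{'PASS' if passed else 'FAIL'}: {step}")
```

Because the script is data rather than code, the same runner can execute functional scenarios and attack scenarios alike; only the step definitions change.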
Assessing the security properties and behaviors of software as it inter-
sects with external entities such as human users, the environment, and
other software and as its own components interact with each other is a
primary objective of security testing. As such, it should verify that soft-
ware exhibits the following properties and behaviors:
Its behavior is predictable and secure.
It exposes no vulnerabilities or weaknesses.
Its error- and exception-handling routines enable it to maintain a
secure state when confronted by attack patterns or intentional faults.
It satisfies all of its specified and implicit nonfunctional security
requirements.
It does not violate any specified and implicit nonfunctional security
requirements.
It does not violate any specified security constraints.
As much of its runtime-interpretable source code and byte code
as possible has been obscured or obfuscated to deter reverse
engineering.3,4
A security test plan should be included in the overall software test
plan and should define all security-related testing activities, including the
following:
Security test cases or scenarios (based on abuse cases)
Test data, including attack patterns
Test oracle (if one is to be used)
Test tools (white box, black box, static, and dynamic)
Analyses to be performed to interpret, correlate, and synthesize the
results from the various tests and outputs from the various tools.5,6
Software security testing techniques can be categorized as white box,
gray box, or black box:
White box. Testing from an internal perspective, i.e., with full
knowledge of the software internals; the source code, architecture
and design documents, and configuration files are available for
analysis.
Gray box. Analyzing the source code for the purpose of design-
ing the test cases, but using black-box testing techniques; both the
source code and executable binary are available for analysis.
Black box. Testing the software from an external perspective, i.e.,
with no prior knowledge of the software; only binary executable or
intermediate byte code is available for analysis.7,8
The commonly used security testing techniques can be categorized
using the above as follows:
Source code analysis (white box). Source-code security analyzers
examine source code to detect and report software weaknesses that
can lead to security vulnerabilities. The principal advantage that
source-code security analyzers have over the other types of static anal-
ysis tools is the availability of the source code. The source code con-
tains more information than code that must be reverse- engineered
from byte code or binary. Therefore, it is easier to discover software
weaknesses that can lead to security vulnerabilities. Additionally, if
the source code is available in its original form, it will be easier to fix
any security vulnerabilities that are found.
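The core idea of flagging weakness patterns in source text can be shown in a few lines. This is only a sketch: the rule set below is illustrative, and real analyzers rely on parsing and data-flow analysis rather than regular expressions.

```python
import re

# Hypothetical rule set: patterns for calls that commonly lead to
# vulnerabilities. Real source-code analyzers build an abstract syntax
# tree and track data flow; a regex pass only shows the reporting idea.
RULES = {
    r"\beval\s*\(": "CWE-95: eval() on untrusted input",
    r"\bos\.system\s*\(": "CWE-78: possible OS command injection",
    r"\bpickle\.loads\s*\(": "CWE-502: unsafe deserialization",
}

def scan_source(source: str):
    """Return (line_number, message) for each rule match in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings
```

Because the scanner sees the original source, each finding can carry an exact line number, which is what makes remediation easier than with reverse-engineered code.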
Property-based (white box). Property-based testing is a formal
analysis technique developed at the University of California, Davis.
Property-based testing validates that the software's implemented
functionality satisfies its specifications. It does this by examining
security-relevant properties revealed by the source code, such as the
absence of insecure state changes. These security-relevant properties
in the code are then compared against the software's specification
to determine whether the security assumptions have been met.
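In the same spirit, a security-relevant property can be checked mechanically across many generated inputs. The sketch below is a simplified Python illustration, not the formal UC Davis technique itself; the `sanitize` function and its specified property (output never contains angle brackets) are hypothetical.

```python
import random
import string

def sanitize(text: str) -> str:
    """Toy function under test; its (hypothetical) spec says the
    output must never contain '<' or '>'."""
    return text.replace("<", "").replace(">", "")

def check_property(trials: int = 500) -> bool:
    """Check the spec's property against many generated inputs."""
    rng = random.Random(42)  # fixed seed so the check is reproducible
    alphabet = string.ascii_letters + "<>/\"'"
    for _ in range(trials):
        candidate = "".join(rng.choice(alphabet)
                            for _ in range(rng.randint(0, 40)))
        result = sanitize(candidate)
        if "<" in result or ">" in result:
            return False  # property violated: insecure state reachable
    return True
```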
Source-code fault injection (white box, gray box). Fault injec-
tion is a technique used to improve code coverage by testing all code
paths, especially error-handling code paths that may not be exercised
during functional testing. In fault injection testing, errors are
injected into the software to simulate unintentional attacks on the
software through its environment, and attacks on the environment
itself. In source-code fault injection, the tester decides when environ-
ment faults should be triggered. The tester then “instruments” the
source code by nonintrusively inserting changes into the program
that reflect the changed environment data that would result from
those faults. The instrumented source code is then compiled and
executed, and the tester observes the ways in which the executing
software's state changes when the instrumented portions of code are
executed. This allows the tester to observe the secure and nonsecure
state changes in the software resulting from changes in its environ-
ment. The tester can also analyze the ways in which the software's
state change results from a fault propagating through the source
code. This type of analysis is typically referred to as fault propagation
analysis, and involves two techniques of source-code fault injection:
extended propagation analysis and interface propagation analysis.
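A small Python sketch can show the instrumentation idea: the environment interface the code depends on is swapped for a faulty one, forcing the error-handling path to execute. The function names here are illustrative, not taken from any real fault-injection tool.

```python
# Sketch: "instrumentation" here is a wrapper that replaces an
# environment interface (the file system) with one that fails, so the
# error-handling path is exercised and its resulting state observed.

def read_config(path: str) -> str:
    """Code under test: must fail securely when the environment misbehaves."""
    try:
        with open(path) as fh:
            return fh.read()
    except OSError:
        return ""  # secure default: no partial or stale data

def inject_fault(func, *args):
    """Instrumented run: force open() to raise, simulating a disk fault."""
    import builtins
    original_open = builtins.open

    def faulty_open(*a, **kw):
        raise OSError("injected environment fault")

    builtins.open = faulty_open
    try:
        return func(*args)
    finally:
        builtins.open = original_open  # restore the real environment
```

The tester compares the state after the faulty run (here, the empty-string secure default) against the state of a normal run to see how the fault propagates.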
Dynamic code analysis (gray box). Dynamic code analysis exam-
ines the code as it executes in a running application, with the tester
tracing the external interfaces in the source code to the correspond-
ing interactions in the executing code, so that any vulnerabilities or
anomalies that arise in the executing interfaces are simultaneously
located in the source code, where they can be fixed. Unlike static
analysis, dynamic analysis enables the tester to exercise the software in
ways that expose vulnerabilities introduced by interactions with users
and changes in the configuration or behavior of environmental com-
ponents. Because the software is not fully linked and deployed in its
actual target environment, the testing tool essentially simulates these
interactions and their associated inputs and environment conditions.
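The tracing of an external interface down into the executing code can be sketched with Python's built-in tracing hook. This is a toy gray-box illustration; `parse_input` and `normalize` are hypothetical names standing in for an external interface and its internal call chain.

```python
import sys

def trace_calls(func, *args):
    """Record every function call made while func executes,
    giving a gray-box view of the running code."""
    calls = []

    def tracer(frame, event, arg):
        if event == "call":
            calls.append(frame.f_code.co_name)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always remove the hook
    return result, calls

def parse_input(data: str) -> str:
    """External interface under observation."""
    return normalize(data)

def normalize(data: str) -> str:
    return data.strip().lower()
```

The call list ties each observed runtime interaction back to a named location in the source, which is what lets an anomaly seen during execution be located and fixed in the code.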
Binary fault injection (gray box, black box). Binary fault injection
is a runtime analysis technique whereby an executing application is
monitored as faults are injected. By monitoring system call traces, a
tester can identify the names of system calls, the parameters to each
call, and the call’s return code. This allows the tester to discover the
names and types of resources being accessed by the calling software,
how the resources are being used, and the success or failure of each
access attempt. In binary fault analysis, faults are injected into the
environment resources that surround the application.
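The monitoring half of this technique amounts to recovering, from each trace line, the call name, its parameters, and its return code. The sketch below parses strace-style output; the sample lines are illustrative rather than captured from a live trace.

```python
import re

# Sketch: parse strace-style trace lines to recover the system call
# name, its parameters, and its return code, as a monitor would while
# faults are injected into the surrounding environment.
LINE = re.compile(r'^(?P<name>\w+)\((?P<args>.*)\)\s+=\s+(?P<ret>-?\d+)')

def parse_trace_line(line: str):
    """Return (call_name, parameters, return_code), or None if the
    line is not a recognizable call record."""
    m = LINE.match(line)
    if not m:
        return None
    return m.group("name"), m.group("args"), int(m.group("ret"))
```

A negative return code on a call that should succeed, after a fault has been injected into a resource, is exactly the kind of failed access attempt the tester is looking for.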
Fuzz testing (black box). Fuzzing is a technique that is used to
detect faults and security-related bugs in software by providing
random inputs (fuzz) to a program. As opposed to static analysis,
where source code is reviewed line by line for bugs, fuzzing con-
ducts dynamic analysis by generating a variety of valid and invalid
inputs to a program and monitoring the results. In some instances,
the result might be the program crashing.
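A fuzzer's generate-feed-monitor loop fits in a few lines. The following Python sketch fuzzes a toy length-prefixed parser; the record format and the parser are hypothetical, and real fuzzers add coverage feedback and input mutation on top of this loop.

```python
import random

def parse_record(data: bytes) -> int:
    """Toy parser under test: a length-prefixed record (hypothetical format)."""
    if len(data) < 1:
        raise ValueError("empty input")
    length = data[0]
    if len(data) - 1 < length:
        raise ValueError("truncated record")
    return length

def fuzz(iterations: int = 1000) -> list:
    """Feed random bytes to the parser and collect unexpected failures."""
    rng = random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 16)))
        try:
            parse_record(blob)
        except ValueError:
            pass                      # expected, documented failure mode
        except Exception as exc:      # anything else is a finding
            crashes.append((blob, type(exc).__name__))
    return crashes
```

Only undocumented exception types are recorded: a fuzzing finding is a failure mode the program's specification does not account for, not merely a rejected input.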
Binary code analysis (black box). Binary code scanners analyze
machine code to model a language-neutral representation of the
program's behaviors, control and data flows, call trees, and exter-
nal function calls. Such a model may then be traversed by an auto-
mated vulnerability scanner in order to locate vulnerabilities caused
by common coding errors and simple back doors. A source code
emitter can use the model to generate a human-readable source code
representation of the program's behavior, enabling manual code
review for design-level security weaknesses and subtle back doors
that cannot be found by automated scanners.
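As a deliberately crude stand-in for the flagging step, a raw binary image can be searched for the names of risky external functions it references. This sketch only shows the reporting idea; real binary scanners parse the executable's import table and build the behavioral model described above.

```python
# Sketch: search a raw binary image for names of risky external
# functions. The list of names is illustrative; a real scanner parses
# the import table and models control and data flow instead.

RISKY_IMPORTS = [b"strcpy", b"gets", b"sprintf", b"system"]

def scan_binary(image: bytes):
    """Return the risky external function names found in the image."""
    return [name.decode() for name in RISKY_IMPORTS if name in image]
```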
Byte code analysis (black box). Byte code scanners are used just
like source-code security analyzers, but they detect vulnerabilities
in the byte code. For example, the Java language is compiled into
a platform-independent byte code format that is executed in the
runtime environment (Java Virtual Machine). Much of the infor-
mation contained in the original Java source code is preserved in
the compiled byte code, thus making de-compilation possible. Byte
code scanners can be used in cases where the source code is not
available for the software—for example, to evaluate the impact a
third-party software component will have on the security posture of
an application.
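Python itself illustrates the point: its compiled byte code can be inspected without the source. The sketch below uses the standard-library `dis` module to flag loads of names an analyst considers dangerous; the `DANGEROUS` set is an illustrative choice, and real byte code scanners apply far richer rule sets.

```python
import dis

# Sketch: inspect compiled Python byte code (no source code needed)
# for loads of names an analyst considers dangerous. The DANGEROUS
# set is illustrative, not a complete rule set.
DANGEROUS = {"eval", "exec"}

def scan_bytecode(code_obj):
    """Return dangerous global names loaded by the compiled code object."""
    found = set()
    for instr in dis.get_instructions(code_obj):
        if instr.opname in ("LOAD_GLOBAL", "LOAD_NAME") and instr.argval in DANGEROUS:
            found.add(instr.argval)
    return found
```

Because the check runs on the compiled object, it works equally well on a third-party component delivered without source, which is the scenario described above.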
Black box debugging (black box). Debuggers for low-level pro-
gramming languages such as C or ASM are software tools that
enable the tester to monitor the execution of a program, start and
stop a program, set breakpoints, and modify values. Debuggers are
typically used to debug an application when the source code or the
compiler symbols are available. The source code and compiler symbols
allow information, such as the values of internal variables, to be
tracked to discover some aspects of internal program behavior. However,
sometimes only the binary is available, and the binary was com-
piled from code with no compiler symbols or debug flags set. This
is typical in commercial software, legacy software, and software that