Software ranging from customer-facing applications and services to smaller programs, down to the smallest custom scripts written to support business needs, is everywhere in our organizations. That software runs on hardware that brings its own security requirements and risks. Ensuring that this software and hardware are secure is an important part of a security professional's skill set.
The process of designing, creating, supporting, and maintaining that software is known as the software development life cycle (SDLC). As a security practitioner, you need to understand the SDLC and its security implications to ensure that the software your organization uses is well written and secure throughout its lifespan.
In this chapter you will learn about major SDLC models and the reasons for choosing them, as well as the architectures they are implemented to support. You will learn about secure coding best practices and tools, and software assessment methods and techniques. As part of this, you will see how software is tested and reviewed and how these processes fit into the SDLC. You will learn about code review and inspection methodologies like pair programming and over-the-shoulder code reviews as well as Fagan inspection that can help ensure that the code your organization puts into production is ready to face both users and attackers.
In addition, you will learn about hardware assurance best practices ranging from trusted foundries and supply chain security to secure processing and specific hardware security concepts and techniques.
Building, deploying, and maintaining software requires security involvement throughout the software's life cycle. The CySA+ exam objectives focus on the software development life cycle, software assessment methods and tools, coding practices, platforms, and architectures.
The software development life cycle (SDLC) describes the steps in a model for software development throughout its life. As shown in Figure 9.1, it maps software creation from an idea to requirements gathering and analysis to design, coding, testing, and rollout. Once software is in production, it also includes user training, maintenance, and decommissioning at the end of the software package's useful life.
Software development does not always follow a formal model, but most enterprise development for major applications does follow most, if not all, of these phases. In some cases, developers may even use elements of an SDLC model without realizing it!
The SDLC is useful for organizations and for developers because it provides a consistent framework to structure workflow and to provide planning for the development process. Despite these advantages, simply picking an SDLC model to implement may not always be the best choice. Each SDLC model has certain types of work and projects that it fits better than others, making choosing an SDLC model that fits the work an important part of the process.
Regardless of which SDLC or process is chosen by your organization, a few phases appear in most SDLC models:
The order of the phases may vary, with some progressing in a simple linear fashion and others taking an iterative or parallel approach. You will still see some form of each of these phases in successful software life cycles.
The SDLC can be approached in many ways, and over time a number of formal models have been created to help provide a common framework for development. While formal SDLC models can be very detailed, with specific practices, procedures, and documentation, many organizations choose the elements of one or more models that best fit their organizational style, workflow, and requirements.
The Waterfall methodology is a sequential model in which each phase is followed by the next phase. Phases do not overlap, and each logically leads to the next. A typical six-phase Waterfall process is shown in Figure 9.2. In Phase 1, requirements are gathered and documented. Phase 2 involves analysis intended to build business rules and models. In Phase 3, a software architecture is designed, and coding and integration of the software occurs in Phase 4. Once the software is complete, Phase 5 occurs, with testing and debugging being completed in this phase. Finally, the software enters an operational phase, with support, maintenance, and other operational activities happening on an ongoing basis.
Waterfall has been replaced in many organizations because it is seen as relatively inflexible, but it remains in use for complex systems. Since Waterfall is not highly responsive to changes and does not account for internal iterative work, it is typically recommended for development efforts that involve a fixed scope and a known timeframe for delivery and that are using a stable, well-understood technology platform.
The Spiral model uses the linear development concepts from the Waterfall model and adds an iterative process that revisits four phases multiple times during the development life cycle to gather more detailed requirements, design functionality guided by the requirements, and build based on the design. In addition, the Spiral model puts significant emphasis on risk assessment as part of the SDLC, reviewing risks multiple times during the development process.
The Spiral model shown in Figure 9.3 uses four phases, which it repeatedly visits throughout the development life cycle:
The Spiral model provides greater flexibility to handle changes in requirements as well as external influences such as availability of customer feedback and development staff. It also allows the software development life cycle to start earlier in the process than Waterfall does. Because Spiral revisits its process, it is possible for this model to result in rework or to identify design requirements later in the process that require a significant design change due to more detailed requirements coming to light.
Agile software development is an iterative and incremental process, rather than the linear processes that Waterfall and Spiral use. Agile is rooted in the Manifesto for Agile Software Development, a document that has four basic premises:
If you are used to a Waterfall or Spiral development process, Agile is a significant departure from the planning, design, and documentation-centric approaches its predecessors use. Agile methods tend to break work up into smaller units, allowing work to be done more quickly and with less up-front planning. They focus on adapting to needs rather than predicting them, with major milestones identified early in the process but subject to change as the project continues to develop.
Work is typically broken up into short working sessions, called sprints, that can last days to a few weeks. Figure 9.4 shows a simplified view of an Agile project methodology with multiple sprints conducted. When the developers and customer agree that the task is done or when the time allocated for the sprints is complete, the development effort is completed.
The Agile methodology is based on 12 principles:
These principles drive an SDLC process that is less formally structured than Spiral or Waterfall but that has many opportunities for customer feedback and revision. It can react more nimbly to problems and will typically allow faster customer feedback—an advantage when security issues are discovered.
Agile development uses a number of specialized terms:
The RAD (Rapid Application Development) model is an iterative process that relies on building prototypes. Unlike many other methods, there is no planning phase; instead, planning is done as the software is written. RAD relies on functional components of the code being developed in parallel and then integrated to produce the finished product. Much like Agile, RAD can provide a highly responsive development environment.
RAD involves five phases, as shown in Figure 9.5.
While we have discussed some of the most common models for software development, others exist, including the following:
New SDLC models spread quickly and often influence existing models with new ideas and workflows. Understanding the benefits and drawbacks of each SDLC model can help you provide input at the right times to ensure that the software that is written meets the security requirements of your organization.
DevOps combines software development and IT operations with the goal of optimizing the SDLC. This is done by using collections of tools called toolchains to improve the coding, building and testing, packaging, release, configuration and configuration management, and monitoring elements of a software development life cycle.
Of course, DevOps should have security baked into it as well. The term DevSecOps describes security as part of the DevOps model. In this model, security is a shared responsibility that is part of the entire development and operations cycle. That means integrating security into the design, development, testing, and operational work done to produce applications and services.
The role of security practitioners in a DevSecOps model includes threat analysis and communications, planning, testing, providing feedback, and of course ongoing improvement and awareness responsibilities. Doing this requires a strong understanding of the organization's risk tolerance, as well as awareness of what the others involved in the DevSecOps environment are doing and when they are doing it. DevOps and DevSecOps are often combined with continuous integration and continuous deployment methodologies, where they can rely on automated security testing and integrated security tooling (scanning, update, and configuration management tools) to help ensure security.
Continuous integration (CI) is a development practice that checks code into a shared repository on a consistent ongoing basis. In continuous integration environments, this can range from a few times a day to a very frequent process of check-ins and automated builds.
Since continuous integration relies on an automated build process, it also requires automated testing. It is often paired with continuous deployment (CD), which automatically rolls out changes into production as soon as they pass testing. (Continuous delivery is closely related but typically retains a manual approval step before release.)
Figure 9.6 shows a view of the continuous integration/continuous deployment pipeline.
Using continuous integration and continuous deployment methods requires building automated security testing into the pipeline testing process. Done carelessly, CI/CD can result in new vulnerabilities being deployed into production, and it could allow an untrusted or rogue developer to insert flaws into deployed code and then remove the evidence as part of the next cycle's deployment. This means that logging, reporting, and monitoring must all be designed to fit the CI/CD process.
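The stage-gating behavior that makes CI/CD pipelines useful for security can be sketched as a simple pipeline runner. The stage names and pass/fail callables below are hypothetical and not tied to any specific CI tool:

```python
def run_pipeline(stages):
    """Run CI/CD stages in order, halting at the first failure.

    `stages` is a list of (name, callable) pairs; each callable returns
    True on success. A failing security gate stops later stages, including
    deployment, from ever running.
    """
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # the failing stage halts the pipeline
        completed.append(name)
    return completed, None  # all stages passed


# Hypothetical stages: the automated security tests fail, so deploy never runs.
stages = [
    ("build", lambda: True),
    ("automated security tests", lambda: False),
    ("deploy", lambda: True),
]
```

Running `run_pipeline(stages)` here returns `(["build"], "automated security tests")`, which is the property security teams care about: a failed security check prevents the deploy stage from executing at all.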
Participating in the SDLC as a security professional provides significant opportunities to improve the security of applications. The first chance to help with software security is in the requirements gathering and design phases when security can be built in as part of the requirements and then designed in based on those requirements. Later, during the development process, secure coding techniques, code review, and testing can improve the quality and security of the code that is developed.
During the testing phase, fully integrated software can be tested using tools like web application security scanners or penetration testing techniques. This also provides the foundation for ongoing security operations by building the baseline for future security scans and regression testing during patching and updates. Throughout these steps, it helps to understand the common security issues that developers face, create, and discover.
A multitude of development styles, languages, frameworks, and other variables may be involved in the creation of an application, but many of the same security issues appear regardless of which you use. In fact, despite many development frameworks and languages providing security features, the same security problems continue to appear in applications all the time! Fortunately, a number of common best practices are available that you can use to help ensure software security for your organization.
There are many software flaws that you may encounter as a security practitioner, but the CySA+ exam focuses on some of the most common, such as the following:
Insecure functions like strcpy, which don't have critical security features built in, can result in code that is easier for attackers to target. In fact, strcpy is the only specific function that the CySA+ objectives call out, likely because of how commonly it is used for buffer overflow attacks in applications written in C. strcpy allows data to be copied without checking whether the source is bigger than the destination. If it is, attackers can place arbitrary data in memory locations past the original destination, possibly allowing a buffer overflow attack to succeed.

There are many factors that need to be taken into account when looking at software development security. The language that is used, the modules and frameworks that are part of the development process, how testing and validation are done, and, of course, the underlying platform that the code will run on are all important. The platform helps to determine what tools you can use, which security capabilities are built in, and many other conditions that impact the software development process.
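The hazard strcpy creates is easiest to see by contrast with a bounds-checked copy. This sketch uses Python's ctypes to illustrate the length check that strcpy omits; it is a teaching illustration of the concept, not a fix for C code:

```python
import ctypes

def checked_copy(dst, src: bytes):
    """Copy src (plus a NUL terminator) into dst only if it fits.

    C's strcpy performs the memmove below with no length check, so a
    source longer than the destination writes past the buffer's end.
    """
    if len(src) + 1 > ctypes.sizeof(dst):
        raise ValueError("source larger than destination buffer")
    ctypes.memmove(dst, src + b"\x00", len(src) + 1)

buf = ctypes.create_string_buffer(8)   # 8-byte destination buffer
checked_copy(buf, b"safe")             # fits: 5 bytes including the NUL
# checked_copy(buf, b"AAAAAAAAAAAA")   # would raise instead of overflowing
```

In C, the equivalent defense is to use a length-bounded copy function and verify the source length against the destination size before copying.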
Mobile platforms have their own operating systems and their own platform security controls. They also have their own security tools, like the iOS Keychain and Face ID. They store data in ways that can be specific to the device, with Android devices often offering both onboard storage and storage via microSD cards, which can make tampering easier for attackers. Most of the common issues for mobile platforms, however, are similar to those found on other platforms. As of this writing, OWASP's most recent mobile vulnerability list includes insecure communication, insecure authentication and authorization, insufficient cryptography, code quality, and reverse engineering, all of which are issues that also affect other platforms.
Embedded systems, or computer systems that are part of a larger system with a small number of dedicated functions, and system-on-chip (SOC) systems, which embed a complete computer in a chip, can provide additional security because they're not as accessible, but that often comes with less frequent updates or an inability to update them easily. Both embedded systems and SOC devices may have hardware, firmware, and software vulnerabilities, and their pervasive nature means that broadly deployed systems are attractive targets for attackers who find them built into the Internet of Things (IoT) or the control planes of utilities, factories, and other infrastructure or critical targets.
One of the most common platforms for applications is the client-server application model. In this model, clients (web browsers, applications, or other clients) communicate with one or more servers that provide information to them. Web applications work this way, and security practitioners need to understand that attacks may be conducted against the clients, against the network, against the traffic sent between the client and server, and against the server itself. Thus, the attack surface of a client-server application is broad, and appropriate security measures must be implemented for each component.
The final platform that the CySA+ 2.2 exam objectives consider is firmware. Firmware is the embedded software used by a computer or hardware device. Firmware flaws can be hard to fix, since not all devices are designed to update their firmware. Attackers who want to target firmware will often seek to acquire a copy of the firmware, either by directly connecting to the device and downloading it or by acquiring the firmware itself from a download site or other means. After that, standard reverse engineering and other software exploitation techniques can be applied to it to identify flaws that may be worth exploiting.
The best practices for producing secure code will vary depending on the application, its infrastructure and backend design, and what framework or language it is written in. Despite that, many of the same development, implementation, and design best practices apply to most applications. These include the following:
One of the best resources for secure coding practices is the Open Web Application Security Project (OWASP). OWASP is the home of a broad community of developers and security practitioners, and it hosts many community-developed standards, guides, and best practice documents, as well as a multitude of open source tools. OWASP provides a regularly updated list of proactive controls that is useful to review not only as a set of useful best practices, but also as a way to see how web application security threats change from year to year.
Here are OWASP's current top proactive controls (updated in 2018) with brief descriptions:
You can find OWASP's Proactive Controls list at www.owasp.org/index.php/OWASP_Proactive_Controls, and a useful quick reference guide to secure coding practices is available at www.owasp.org/index.php/OWASP_Secure_Coding_Practices_-_Quick_Reference_Guide.
In addition to the resources provided by OWASP, SANS maintains a list of the top 25 software errors in three categories:
Top listings of common controls and problems are useful as a reminder, but understanding the set of controls that are appropriate to your environment is critical. A thorough assessment with developers and other experts who understand not only the business requirements and process but also the development language or framework will help keep your organization secure.
Application programming interfaces (APIs) are interfaces between clients and servers or applications and operating systems that define how the client should ask for information from the server and how the server will respond. This definition means that programs written in any language can implement the API and make requests.
APIs are tremendously useful for building interfaces between systems, but they can also be a point of vulnerability if they are not properly secured. API security relies on authentication, authorization, proper data scoping to ensure that too much data isn't released, rate limiting, input filtering, and appropriate monitoring and logging to remain secure. Of course, securing the underlying systems, configuring the API endpoint server or service, and providing normal network layer security to protect the service are also important.
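One of those controls, rate limiting, can be sketched as a simple sliding-window limiter. The limit and window values below are illustrative, not a recommendation:

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` API calls per `window` seconds (sliding window)."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False  # caller would typically return HTTP 429 Too Many Requests
```

A real API endpoint would keep one limiter per client identity (API key or token) and pair it with authentication, authorization, input filtering, and logging, as described above.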
Many security tools and servers provide APIs, and security professionals are often asked to write scripts or programs that can access an API to pull data. In fact, the TAXII protocol and the STIX language we described in Chapter 2, “Using Threat Intelligence,” are a great example of interfaces that might be accessed via an API call.
Service-oriented architecture (SOA) is a software design that provides services to components of a system or service via communication protocols on a network. The intent of a SOA design is to allow loosely coupled components to communicate in a standardized way, allowing them to consume and provide data to other components. Developers abstract the service, hiding the complexity of the service and its inner workings, instead providing ways to access the data. Typical components of a service-oriented architecture include service providers, service registries or service brokers that provide listings and information about service providers, and consumers who access the services.
SOAP (Simple Object Access Protocol) is an XML-based messaging protocol that was frequently used for web services. SOAP defines how messages should be formatted and exchanged, how transport of the messages occurs, as well as models for processing them. Like other XML-based protocols, SOAP is extensible, so it can be customized as needed.
RESTful HTTP (REST stands for Representational State Transfer) has largely supplanted SOAP in many use cases because of its greater flexibility. REST APIs follow six architectural constraints: they use a uniform interface, they separate clients and servers, they are stateless (in other words they don't use server-side sessions), they mark whether server responses are cacheable, they are designed to allow layering of services between clients and servers, and they may include client executable code in their responses.
Both REST and SOAP allow developers to create their own APIs, but unlike SOAP, REST is not a protocol—instead, it defines how a RESTful architecture should be designed and built.
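The statelessness constraint is the easiest of the six to illustrate: each request carries everything the server needs (here, a bearer token), and the server keeps no per-client session between calls. The resource data, token value, and URL scheme below are hypothetical:

```python
# Hypothetical in-memory resources and valid tokens for this sketch.
RESOURCES = {"1": {"id": "1", "name": "widget"}}
VALID_TOKENS = {"example-token"}

def handle_get(path, headers):
    """Handle a REST-style GET statelessly: authenticate from the request
    alone, then return an HTTP-style status code and response body."""
    token = headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        return 401, {"error": "unauthorized"}
    resource_id = path.rsplit("/", 1)[-1]
    if resource_id not in RESOURCES:
        return 404, {"error": "not found"}
    return 200, RESOURCES[resource_id]
```

Because no session is stored server-side, any server instance behind a load balancer can answer any request, which is part of why REST scales well in layered architectures.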
As a security professional, you need to know that public and private APIs exist and may be built using various technologies, frameworks, and protocols, including these. The APIs may themselves be vulnerable, and the underlying services, servers, and protocols may be part of the attack surface you need to assess.
Application testing can be conducted in one of four ways: via automated source code analysis tools, via automated vulnerability scanners, through manual penetration testing, or via manual code review. OWASP's Code Review guide notes that code reviews provide the best insight into all the common issues that applications face: availability, business logic, compliance, privacy, and vulnerabilities. Combining code review with a penetration test based on the code review's output (which then drives further code review, a process known as a 360 review) can provide even more insight into an application's security.
Software defects can have a significant impact on security, but creating secure software requires more than just security scans and reviewing code when it is complete. Information security needs to be involved at each part of the SDLC process.
Implementing security controls through the software development life cycle can help ensure that the applications that enter production are properly secured and maintained throughout their life cycle. Being fully involved in the SDLC requires security professionals to learn about the tools, techniques, and processes that development teams use, so be ready to learn about how software is created in your organization.
Once the SDLC reaches the development phase, code starts to be generated. That means that the ability to control the version of the software or component that your team is working on, combined with check-in/check-out functionality and revision histories, is a necessary and powerful tool when developing software. Fortunately, version control and source control management tools fill that role.
A strong SDLC requires the ability to determine that the code that is being deployed or tested is the correct version and that fixes that were previously applied have not been dropped from the release that is under development. Popular version control systems include Git, Subversion, and CVS, but there are dozens of different tools in use.
Reviewing the code that is written for an application provides a number of advantages. It helps to share knowledge of the code, and the understanding gained by reviewing it goes beyond what documentation alone provides, since reviewers build personal familiarity with the code and its functions. It also helps detect problems while enforcing coding best practices and standards by exposing the code to review during its development cycle. Finally, it ensures that multiple members of a team are aware of what the code is supposed to do and how it accomplishes its task.
There are a number of common code review processes, including both formal and Agile processes like pair programming, over-the-shoulder, and Fagan code reviews.
Pair programming is an Agile software development technique that places two developers at one workstation. One developer writes code, while the other developer reviews their code as they write it. This is intended to provide real-time code review, and it ensures that multiple developers are familiar with the code that is written. In most pair programming environments, the developers are expected to change roles frequently, allowing both of them to spend time thinking about the code while at the keyboard and to consider the design and any issues in the code while reviewing it.
Pair programming adds additional cost to development since it requires two full-time developers. At the same time, it provides additional opportunities for review and analysis of the code and directly applies more experience to coding problems, potentially increasing the quality of the code.
Over-the-shoulder code review also relies on a pair of developers, but rather than requiring constant interaction and hand-offs, over-the-shoulder requires the developer who wrote the code to explain the code to the other developer. This allows peer review of code and can also assist developers in understanding how the code works, without the relatively high cost of pair programming.
Pass-around code review, sometimes known as email pass-around code review, is a form of manual peer review done by sending completed code to reviewers who check the code for issues. Pass-around reviews may involve more than one reviewer, allowing reviewers with different expertise and experience to contribute their expertise. Although pass-around reviews allow more flexibility in when they occur than an over-the-shoulder review, they don't provide the same easy opportunity to learn about the code from the developer who wrote it that over-the-shoulder and pair programming offer, making documentation more important.
Tool-assisted code reviews rely on formal or informal software-based tools to conduct code reviews. Tools like Atlassian's Crucible collaborative code review tool, Codacy's static code review tool, and Phabricator's Differential code review tool are all designed to improve the code review process. The wide variety of tools used for code review reflects not only the multitude of software development life cycle options but also how organizations set up their design and review processes.
Table 9.1 compares the four informal code review methods and formal code review. Specific implementations may vary, but these comparisons will generally hold true between each type of code review. In addition, the theory behind each method may not always reflect the reality of how an organization will use it. For example, pair programming is intended to provide the same speed of development as two developers working on their own while increasing the quality of the code. This may be true for experienced programmers who work well together, but lack of training, personality differences, and variation in work styles can make pair programming less effective than expected.
TABLE 9.1 Code review method comparison
| Method | Cost | When does review happen | Ability to explain the code | Skill required |
| --- | --- | --- | --- | --- |
| Pair programming | Medium | Real time | High | Users must know how to pair program |
| Over-the-shoulder | Medium | Real time | High | No additional skill |
| Pass-around code review | Low/Medium | Asynchronous | Low | No additional skill |
| Tool-assisted review | Medium | Tool/process dependent | Typically low | Training to use the tool may be required |
| Formal code review | High | Asynchronous | Typically low | Code review process training |
When code requires more in-depth review than the relatively lightweight Agile processes like pass-around and over-the-shoulder reviews, formal code review processes are sometimes used. As you might imagine from the name, formal code reviews are an in-depth, often time-consuming process intended to fully review code using a team of experts. The primary form of formal code review is Fagan inspection.
Fagan inspection is a form of structured, formal code review intended to find a variety of problems during the development process. Fagan inspection specifies entry and exit criteria for processes, ensuring that a process is not started before appropriate diligence has been performed, and also making sure that there are known criteria for moving to the next phase.
The Fagan inspection process in Figure 9.7 shows the six typical phases:
No matter how talented the development team for an application is, there will be flaws in the code. Veracode's 2019 metrics, based on scans of 1.4 million applications, showed that 83 percent had at least one security flaw in the initial scan. That number points to a massive need for software security testing to be integrated more thoroughly into the software development life cycle.
A broad variety of manual and automatic testing tools and methods are available to security professionals and developers. Fortunately, automated tools have continued to improve, providing an easier way to verify that code is more secure. Over the next few pages, we will review some of the critical software security testing methods and tools.
The source code that is the basis of every application and program can contain a variety of bugs and flaws, from programming and syntax errors to problems with business logic, error handling, and integration with other services and systems. It is important to be able to analyze the code to understand what it does, how it performs that task, and where flaws may occur in the program itself. This is often done via static or dynamic code analysis, along with testing methods like fuzzing, fault injection, mutation testing, and stress testing. Once changes are made to code and it is deployed, it must be regression tested to ensure that the fixes put in place didn't create new security issues.
Static code analysis (sometimes called source code analysis) is conducted by reviewing the code for an application. Since static analysis uses the source code for an application, it can be seen as a type of white-box testing with full visibility to the testers. This can allow testers to find problems that other tests might miss, either because the logic is not exposed to other testing methods or because of internal business logic problems.
Unlike many other methods, static analysis does not run the program; instead, it focuses on understanding how the program is written and what the code is intended to do. Static code analysis can be conducted using automated tools or manually by reviewing the code—a process sometimes called “code understanding.” Automated static code analysis can be very effective at finding known issues, and manual static code analysis helps identify programmer-induced errors.
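A toy automated static check is easy to sketch: this one parses Python source into an abstract syntax tree and flags calls to functions commonly considered dangerous, without ever executing the code. Real static analysis tools apply hundreds of such rules plus data-flow analysis:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # illustrative rule set, not exhaustive

def find_dangerous_calls(source: str):
    """Flag calls to known-dangerous functions by walking the AST.

    The source is parsed, never run, which is what makes this 'static.'
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((node.func.id, node.lineno))
    return findings
```

For example, `find_dangerous_calls("x = eval(user_input)")` reports the `eval` call on line 1, while clean code produces no findings.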
Dynamic code analysis relies on execution of the code while providing it with input to test the software. Much like static code analysis, dynamic code analysis may be done via automated tools or manually, but there is a strong preference for automated testing due to the volume of tests that need to be conducted in most dynamic code testing processes.
Fuzz testing, or fuzzing, involves sending invalid or random data to an application to test its ability to handle unexpected data. The application is monitored to determine whether it crashes, fails, or responds in an incorrect manner. Because of the large amount of data that a fuzz test involves, fuzzing is typically automated, and it is particularly useful for detecting input validation and logic issues as well as memory leaks and error-handling flaws. Unfortunately, fuzzing tends to identify only simple problems; it does not account for complex logic or business process issues and may not provide complete code coverage if its progress is not monitored.
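A minimal fuzz harness can be sketched as follows. Here `parse_record` is a hypothetical target function; a clean, expected rejection of bad input passes, while any other exception is recorded as a finding:

```python
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical target: expects 'key=value'; raises ValueError otherwise."""
    key, value = data.split("=", 1)
    return {key: value}

def fuzz(target, iterations=1000, seed=42):
    """Throw random printable strings at `target`; collect unexpected crashes.

    A fixed seed keeps runs reproducible so findings can be replayed.
    """
    rng = random.Random(seed)
    findings = []
    for _ in range(iterations):
        payload = "".join(rng.choices(string.printable, k=rng.randint(0, 40)))
        try:
            target(payload)
        except ValueError:
            pass  # clean rejection of malformed input is the expected behavior
        except Exception as exc:  # anything else is a potential bug
            findings.append((payload, repr(exc)))
    return findings
```

Production fuzzers add coverage instrumentation and input mutation strategies, which is how they find the deeper paths that purely random data misses.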
Unlike fuzzing, fault injection directly inserts faults into error handling paths, particularly error handling mechanisms that are rarely used or might otherwise be missed during normal testing. Fault injection may be done in one of three ways:
Fault injection is typically done using automated tools due to the potential for human error in the fault injection process.
Mutation testing is related to fuzzing and fault injection, but rather than changing the inputs to the program or introducing faults to it, mutation testing makes small modifications to the program itself. The altered versions, or mutants, are then tested and rejected if they cause failures. The mutations themselves are guided by rules that are intended to create common errors as well as to replicate the types of errors that developers might introduce during their normal programming process. Much like fault injection, mutation testing helps identify issues with code that is infrequently used, but it can also help identify problems with test data and scripts by finding places where the scripts do not fully test for possible issues.
Performance testing for applications is as important as testing for code flaws. Ensuring that applications and the systems that support them can stand up to their anticipated full production load is part of a typical SDLC process. When an application is ready to be tested, load testing tools are used to simulate a full application load, and stress testing goes beyond any normal load level to see how the application or system responds when pushed to the breaking point.
Stress testing can also be conducted against individual components of an application to ensure that they are capable of handling load conditions. During integration and component testing, fault injection may also be used to ensure that problems during heavy load are properly handled by the application.
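The difference between load within capacity and load at the breaking point can be demonstrated with a toy sketch. The "service" below is entirely simulated: a semaphore stands in for a capacity limit of eight concurrent requests, a barrier makes all requests arrive simultaneously, and rejections stand in for HTTP 503 errors. None of this reflects a real load testing tool such as JMeter or Locust.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

CAPACITY = 8  # simulated service handles at most 8 concurrent requests

def stress(concurrency):
    """Fire `concurrency` simultaneous requests and count how many are rejected."""
    slots = threading.Semaphore(CAPACITY)
    start_gate = threading.Barrier(concurrency)  # make the requests truly simultaneous

    def handle_request():
        start_gate.wait()
        if not slots.acquire(blocking=False):
            raise RuntimeError("503: over capacity")
        try:
            time.sleep(0.2)  # stand-in for real request processing time
        finally:
            slots.release()

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(handle_request) for _ in range(concurrency)]
    return sum(1 for f in futures if f.exception() is not None)

normal_load = stress(4)      # within capacity: no errors expected
breaking_point = stress(64)  # stress test: push well past capacity
print(f"failures at 4: {normal_load}, failures at 64: {breaking_point}")
```

The useful output of a real stress test is the same shape as this sketch's: the load level at which error rates or latency begin to climb, which feeds capacity planning and failure-mode analysis.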
Regression testing ensures that changes that have been made do not create new issues. From a security perspective, this often comes into play when patches are installed or when new updates are applied to a system or application. Security regression testing is performed to ensure that no new vulnerabilities, misconfigurations, or other issues have been introduced.
Automated testing using tools like web application vulnerability scanners and other vulnerability scanning tools is often used as part of an automated or semiautomated regression testing process. Reports are generated to review the state of the application (and its underlying server and services) before and after changes are made to ensure that it remains secure.
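The before-and-after comparison at the heart of security regression testing is essentially a set difference over scan findings. The sketch below uses hypothetical finding identifiers; real scanners emit richer records (severity, location, evidence), but the comparison logic is the same.

```python
def regression_diff(baseline, current):
    """Compare two scan reports (sets of finding IDs) to spot regressions."""
    return {
        "new": sorted(current - baseline),       # introduced by the change -> investigate
        "fixed": sorted(baseline - current),     # remediated since the baseline scan
        "unchanged": sorted(baseline & current), # known issues still outstanding
    }

# Hypothetical findings from scans run before and after a patch was applied
before = {"CVE-2021-44228", "weak-tls-cipher", "missing-csp-header"}
after = {"weak-tls-cipher", "missing-csp-header", "reflected-xss-search"}

report = regression_diff(before, after)
print(report)
```

Anything in the `new` bucket is a candidate regression: either the change introduced a vulnerability, or it exposed one the baseline scan could not reach.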
In addition to the many types of security testing, user acceptance testing (UAT) is an important element in the testing cycle. Once all of the functional and security testing is completed for an application or program, users are asked to validate whether it meets the business needs and usability requirements. Since developers rarely know or perform all of the business functions that the applications they write will perform, this stage is particularly important to validate that things work as they should in normal use.
Ideally UAT should have a formal test plan that involves examples of all of the common business processes that the users of the application will perform. This should be paired with acceptance criteria that indicate what requirements must be satisfied to consider the work acceptable and thus ready to move into production.
Many of the applications our organizations use today are web-based applications, and they offer unique opportunities for testing because of the relative standardization of HTML-based web interfaces. In Chapters 3 and 4 we looked at vulnerability scanning tools like Nessus, Nexpose, and OpenVAS, which scan for known vulnerabilities in systems, services, and, to a limited extent, web applications. Dedicated web application vulnerability scanners provide an even broader toolset specifically designed to identify problems with applications and their underlying web servers, databases, and infrastructure.
Dozens of web application vulnerability scanners are available. Some of the most popular are Acunetix WVS, Arachni, Burp Suite, HCL AppScan, Micro Focus's WebInspect, Netsparker, Qualys's Web Application Scanner, and W3AF.
Web application scanners can be directly run against an application and may also be guided through the application to ensure that they find all the components that you want to test. Like traditional vulnerability scanners, web application scanning tools provide a report of the issues they discovered when they are done. Additional details, including where the issue was found and remediation guidance, are also typically available by drilling down on the report item.
In addition to automated web application vulnerability scanners, manual scanning is frequently conducted to identify issues that automated scanners may not. Manual testing may be fully manual, with inputs inserted by hand, but testers typically use tools called interception proxies that allow them to capture communication between a browser and the web server. Once the proxy captures the information, the tester can modify the data that is sent and received.
A web browser plug-in proxy like Tamper Data for Firefox can allow you to modify session values during a live connection, as shown in Figure 9.8. Using an interception proxy to crawl through an application provides insight into both what data the web application uses and how you could attack the application.
There are a number of popular proxy tools ranging from browser-specific plug-ins like Tamper Data and HttpFox to browser-agnostic tools like Fiddler (which runs as a dedicated proxy). In addition, tools like Burp Suite provide a range of capabilities, including application proxies, spiders, web application scanning, and other advanced tools intended to make web application penetration testing easier.
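The kind of tampering an interception proxy enables can be sketched with standard library tools. The example below is not a working proxy: it simply takes a captured raw GET request (the request text is invented) and rewrites one query parameter, which is the classic manual test for insecure direct object references.

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def tamper(raw_request: str, param: str, new_value: str) -> str:
    """Rewrite one query parameter in a captured GET request line,
    mimicking what an interception proxy lets a tester do by hand."""
    request_line, _, rest = raw_request.partition("\r\n")
    method, url, version = request_line.split(" ")
    parts = urlsplit(url)
    query = parse_qs(parts.query)
    query[param] = [new_value]  # substitute the attacker-controlled value
    new_url = urlunsplit(parts._replace(query=urlencode(query, doseq=True)))
    return f"{method} {new_url} {version}\r\n{rest}"

# Hypothetical captured request: does account 1002 belong to this user?
captured = "GET /account?id=1001&view=summary HTTP/1.1\r\nHost: app.example.com\r\n\r\n"
modified = tamper(captured, "id", "1002")
print(modified.partition("\r\n")[0])
```

In a real engagement the proxy performs the capture and replay; the tester's contribution is exactly this kind of targeted modification, then observing whether the application enforces authorization on the altered value.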
While we often think about software security, the security of the underlying hardware can also be a concern. Checking the individual components built into every device in your organization is likely beyond the capabilities that you will have, but you should know the concepts behind hardware assurance and what to look for if you need to have high assurance levels for your devices and systems.
Modern hardware assurance often begins with the hardware root of trust. The hardware root of trust for a system contains the cryptographic keys that secure the boot process. This means that the system or device inherently trusts the hardware root of trust, and that it needs to be secure! One common implementation of a hardware root of trust is the Trusted Platform Module (TPM) chip built into many computers. TPM chips are frequently used to provide built-in encryption, and they provide three major functions:

- Remote attestation, allowing hardware and software configurations to be verified
- Binding, which encrypts data
- Sealing, which encrypts data and sets requirements for the state of the TPM chip before decryption
While TPM chips are one common solution, others include serial numbers that cannot be modified or cloned, and physically unclonable functions (PUFs), unique physical characteristics of a specific hardware device that provide a unique identifier or digital fingerprint for that device.
An additional security feature intended to help prevent boot-level malware is measured boot. Measured boot processes measure each component, starting with the firmware and ending with the boot start drivers. The data gathered is stored in a TPM module, and the logs can be validated remotely to let security administrators know the boot state of the system. This allows comparison against known good states, and administrators can take action if the measured boot shows a difference from the accepted or secure known state.
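The chaining that makes measured boot tamper-evident can be modeled with ordinary hashing. The sketch below imitates the TPM's PCR "extend" operation in simplified form (component names are invented, and a real TPM holds the register in hardware, not in a Python variable): each stage's measurement is folded into a running hash, so changing any component anywhere in the chain changes the final value.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old value || measurement)."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(components):
    """Hash each boot stage in order, chaining into a single register value."""
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for component in components:
        pcr = extend(pcr, component)
    return pcr.hex()

boot_chain = [b"firmware-v1.2", b"bootloader-v3", b"boot-start-driver"]
known_good = measure_boot(boot_chain)

# A single changed component anywhere in the chain yields a different final value
tampered = measure_boot([b"firmware-v1.2", b"EVIL-bootloader", b"boot-start-driver"])
print(known_good != tampered)
```

Remote attestation works by comparing the reported register value against the known-good value, exactly as the final comparison does here.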
A related technology is hardware security modules (HSMs). Hardware security modules are typically external devices or plug-in cards used to create, store, and manage digital keys for cryptographic functions and authentication, as well as to offload cryptographic processing. HSMs are often used in high-security environments and are normally certified to meet standards like FIPS 140 or the Common Criteria.
Other defensive technologies can also help to secure systems. IBM's eFuse technology has a number of uses that can help with tuning performance or responding to system degradation, but it also has some interesting security applications. For example, an eFuse can be set at the chip level to monitor firmware levels. This is implemented in the Nintendo Switch, which uses eFuse checking to validate whether the firmware that is being installed is older than the currently installed firmware, preventing downgrading of firmware. When newer firmware is installed, eFuses are “burned,” indicating the new firmware level that is installed.
Most modern computers use a version of the Unified Extensible Firmware Interface (UEFI). UEFI provides a secure boot capability, which loads only drivers and operating system loaders that have been signed using an accepted digital signature. Since these keys have to be loaded into the UEFI firmware, UEFI security has been somewhat contentious, particularly with the open source community. UEFI remains one way to provide additional security if your organization needs to have a greater level of trust in the software a system is loading.
If you rely on firmware to provide security, you also need a method to ensure that the firmware is secure and that updates to the firmware are secure. Trusted firmware updates can help, with validation done using methods like checksum validation, cryptographic signing, and similar techniques. These techniques are frequently used to validate firmware updates for network devices, motherboards, phones, printers, and other hardware that receives them.
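The checksum-plus-signature pattern behind trusted firmware updates can be sketched briefly. Everything below is illustrative: the function names are invented, and an HMAC with a shared key stands in for the asymmetric signatures real vendors use, since the verification flow (integrity check, then authenticity check) is the same shape.

```python
import hashlib
import hmac

VENDOR_KEY = b"demo-shared-secret"  # stand-in: real vendors sign with a private key

def package_update(firmware: bytes) -> dict:
    """What a (hypothetical) vendor ships: image, checksum, and signature."""
    return {
        "image": firmware,
        "sha256": hashlib.sha256(firmware).hexdigest(),
        "sig": hmac.new(VENDOR_KEY, firmware, hashlib.sha256).hexdigest(),
    }

def verify_update(update: dict) -> bool:
    """Device-side check before flashing: integrity first, then authenticity."""
    image = update["image"]
    if hashlib.sha256(image).hexdigest() != update["sha256"]:
        return False  # corrupted in transit
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, update["sig"])  # forged or modified?

update = package_update(b"firmware image v2.0")
print(verify_update(update))

update["image"] = b"firmware image v2.0 + backdoor"
print(verify_update(update))
```

A device that refuses to flash anything failing this check closes off one of the main paths for persistent, below-the-OS malware.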
Securing hardware can start at the supply chain level. The U.S. government started the Trusted Foundry Program to validate microelectronic suppliers throughout the supply chain. The program assesses the integrity and processes of the companies, staff, distribution chain, and other factors involved in the delivery of microelectronics components and devices. This provides a chain of custody for classified and unclassified integrated circuits, and helps to ensure that reasonable supply chain threats, such as tampering, reverse engineering, or modification, are prevented. You can read about the DMEA and its role with trusted foundry accreditation at https://www.dmea.osd.mil/TrustedIC.aspx.
Hardware security is also addressed at the underlying design level. A number of design-level concepts, such as processor security extensions, trusted execution environments, secure enclaves, and bus encryption, are part of the CySA+ exam outline.
Finally, there are a number of techniques that can help to protect devices. The CySA+ exam outline calls out a few that you may encounter, such as anti-tamper techniques and self-encrypting drives.
The software development life cycle describes the path that software takes from planning and requirements gathering to design, coding, testing, training, and deployment. Once software is operational, it also covers the ongoing maintenance and eventual decommissioning of the software. That means that participating in the SDLC as a security professional can have a significant impact on organizational software security.
There are many SDLC models, including the linear Waterfall method, Spiral's iterative process-based design, and Agile methodologies that focus on sprints with timeboxed working sessions and greater flexibility to meet changing customer needs. Other models include Rapid Application Development's iterative prototype-based cycles, the V model with parallel test cycles for each stage, and the Big Bang model, a model without real planning or process. Each SDLC model offers advantages and disadvantages, meaning that a single model may not fit every project.
Coding for information security requires an understanding of common software coding best practices. These include performing risk assessments, validating all user input to applications, ensuring that error messages don't reveal internal information, and securing sessions, traffic, and cookies if they are used. OWASP and other organizations provide up-to-date guidance on common issues as well as current best practices, allowing security professionals and developers to stay up to date.
Security testing and code review can help to improve an application's security and code quality. Pair programming, over-the-shoulder code review, pass-around code reviews, and tool-assisted code reviews are all common, but for formal review Fagan inspection remains the primary, but time-intensive, solution. Security testing may involve static or dynamic code analysis, fuzzing, fault injection, mutation testing, stress or load testing, or regression testing, with each providing specific functionality that can help ensure the security of an application.
Finally, web application security testing is conducted both with automated tools known as web application vulnerability scanners and by penetration testers and web application security testing professionals. Much like vulnerability scanning, using application-scanning tools provides a recurring view of the application's security profile and monitors for changes due to patches, configuration changes, or other new issues.
Be familiar with the software development life cycle (SDLC). SDLC models include Waterfall, Spiral, Agile, and RAD. Each model covers phases like feasibility, requirements gathering, design, development, testing and integration, deployment and training, operations, and eventual decommissioning, although they may not always occur in the same order or at the same time.
Explain how designing information security into applications occurs in each phase of the SDLC. Coding best practices and understanding common software issues are important to prevent security flaws. Version control helps to prevent issues that exist in older code versions from reappearing in new code. Code review models like over-the-shoulder and pair programming, as well as formal review using Fagan inspection, are used to validate the quality and security of code.
Define the purpose of security testing. The majority of code has critical flaws, making testing a necessity. Static testing targets source code, whereas dynamic testing tests the application itself. Fuzzing, fault injection, mutation testing, stress and load testing, as well as security regression testing are all common testing methods. Web applications are tested using web application vulnerability scanners as well as via manual methods to ensure that they are secure and that no new vulnerabilities have been added by configuration changes or patches.
Know how hardware security interacts with software to provide a trusted computing environment. Hardware trust starts at the foundry or manufacturer. Hardware modules like HSM and TPM modules can provide cryptographic and other security services to help systems remain secure. Firmware and hardware security features like eFuse, trusted execution environments, and secure enclaves provide ways for hardware and software developers to leverage security features.
In this exercise you will use the Acunetix web vulnerability scanner to scan a sample site and then review the data generated.
Part 1: Download and install the Acunetix scanner
Acunetix provides its Web Vulnerability Scanner as a 14-day limited-term trial download. You can download it at www.acunetix.com/vulnerability-scanner/download/.
Part 2: Select an application and scan it
When you download the Acunetix scanner, you will receive an email listing Acunetix-hosted vulnerable sites. Select one of these sites and use the vulnerability scanner to scan it. Once it is complete, review the report that was generated by the scan.
Part 3: Analyze the scan results
Review the scan results and answer the following questions.
OWASP, in partnership with Mandiant, provides the OWASP Broken Web Applications project virtual machine. This VMware VM includes a number of intentionally vulnerable web applications, including WebGoat, OWASP's web application vulnerability learning environment.
Step 1: Download the VMware VM

Step 2: Run the VMware VM and start WebGoat

If you don't have VMware available, you can use the free VMware vSphere Hypervisor from www.vmware.com/products/vsphere-hypervisor.html, or the 30-day demo of Workstation Player from www.vmware.com/products/player/playerpro-evaluation.html. Once the VM boots, run ifconfig to determine your system's IP address.

Step 3: Succeed with an attack

If you get stuck, documented solutions are available at https://github.com/WebGoat/WebGoat/wiki/(Almost)-Fully-Documented-Solution-(en), or visit YouTube, where you'll find numerous videos that show step-by-step guides to the solutions.

Match each of the following terms to the correct description.
Subversion | The first SDLC model, replaced in many organizations but still used for very complex systems
Agile | A formal code review process that relies on specified entry and exit criteria for each phase
Dynamic code analysis | An Agile term that describes the list of features needed to complete a project
Fuzzing | A source control management tool
Fagan inspection | A code review process that requires one developer to explain their code to another developer
Over-the-shoulder | An SDLC model that relies on sprints to accomplish tasks based on user stories
Waterfall | A code analysis done using a running application that relies on sending unexpected data to see if the application fails
Backlog | A code analysis that is done using a running application
strcpy function?