© Moritz Lenz 2019
Moritz Lenz, Python Continuous Integration and Delivery, https://doi.org/10.1007/978-1-4842-4281-0_12

12. Security

Moritz Lenz
Fürth, Bayern, Germany

What’s the impact of automated deployment on the security of your applications and infrastructure? It turns out there are both security advantages and things to be wary of.

12.1 The Dangers of Centralization

In a deployment pipeline, the machine that controls the deployment must have access to the target machines where the software is deployed. In the simplest case, there is a private SSH key on the deployment machine, and the target machines grant access to the owner of that key.

This is an obvious risk, because an attacker gaining access to the deployment machine (the GoCD agent or the GoCD server controlling the agent) can use this key to connect to all the target machines, gaining full control over them.

Some possible mitigations include the following:
  • Implement a hardened setup of the deployment machine (for example, with SELinux or grsecurity).

  • Password-protect the SSH key and supply the password through the same channel that triggers the deployment, such as through an encrypted variable from the GoCD server.

  • Use a hardware token for storing SSH deployment keys. Hardware tokens can be made resistant to software-based key extraction.

  • Have separate deployment and build hosts. Build hosts tend to require far more software installed, which exposes a bigger attack surface.

  • Use separate deployment machines for each environment, each with its own credentials.

  • On the target machines, allow only unprivileged access through said SSH key and use something like sudo to allow only certain privileged operations.
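As a sketch of the last point, the target machine itself can restrict what the deployment key and deployment user are allowed to do. The user name, command paths, and service name below are placeholders for illustration, not taken from this chapter:

```
# ~deploy/.ssh/authorized_keys on the target machine:
# force a single wrapper command and disable forwarding for this key
command="/usr/local/bin/run-deployment",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... deploy@gocd-agent

# /etc/sudoers.d/deploy:
# the unprivileged deploy user may only restart the application
# service, nothing else, and without a password prompt
deploy ALL=(root) NOPASSWD: /bin/systemctl restart myapp.service
```

With this setup, even a stolen key only buys an attacker the ability to run the deployment wrapper, not a general-purpose shell.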

Each of these mitigations has its own costs and weaknesses. To illustrate this point, note that password-protecting SSH keys helps if the attacker only manages to obtain a copy of the file system, but not if the attacker gains root privileges on the machine and can thus obtain a memory dump that includes the decrypted SSH key.

Hardware-based storage of secrets provides good protection against key theft, but it complicates the use of virtual systems, and the hardware must be purchased and configured.

The sudo approach is very effective at limiting the spread of an attack, but it requires extensive configuration on the target machine, and you need a secure way to deploy that configuration. So, you run into a chicken-and-egg problem that involves some extra effort.

On the flip side, if you don’t have a delivery pipeline, deployments have to occur manually, so you have the same problem of having to give humans access to the target machines. Most organizations offer some kind of secured machine on which the operators’ SSH keys are stored, and that machine carries the same risks as a deployment machine.

12.2 Time to Market for Security Fixes

Compared to manual deployments, even a relatively slow deployment pipeline is still quite fast. When a vulnerability is identified, this quick and automated rollout process can make a big difference in reducing the time until the fix is deployed.

Equally important is the fact that a clunky manual release process tempts operators into taking shortcuts with security fixes, skipping some steps of the quality-assurance process. When that process is automated and fast, it is easier to follow than to skip, so it will actually be carried out even in stressful situations.

12.3 Audits and Software Bill of Materials

A good deployment pipeline tracks which version of a software package was built and deployed, and when. This allows you to answer questions such as “How long did we have this security hole?”, “How soon after the issue was reported was the vulnerability patched in production?”, and maybe even “Who approved the change that introduced the vulnerability?”

If you also use configuration management based on files that are stored in a version control system, you can answer these questions even for configuration, not just for software versions.

In short, the deployment pipeline provides enough data for an audit.
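A minimal sketch of what such an audit record could look like, assuming each deployment is appended to a JSON-lines log file; the field names here are assumptions for illustration, not part of GoCD:

```python
import datetime
import json


def record_deployment(log_path, package, version, environment, approved_by):
    """Append one deployment event to a JSON-lines audit log."""
    event = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "package": package,
        "version": version,
        "environment": environment,
        "approved_by": approved_by,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(event) + "\n")


def deployments_of(log_path, package):
    """Return all recorded deployments of one package, oldest first."""
    with open(log_path) as log:
        return [
            event
            for event in (json.loads(line) for line in log)
            if event["package"] == package
        ]
```

Querying this log answers the audit questions above: the timestamps bound how long a vulnerable version was live, and the approver is recorded with each change.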

Some legislation requires you to record a software bill of materials,1 for example, for medical device software. This is a record of the components contained in your software, such as a list of libraries and their versions. While this is important for assessing the impact of a license violation, it is also important for figuring out which applications are affected by a vulnerability in a particular version of a library.
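For a Python application, a rough bill of materials for the current environment can be derived from the installed distributions. This is only a sketch: it requires Python 3.8 or newer for importlib.metadata, and it ignores vendored code and system-level dependencies:

```python
from importlib import metadata


def bill_of_materials():
    """List (name, version) for every distribution installed
    in the current Python environment, sorted by name."""
    return sorted(
        (dist.metadata["Name"] or "", dist.version)
        for dist in metadata.distributions()
    )
```

Running this as part of the build and archiving the result alongside the build artifact gives you a per-release record of library versions.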

A 2015 report by HP Security found that 44% of the investigated breaches were made possible by vulnerabilities that had been known (and presumably patched) for at least two years. This, in turn, means that you can nearly halve your security risk by tracking which software version you use where, subscribing to a newsletter or feed of known vulnerabilities, and rebuilding and redeploying your software with patched versions on a regular basis.

A continuous delivery system doesn’t automatically create such a software bill of materials for you, but it gives you a place where you can plug in a system that does.
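Such a plugged-in system might simply cross-reference the bill of materials against a vulnerability feed. The package names and versions below are made up for illustration:

```python
def affected_components(bom, vulnerable):
    """Return entries from a bill of materials (a list of
    (name, version) pairs) that appear in a map of package
    name to the set of its known-vulnerable versions."""
    return [
        (name, version)
        for name, version in bom
        if version in vulnerable.get(name, ())
    ]


# Hypothetical data: one library in the BOM has a known-bad version.
bom = [("requests", "2.19.1"), ("flask", "1.0.2")]
vulnerable = {"requests": {"2.19.1", "2.20.0"}}
print(affected_components(bom, vulnerable))  # [('requests', '2.19.1')]
```

The hard part in practice is not this comparison but maintaining an accurate feed and normalizing version numbers, which is exactly what dedicated tooling plugged into the pipeline would provide.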

12.4 Summary

Continuous delivery provides the ability to react quickly and predictably to newly discovered vulnerabilities. At the same time, the deployment pipeline itself is an attack surface, which, if not properly secured, can be an attractive target for an intruder.

Finally, the deployment pipeline can help you to collect data that can offer insight into the use of software with known vulnerabilities, allowing you to be thorough when patching these security holes.
