Continuous development processes

There are some processes that can greatly streamline your development and reduce the time needed to get the application ready to be released or deployed to the production environment. They often have continuous in their name, and we will discuss the most important and popular ones in this section. It is important to highlight that they are strictly technical processes, so they are almost unrelated to project management methodologies, although they can dovetail nicely with the latter.

The most important processes we will mention are:

  • Continuous integration
  • Continuous delivery
  • Continuous deployment

The order of listing is important because each one of them is an extension of the previous one. Continuous deployment could even be considered simply a variation of continuous delivery. We will discuss them separately anyway, because what is only a minor difference for one organization may be critical for others.

The fact that these are technical processes means that their implementation strictly depends on the usage of proper tools. The general idea behind each of them is rather simple, so you could build your own continuous integration/delivery/deployment tools, but the best approach is to choose something that is already built. This way, you can focus more on building your product instead of the tool chain for continuous development.

Continuous integration

Continuous integration, often abbreviated as CI, is a process that takes advantage of automated testing and version control systems to provide a fully automatic integration environment. It can be used with centralized version control systems, but in practice it really spreads its wings only when a good DVCS tool is used to manage the code.

Setting up a repository is the first step towards continuous integration, which is a set of software practices that have emerged from eXtreme Programming (XP). The principles are clearly described on Wikipedia (http://en.wikipedia.org/wiki/Continuous_integration#The_Practices) and define a way to make sure the software is easy to build, test, and deliver.

The first and most important requirement for implementing continuous integration is a fully automated workflow that can test the whole application at a given revision in order to decide if it is technically correct. Technically correct means that it is free of known bugs and that all the features work as expected.

The general idea behind CI is that tests should always be run before merging to the mainstream development branch. This could be handled only through formal arrangements in the development team, but practice shows that this is not a reliable approach. The problem is that, as programmers, we tend to be overconfident and are unable to look critically at our code. If continuous integration is built only on team arrangements, it will inevitably fail because some of the developers will eventually skip their testing phase and commit possibly faulty code to the mainstream development branch that should always remain stable. And, in reality, even simple changes can introduce critical issues.

The obvious solution is to utilize a dedicated build server that automatically runs all the required application tests whenever the codebase changes. There are many tools that streamline this process, and they can be easily integrated with version control hosting services such as GitHub or Bitbucket and self-hosted services such as GitLab. The benefit of using such tools is that a developer may locally run only a selected subset of tests (the ones related to their current work) and leave the potentially time-consuming full suite of integration tests to the build server. This really speeds up development while still reducing the risk that new features will break the existing stable code found in the mainstream branch.

Another plus of using a dedicated build server is that tests can be run in an environment that is closer to production. Developers should also use environments that match production as closely as possible, and there are great tools for that (Vagrant, for instance); it is, however, hard to enforce this across an organization. You can easily do it on one dedicated build server or even on a cluster of build servers. Many CI tools make this even less problematic by utilizing various virtualization tools that help ensure tests are always run in the same, completely fresh, testing environment.

Having a build server is also a must if you create desktop or mobile applications that must be delivered to users in binary form. The obvious thing to do is to always perform such a building procedure in the same environment. Almost every CI system takes into account the fact that applications often need to be downloaded in binary form after testing/building is done. Such building results are commonly referred to as build artifacts.

Because CI tools originated in times when most applications were written in compiled languages, they mostly use the term "building" to describe their main activity. For languages such as C or C++, this is obvious because an application cannot be run and tested if it has not been built (compiled). For Python, this makes a bit less sense because most programs are distributed in source form and can be run without any additional building step. So, in the scope of our language, the terms building and testing are often used interchangeably when talking about continuous integration.

Testing every commit

The best approach to continuous integration is to perform the whole test suite on every change pushed to the central repository. Even if one programmer pushes a series of commits to a single branch, it very often makes sense to test each change separately. If you decide to test only the latest changeset of a single repository push, then it will be harder to find the sources of possible regression problems introduced somewhere in the middle.

Of course, many DVCSs, such as Git or Mercurial, allow you to limit the time spent searching for regression sources by providing commands that bisect the history of changes, but in practice it is much more convenient to do this automatically as part of your continuous integration process.
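
For reference, a manual bisect session in Git looks roughly like this (the revision names and the test path are illustrative):

    $ git bisect start
    $ git bisect bad HEAD              # the current revision is known to be broken
    $ git bisect good v1.0             # the last revision known to be good
    $ git bisect run python -m pytest tests/test_regression.py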

Of course, there is the issue of projects with very long-running test suites that may require tens of minutes or even hours to complete. One server may not be enough to perform all the builds for every commit made in a given time frame. This will make waiting for results even longer. In fact, long-running tests are a problem of their own that will be described later in the Problem 2 – too long building time section. For now, you should know that you should always strive to test every commit pushed to the repository. If you have no power to do that on a single server, then set up a whole building cluster. If you are using a paid service, then pay for a higher pricing plan with more parallel builds. Hardware is cheap. Your developers' time is not. Eventually, you will save more money by having faster parallel builds and a more expensive CI setup than you would save by skipping tests for selected changes.

Merge testing through CI

Reality is complicated. If the code on a feature branch passes all the tests, it does not mean that the build will not fail when it is merged to the stable mainstream branch. Both of the popular branching strategies mentioned in the Git flow and GitHub flow sections assume that code merged to the master branch is always tested and deployable. But how can you be sure that this assumption is met if you have not performed the merge yet? This is a lesser problem for Git flow (if implemented well and used precisely) due to its emphasis on release branches. But it is a real problem for the simple GitHub flow, where merging to master often involves conflicts and is very likely to introduce regressions in tests. Even for Git flow, this is a serious concern. It is a complex branching model, so people will certainly make mistakes when using it. So, you can never be sure that the code on master will pass the tests after merging unless you take special precautions.

One of the solutions to this problem is to delegate the duty of merging feature branches into the stable mainstream branch to your CI system. In many CI tools, you can easily set up an on-demand build job that will locally merge a specific feature branch into the stable branch and push it to the central repository only if it passes all the tests. If the build fails, such a merge will be reverted, leaving the stable branch untouched. Of course, this approach gets more complex in fast-paced projects where many feature branches are developed simultaneously, because there is a high risk of conflicts that cannot be resolved automatically by any CI system. There are, of course, solutions to that problem, such as rebasing in Git.
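
Stripped of the plumbing that a real CI tool adds around it, such an on-demand merge job boils down to something like the following sketch (the FEATURE_BRANCH variable and the test command are illustrative):

    #!/bin/bash
    # Hypothetical CI job: merge a feature branch locally and push only if tests pass.
    set -e
    git checkout master
    git merge --no-ff "$FEATURE_BRANCH"
    if python -m pytest; then
        git push origin master
    else
        # Drop the local merge commit so that the remote master stays untouched.
        git reset --hard origin/master
        exit 1
    fi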

Such an approach to merging anything into the stable branch in a version control system is practically a must if you are thinking about going further and implementing continuous delivery processes. It is also required if you have a strict rule in your workflow stating that everything in a stable branch is releasable.

Matrix testing

Matrix testing is a very useful tool if your code needs to be tested in different environments. Depending on your project's needs, direct support for such a feature in your CI solution may be more or less necessary.

The easiest way to explain the idea of matrix testing is to take the example of some open source Python package. Django, for instance, is a project with a strictly specified set of supported Python language versions. Version 1.9.3 lists Python 2.7, Python 3.4, and Python 3.5 as the versions required to run Django code. This means that every time Django core developers make a change to the project, the full test suite must be executed on these three Python versions in order to back this claim. If even a single test fails in one environment, the whole build must be marked as failed because the backwards compatibility constraint was possibly broken. For such a simple case, you do not need any support from CI. There is a great tool called Tox (refer to https://tox.readthedocs.org/) that, among other features, allows you to easily run test suites in different Python versions in isolated virtual environments. This utility can also be easily used in local development.
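
A minimal tox.ini covering those three interpreter versions could look roughly like this (the dependency file and the test command are only illustrative):

    [tox]
    envlist = py27,py34,py35

    [testenv]
    deps =
        -rrequirements.txt
    commands =
        python -m pytest

Running the tox command then creates a separate virtual environment for each listed interpreter and executes the test command in every one of them.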

But this was only the simplest example. It is not uncommon for an application to require testing in multiple environments that differ in completely different parameters. To name a few:

  • Different operating systems
  • Different databases
  • Different versions of backing services
  • Different types of filesystems

The full set of combinations forms a multi-dimensional environment parameter matrix, and this is why such a setup is called matrix testing. When you need such a deep testing workflow, it is very likely that you will require some integrated support for matrix testing in your CI solution. With a large number of possible combinations, you will also require a highly parallelizable building process, because every run over the matrix will require a large amount of work from your build server. In some cases, you will be forced to make some tradeoffs if your test matrix has too many dimensions.
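
Staying with Tox as an example, a small two-dimensional matrix (interpreter version versus database backend) can be sketched with a generative environment list; the factor names and the extra dependency are illustrative and assume a tox version that supports this syntax:

    [tox]
    envlist = {py27,py34,py35}-{sqlite,postgres}

    [testenv]
    deps =
        -rrequirements.txt
        postgres: psycopg2
    commands =
        python -m pytest

This already expands to six environments; every additional dimension multiplies that number, which is exactly why a highly parallelizable build process becomes so important.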

Continuous delivery

Continuous delivery is a simple extension of the continuous integration idea. This approach to software engineering aims to ensure that the application can be released reliably at any time. The goal of continuous delivery is to release software in short cycles. It generally reduces both the cost and the risk of releasing software by allowing the incremental delivery of changes to the application in production.

The main prerequisites for building successful continuous delivery processes are:

  • A reliable continuous integration process
  • An automated process of deployment to the production environment (if the project has a notion of the production environment)
  • A well-defined version control system workflow or branching strategy that allows you to easily define what version of software represents releasable code

In many projects, automated tests are not enough to reliably tell whether a given version of the software is really ready to be released. In such cases, additional manual user acceptance tests are usually performed by skilled QA staff. Depending on your project management methodology, this may also require some approval from the client. This does not mean that you can't use Git flow, GitHub flow, or a similar branching strategy if some of your acceptance tests must be performed manually by humans. It only changes the semantics of your stable and release branches from ready to be deployed to ready for user acceptance tests and approval.

Also, the previous paragraph does not change the fact that code deployment should always be automated. We already discussed some of the tools and benefits of automation in Chapter 6, Deploying Code. As stated there, it will always reduce the cost and risk of a new release. Also, most of the available CI tools allow you to set up special build targets that, instead of testing, will perform automated deployment for you. In most continuous delivery processes, this is triggered manually (on demand) by authorized staff members once they are sure that the required approval has been given and all acceptance tests have ended with success.

Continuous deployment

Continuous deployment is a process that takes continuous delivery to the next level. It is a perfect approach for projects where all acceptance tests are automated and there is no need for manual approval from the client. In short, once code is merged to the stable branch (usually master), it is automatically deployed to the production environment.

This approach seems to be very nice and robust but is not often used because it is very hard to find a project that does not need manual QA testing and someone's approval before a new version is released. Anyway, it is definitely doable and some companies claim to be working in that way.

In order to implement continuous deployment, you need the same basic prerequisites as for the continuous delivery process. Also, a more careful approach to merging into the stable branch is very often required. In continuous deployment, what gets merged into master usually goes instantly to production. Because of that, it is reasonable to hand off the merging task to your CI system, as explained in the Merge testing through CI section.

Popular tools for continuous integration

There is a tremendous variety of CI tools to choose from nowadays. They vary greatly in ease of use and available features, and almost every one of them has some unique features that others lack. So, it is hard to give a good general recommendation, because each project has completely different needs and also a different development workflow. There are, of course, some great free and open source projects, but paid hosted services are also worth researching. This is because, although open source software such as Jenkins or Buildbot is freely available to install without any fee, it is wrong to think that it is free to run. Both hardware and maintenance are added costs of having your own CI system. In some circumstances, it may be less expensive to pay for such a service instead of paying for additional infrastructure and spending time on resolving issues in open source CI software. Still, you need to make sure that sending your code to any third-party service is in line with the security policies at your company.

Here we will review some of the popular free open source tools, as well as paid hosted services. I really don't want to advertise any vendor, so, to justify this rather subjective selection, we will discuss only those that are available without any fees for open source projects. No single best recommendation will be given, but we will point out both the good and bad sides of each solution. If you are still in doubt, the next section, which describes common continuous integration pitfalls, should help you make good decisions.

Jenkins

Jenkins (https://jenkins-ci.org) seems to be the most popular tool for continuous integration. It is also one of the oldest open source projects in this field, together with Hudson (the development of these two projects split, and Jenkins is a fork of Hudson).

Figure 7 Preview of Jenkins main interface

Jenkins is written in Java and was initially designed mainly for building projects written in the Java language. It means that for Java developers it is a perfect CI system, but you will need to struggle a bit if you want to use it with another technology stack.

One big advantage of Jenkins is the very extensive list of features it implements straight out of the box. The most important one, from the Python programmer's point of view, is the ability to understand test results. Instead of giving only plain binary information about build success, Jenkins is able to present the results of all tests that were executed during a run in the form of tables and graphs. This will, of course, not work automatically, and you need to provide those results in a specific format (by default, Jenkins understands JUnit XML files) during your build. Fortunately, a lot of Python testing frameworks are able to export results in a machine-readable format.
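
For example, pytest can produce such a JUnit-style XML report with a single command-line option (the report file name is arbitrary):

    $ python -m pytest --junitxml=test-results.xml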

The following is an example presentation of unit test results in the Jenkins web UI:

Figure 8 Presentation of unit test results in Jenkins

The following screenshot illustrates how Jenkins presents additional build information such as trends or downloadable artifacts:

Figure 9 Test result trends graph on example Jenkins project

Surprisingly, most of Jenkins' power does not come from its built-in features but from a huge repository of free plugins. What is available in a clean installation may be great for Java developers, but programmers using different technologies will need to spend a lot of time making it suit their project. Even support for Git is provided by a plugin.

It is great that Jenkins is so easily extendable, but this also has some serious downsides. You will eventually depend on the installed plugins to drive your continuous integration process, and these are developed independently from the Jenkins core. Most authors of popular plugins try to keep them up to date and compatible with the latest Jenkins releases. Nevertheless, extensions with smaller communities will be updated less frequently, and some day you may be forced to either give them up or postpone the update of the core system. This may be a real problem when there is an urgent need for an update (a security fix, for instance) but some of the plugins that are critical to your CI process will not work with the new version.

The basic Jenkins installation that provides you with a master CI server is also capable of performing builds. This is different from other CI systems that put more emphasis on distribution and create a strict separation between master and slave build servers. This is both good and bad. On the one hand, it allows you to set up a fully working CI server in a few minutes. Jenkins, of course, supports deferring work to build slaves, so you can scale out in the future whenever it is needed. On the other hand, it is very common for Jenkins to underperform because it is deployed in a single-server setting, and its users complain about performance without providing it with enough resources. It is not hard to add new building nodes to a Jenkins cluster. It seems that, for those who have got used to the single-server setup, this is more of a mental challenge than a technical problem.

Buildbot

Buildbot (http://buildbot.net/) is software written in Python that automates the compile and test cycles for any kind of software project. It is configurable in such a way that every change made to a source code repository generates some builds, launches some tests, and then provides feedback:

Figure 10 Buildbot's Waterfall view for CPython 3.x branch

This tool is used, for instance, by the CPython core developers, and its results can be seen at http://buildbot.python.org/all/waterfall?&category=3.x.stable.

Buildbot's default representation of build results is the Waterfall view, as shown in Figure 10. Each column corresponds to a build composed of steps and is associated with some build slaves. The whole system is driven by the build master:

  • The build master centralizes and drives everything
  • A build is a sequence of steps used to build an application and run tests over it
  • A step is an atomic command, for example:
    • Check out the files of a project
    • Build the application
    • Run tests

A build slave is a machine that is in charge of running a build. It can be located anywhere as long as it can reach the build master. Thanks to this architecture, Buildbot scales very well. All of the heavy lifting is done on build slaves, and you can have as many of them as you want.

A very simple and clear design makes Buildbot very flexible. Each build step is just a single command. Buildbot is written in Python, but it is completely language agnostic, so a build step can be absolutely anything. The process exit code is used to decide whether the step ended in success, and all standard output of the step command is captured by default. Most testing tools and compilers follow good design practices: they indicate failures with proper exit codes and return readable error/warning messages on the stdout or stderr output streams. If that is not the case, you can usually wrap them easily with a Bash script. In most cases, this is a simple task. Thanks to this, a lot of projects can be integrated with Buildbot with only minimal effort.
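
To give a feel for this, the following is a rough sketch of a build factory as it might appear in Buildbot's master.cfg configuration file, assuming a reasonably recent Buildbot version (the repository URL and the test command are illustrative):

    from buildbot.plugins import steps, util

    factory = util.BuildFactory()
    # Step 1: check out the project sources.
    factory.addStep(steps.Git(
        repourl='https://github.com/example/project.git',
        mode='incremental',
    ))
    # Step 2: run the test suite; a non-zero exit code marks the step as failed.
    factory.addStep(steps.ShellCommand(
        command=['python', '-m', 'pytest'],
    ))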

The next advantage of Buildbot is that it supports many version control systems out of the box without the need to install any additional plugins:

  • CVS
  • Subversion
  • Perforce
  • Bzr
  • Darcs
  • Git
  • Mercurial
  • Monotone

The main disadvantage of Buildbot is its lack of higher-level tools for presenting build results. For instance, other projects, such as Jenkins, have a notion of the unit tests run during a build. If you feed them with test results data in the proper format (usually XML), they can present all the tests in a readable form, such as tables and graphs. Buildbot does not have such a built-in feature, and this is the price it pays for its flexibility and simplicity. If you need some extra bells and whistles, you need to build them yourself or search for some custom extension. On the other hand, thanks to such simplicity, it is easier to reason about Buildbot's behavior and maintain it. So, there is always a tradeoff.

Travis CI

Travis CI (https://travis-ci.org/) is a continuous integration system sold in a Software as a Service form. It is a paid service for enterprises but can be used completely for free for open source projects hosted on GitHub.

Figure 11 Travis CI page for django-userena project showing failed builds in its build matrix

Naturally, it is the free part of its pricing plan that has made it very popular. Currently, it is one of the most popular CI solutions for projects hosted on GitHub. But its biggest advantage over older projects, such as Buildbot or Jenkins, is how the build configuration is stored. The whole build definition is provided in a single .travis.yml file in the root of the project repository. Travis works only with GitHub, so if you have enabled such integration, your project will be tested on every commit as long as there is a .travis.yml file.
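
A minimal .travis.yml for a Python project could look roughly like the following (the dependency file and the test command are only illustrative):

    language: python
    python:
      - "2.7"
      - "3.5"
    install:
      - pip install -r requirements.txt
    script:
      - python -m pytest

Note that every version listed under the python key becomes a separate build job, which is also how Travis provides the kind of build matrix shown in Figure 11.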

Having the whole CI configuration for a project in its code repository is really a great approach. This makes the whole process a lot clearer for the developers and also allows for more flexibility. In systems where the build configuration must be provided to the build server separately (using a web interface or through server configuration), there is always some additional friction when something new needs to be added to the testing rig. In some organizations, where only selected staff are authorized to maintain the CI system, this really slows down the process of adding new build steps. Also, sometimes there is a need to test different branches of the code with completely different procedures. When the build configuration is available in the project sources, it is a lot easier to do so.

The other great feature of Travis is the emphasis it puts on running builds in clean environments. Every build is executed in a completely fresh virtual machine, so there is no risk of some persisted state affecting build results. Travis uses a rather big virtual machine image, so you have a lot of open source software and programming environments available without the need for additional installs. In this isolated environment, you have full administrative rights, so you can download and install anything you need to perform your build, and the syntax of the .travis.yml file makes that very easy. Unfortunately, you do not have a lot of choice over the operating system used as the base of your testing environment. Travis does not allow you to provide your own virtual machine images, so you must rely on the very limited options provided. Usually, there is no choice at all, and all builds must be done in some version of Ubuntu or Mac OS X (still experimental at the time of writing this book). Sometimes there is an option to select some legacy version of the system or a preview of the new testing environment, but such a possibility is always temporary. There is a way to bypass this: you can run another virtual machine inside the one provided by Travis, using something that allows you to easily encode the VM configuration in your project sources, such as Vagrant or Docker. But this adds more time to your builds, so stacking virtual machines that way is neither the best nor the most efficient approach if you need to perform tests under different operating systems. If this is an important feature for you, then it is a sign that Travis is not the service for you.

The biggest downside of Travis is that it is completely locked to GitHub. If you would like to use it in your open source project, then this is not a big deal. For enterprises and closed source projects, this is mostly an unsolvable issue.

GitLab CI

GitLab CI is a part of the larger GitLab project. It is available as both a paid service (Enterprise Edition) and an open source project that you may host on your own infrastructure (Community Edition). The open source edition lacks some of the paid service's features, but in most cases offers everything that a company needs from software that manages version control repositories and continuous integration.

GitLab CI is very similar in feature set to Travis. It is even configured with a very similar YAML syntax, stored in the .gitlab-ci.yml file. The biggest difference is that the GitLab Enterprise Edition pricing model does not provide you with free accounts for open source projects. The Community Edition is itself open source, but you need to have some infrastructure of your own in order to run it.
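
An equivalent minimal .gitlab-ci.yml could look roughly like this (the commands are only illustrative, and the image key assumes a Docker-based runner, which is discussed next):

    image: python:3.5

    test:
      script:
        - pip install -r requirements.txt
        - python -m pytest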

When compared with Travis, GitLab has the obvious advantage of giving you more control over the execution environment. Unfortunately, in the area of environment isolation, the default build runner in GitLab is a bit inferior. The process called GitLab Runner executes all the build steps in the same environment it runs in, so it works more like Jenkins' or Buildbot's slave servers. Fortunately, it plays well with Docker, so you can easily add more isolation with container-based virtualization, but this will require some effort and additional setup. With Travis, you get full isolation out of the box.

Choosing the right tool and common pitfalls

As already said, there is no perfect CI tool that will suit every project and, most importantly, every organization and the workflow it uses. I can give only a single suggestion, for open source projects hosted on GitHub: for small code bases with platform-independent code, Travis CI seems like the best choice. It is easy to start with and will give you almost instant gratification with a minimal amount of work.

For projects with closed sources, the situation is completely different. It is possible that you will need to evaluate a few CI systems in various setups until you are able to decide which one is best for you. We discussed only four of the popular tools, but they should be a rather representative group. To make your decision a bit easier, we will discuss some of the common problems related to continuous integration systems. In some of the available CI systems, it is easier to make certain kinds of mistakes than in others. On the other hand, some of the problems may not be important for every application. I hope that by combining the knowledge of your needs with this short summary, it will be easier to make your first decision the right one.

Problem 1 – too complex build strategies

Some organizations like to formalize and structure things beyond the reasonable levels. In companies that create computer software, this is especially true in two areas: project management tools and build strategies on CI servers.

Excessive configuration of project management tools usually ends with issue processing workflows in JIRA (or any other management software) so complicated that they will never fit on a single wall when expressed as graphs. If your manager has such a configuration/control mania, you can either talk to them or swap them for another manager (read: quit your current job). Unfortunately, this does not reliably ensure any improvement in that matter.

But when it comes to CI, we can do more. Continuous integration tools are usually maintained and configured by us: developers. These are OUR tools that are supposed to improve OUR work. If someone has an irresistible temptation to toggle every switch and turn every knob possible, then they should be kept away from the configuration of CI systems, especially if their main job is to talk the whole day and make decisions.

There is really no need to create complex strategies for deciding which commit or branch should be tested. No need to limit testing to specific tags. No need to queue commits in order to perform larger builds. No need to disable building via custom commit messages. Your continuous integration process should be simple to reason about. Test everything! Test always! That's all! If there are not enough hardware resources to test every commit, then add more hardware. Remember that the programmer's time is more expensive than silicon chips.

Problem 2 – too long building time

Long building times kill the performance of any developer. If you need to wait hours to know whether your work was done properly, then there is no way you can be productive. Of course, having something else to do while your feature is being tested helps a lot. Anyway, as humans, we are really terrible at multitasking. Switching between different problems takes time and, in the end, reduces our programming performance to zero. It is simply hard to keep focus when working on multiple problems at once.

The solution is very simple: keep your builds fast at any price. First, try to find bottlenecks and optimize them. If the performance of the build servers is the problem, then try to scale out. If this does not help, then split each build into smaller parts and parallelize them.

There are plenty of ways to speed up slow builds, but sometimes nothing can be done about the problem. For instance, if you have automated browser tests or need to perform long-running calls to external services, then it is very hard to improve performance beyond some hard limit. When the speed of automated acceptance tests in your CI becomes a problem, you can loosen the test everything, test always rule a bit. What matters the most for programmers are usually unit tests and static analysis, so, depending on your workflow, slow browser tests may sometimes be deferred to the moment when a release is being prepared.

The other solution to slow build runs is rethinking the overall architectural design of your application. If testing the application takes a lot of time, it is very often a sign that it should be split into a few independent components that can be developed and tested separately. Writing software as huge monoliths is one of the shortest paths to failure. Usually, any software engineering process breaks down on software that is not modularized properly.

Problem 3 – external job definitions

Some continuous integration systems, especially Jenkins, allow you to set up most of the build configuration and testing process completely through the web UI, without the need to touch the code repository. But you should really avoid putting anything more than simple entry points to build steps/commands into external systems. This is the kind of CI anti-pattern that can cause nothing but trouble.

Your building and testing process is usually tightly tied to your codebase. If you store its whole definition in an external system such as Jenkins or Buildbot, then it will be really hard to introduce changes to that process.

As an example of a problem introduced by a global external build definition, let's assume that we have some open source project. Its initial development was hectic and we did not care about any style guidelines. The project was successful, so the development required another major release. After some time, we moved from version 0.x to 1.0 and decided to reformat all of our code to conform to the PEP 8 guidelines. It is a good approach to have a static analysis check as part of CI builds, so we decided to add the execution of the pep8 tool to our build definition. If we had only a global external build configuration, then there would be a problem whenever some improvement needed to be made to the code of older versions. Let's say that there is a critical security issue that needs to be fixed in both branches of the application: 0.x and 1.y. We know that anything below version 1.0 was not compliant with the style guide, so the newly introduced check against PEP 8 would mark the build of the 0.x branch as failed even though the security fix itself is perfectly fine.

The solution to the problem is to keep the definition of your build process as close to the source as possible. With some CI systems (Travis CI and GitLab CI), you get this workflow by default. With other solutions (Jenkins and Buildbot), you need to take additional care in order to ensure that most of the build process is included in your code instead of in some external tool configuration. Fortunately, you have a lot of choices that allow that kind of automation (see the short sketch after this list):

  • Bash scripts
  • Makefiles
  • Python code
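
For example, the external job definition can then be reduced to a single call to a script kept in the repository; the script name and its contents here are, of course, only illustrative:

    #!/bin/bash
    # run_tests.sh - the only thing the external CI job definition needs to invoke
    set -e
    pip install -r requirements.txt
    pep8 src/
    python -m pytest

Because such a script lives in the repository, the 0.x branch from the previous example can simply keep its own version without the pep8 check, while the CI job definition stays the same for every branch.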

Problem 4 – lack of isolation

We have already discussed the importance of isolation when programming in Python many times. We know that the best approach to isolating the Python execution environment at the package level is to use virtual environments with virtualenv or python -m venv. Unfortunately, when testing code for the purpose of continuous integration processes, this is usually not enough. The testing environment should be as close as possible to the production environment, and it is really hard to achieve that without additional system-level virtualization.

The main issues you may experience when building your application without proper system-level isolation are:

  • Some state persisted between builds either on the filesystem or in backing services (caches, databases, and so on)
  • Multiple builds or tests interfering with each other through the environment, filesystem, or backing services
  • Problems caused by specific characteristics of the production operating system that are not caught on the build server

The preceding issues are particularly troublesome if you need to perform concurrent builds of the same application or even parallelize single builds.

Some Python frameworks (mostly Django) provide an additional level of isolation for databases, trying to ensure the storage will be cleaned before running tests. There is also quite a useful extension for py.test called pytest-dbfixtures (refer to https://github.com/ClearcodeHQ/pytest-dbfixtures) that allows you to achieve this even more reliably. Anyway, such solutions add even more complexity to your builds instead of reducing it. Always clearing the virtual machine on every build (in the style of Travis CI) seems like a more elegant and simpler approach.
