Chapter 8. Deploying Spring Boot applications

This chapter covers

  • Deploying WAR files
  • Database migration
  • Deploying to the cloud

Think of your favorite action movie. Now imagine going to see that movie in the theater and being taken on a thrilling audio-visual ride with high-speed chases, explosions, and battles, only to have it come to a sudden end just before the good guys take down the bad guys. Instead of seeing the movie’s conflict resolved, the theater lights come on and everyone is ushered out the door.

Although the lead-up was exciting, it’s the climax of the movie that’s important. Without it, it’s action for action’s sake.

Now imagine developing applications and putting a lot of effort and creativity into solving the business problem, but then never deploying the application for others to use and enjoy. Sure, most applications we write don’t involve car chases or explosions (at least I hope not), but there’s a certain rush we get along the way. Of course, not every line of code we write is destined for production, but it’d be a big letdown if none of it ever was deployed.

Up to this point we’ve been focused on using features of Spring Boot that help us develop an application. There have been some exciting steps along the way. But it’s all for nothing if we don’t cross the finish line and deploy the application.

In this chapter we’re going to step beyond developing applications with Spring Boot and look at how to deploy those applications. Although this may seem obvious for anyone who has ever deployed a Java-based application, there are some features of Spring Boot and related Spring projects we can draw on that make deploying Spring Boot applications unique.

In fact, unlike most Java web applications, which are typically deployed to an application server as WAR files, Spring Boot offers several deployment options. Before we look at how to deploy a Spring Boot application, let’s consider all of the options and choose a few that suit our needs best.

8.1. Weighing deployment options

There are several ways to build and run Spring Boot applications. You’ve already seen a few of them:

  • Running the application in the IDE (either Spring ToolSuite or IntelliJ IDEA)
  • Running from the command line using the Maven spring-boot:run goal or Gradle bootRun task
  • Using Maven or Gradle to produce an executable JAR file that can be run at the command line
  • Using the Spring Boot CLI to run Groovy scripts at the command line
  • Using the Spring Boot CLI to produce an executable JAR file that can be run at the command line

Any of these choices is suitable for running the application while you’re still developing it. But what about when you’re ready to deploy the application into a production or other non-development environment?

Although none of the choices listed seems fitting for deploying an application beyond development, the truth is that all but one of them are valid choices. Running an application within the IDE is certainly ill-suited for a production deployment. Executable JAR files and the Spring Boot CLI, however, are still on the table and are great choices when deploying to a cloud environment.

That said, you’re probably wondering how to deploy a Spring Boot application to a more traditional application server environment such as Tomcat, WebSphere, or WebLogic. In those cases, executable JAR files and Groovy source code won’t work. For application server deployment, you’ll need your application wrapped up in a WAR file.

As it turns out, Spring Boot applications can be packaged for deployment in several ways, as described in table 8.1.

Table 8.1. Spring Boot deployment choices

Deployment artifact | Produced by | Target environment
Raw Groovy source | Written by hand | Cloud Foundry and container deployment, such as with Docker
Executable JAR | Maven, Gradle, or Spring Boot CLI | Cloud environments, including Cloud Foundry and Heroku, as well as container deployment, such as with Docker
WAR | Maven or Gradle | Java application servers or cloud environments such as Cloud Foundry

As you can see in table 8.1, your target environment will need to be a factor in your choice. If you’re deploying to a Tomcat server running in your own data center, then the choice of a WAR file has been made for you. On the other hand, if you’ll be deploying to Cloud Foundry, you’re welcome to choose any of the deployment options shown.

In this chapter, we’re going to focus our attention on the following options:

  • Deploying a WAR file to a Java application server
  • Deploying an executable JAR file to Cloud Foundry
  • Deploying an executable JAR file to Heroku (where the build is performed by Heroku)

As we explore these scenarios, we’re also going to have to deal with the fact that we’ve been using an embedded H2 database as we’ve developed the application, and we’ll look at ways to replace it with a production-ready database.

To get started, let’s take a look at how we can build our reading-list application into a WAR file that can be deployed to a Java application server such as Tomcat, WebSphere, or WebLogic.

8.2. Deploying to an application server

Thus far, every time we’ve run the reading-list application, the web application has been served from a Tomcat server embedded in the application. Compared to a conventional Java web application, the tables were turned. The application has not been deployed in Tomcat; rather, Tomcat has been deployed in the application.

Thanks in large part to Spring Boot auto-configuration, we’ve not been required to create a web.xml file or servlet initializer class to declare Spring’s DispatcherServlet for Spring MVC. But if we’re going to deploy the application to a Java application server, we’re going to need to build a WAR file. And so that the application server will know how to run the application, we’ll also need to include a servlet initializer in that WAR file.

8.2.1. Building a WAR file

As it turns out, building a WAR file isn’t that difficult. If you’re using Gradle to build the application, you simply must apply the “war” plugin:

apply plugin: 'war'

Then, replace the existing jar configuration with the following war configuration in build.gradle:

war {
    baseName = 'readinglist'
    version = '0.0.1-SNAPSHOT'
}

The only difference between this war configuration and the previous jar configuration is the change of the letter j to w.

If you’re using Maven to build the project, then it’s even easier to get a WAR file. All you need to do is change the <packaging> element’s value from jar to war.

<packaging>war</packaging>

Those are the only changes required to produce a WAR file. But that WAR file will be useless unless it includes a web.xml file or a servlet initializer to enable Spring MVC’s DispatcherServlet.

Spring Boot can help here. It provides SpringBootServletInitializer, a special Spring Boot-aware implementation of Spring’s WebApplicationInitializer. Aside from configuring Spring’s DispatcherServlet, SpringBootServletInitializer also looks for any beans in the Spring application context that are of type Filter, Servlet, or ServletContextInitializer and binds them to the servlet container.

To use SpringBootServletInitializer, simply create a subclass and override the configure() method to specify the Spring configuration class. Listing 8.1 shows ReadingListServletInitializer, a subclass of SpringBootServletInitializer that we’ll use for the reading-list application.

Listing 8.1. Extending SpringBootServletInitializer for the reading-list application
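A minimal sketch of such an initializer follows. It assumes the bootstrap class is named Application and lives in the readinglist package; also note that the package containing SpringBootServletInitializer has moved between Spring Boot versions, so check the import against the version you're using:

```java
package readinglist;

import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;

public class ReadingListServletInitializer
       extends SpringBootServletInitializer {

  @Override
  protected SpringApplicationBuilder configure(
                             SpringApplicationBuilder builder) {
    // Register the application's Spring configuration class
    return builder.sources(Application.class);
  }

}
```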

As you can see, the configure() method is given a SpringApplicationBuilder as a parameter and returns it as a result. In between, it calls the sources() method to register any Spring configuration classes. In this case, it only registers the Application class, which, as you’ll recall, served a dual purpose as both a bootstrap class (with a main() method) and a Spring configuration class.

Even though the reading-list application has other Spring configuration classes, it’s not necessary to register them all with the sources() method. The Application class is annotated with @SpringBootApplication, which implicitly enables component-scanning. Component-scanning will discover and pull in any other configuration classes that it finds.

Now we’re ready to build the application. If you’re using Gradle to build the project, simply invoke the build task:

$ gradle build

Assuming no problems, the build will produce a file named readinglist-0.0.1-SNAPSHOT.war in build/libs.

For a Maven-based build, use the package goal:

$ mvn package

After a successful Maven build, the WAR file will be found in the “target” directory.

All that’s left is to deploy the application. The deployment procedure varies across application servers, so consult the documentation for your application server’s specific deployment procedure.

For Tomcat, you can deploy an application by copying the WAR file into Tomcat’s webapps directory. If Tomcat is running (or once it starts up if it isn’t currently running), it will detect the presence of the WAR file, expand it, and install it.

Assuming that you didn’t rename the WAR file before deploying it, the servlet context path will be the same as the base name of the WAR file, or /readinglist-0.0.1-SNAPSHOT in the case of the reading-list application. Point your browser at http://server:port/readinglist-0.0.1-SNAPSHOT to kick the tires on the app.

One other thing worth noting: even though we’re building a WAR file, it may still be possible to run it without deploying to an application server. Assuming you don’t remove the main() method from Application, the WAR file produced by the build can also be run as if it were an executable JAR file:

$ java -jar readinglist-0.0.1-SNAPSHOT.war

In effect, you get two deployment options out of a single deployment artifact!

At this point, the application should be up and running in Tomcat. But it’s still using the embedded H2 database. An embedded database was handy while developing the application, but it’s not a great choice in production. Let’s see how to wire in a different data source when deploying to production.

8.2.2. Creating a production profile

Thanks to auto-configuration, we have a DataSource bean that references an embedded H2 database. More specifically, the DataSource bean is a data source pool, typically org.apache.tomcat.jdbc.pool.DataSource. Therefore, it may seem obvious that in order to use some database other than the embedded H2 database, we simply need to declare our own DataSource bean, overriding the auto-configured DataSource, to reference a production database of our choosing.

For example, suppose that we wanted to work with a PostgreSQL database running on localhost with the name “readingList”. The following @Bean method would declare our DataSource bean:

@Bean
@Profile("production")
public DataSource dataSource() {
  DataSource ds = new DataSource();
  ds.setDriverClassName("org.postgresql.Driver");
  ds.setUrl("jdbc:postgresql://localhost:5432/readinglist");
  ds.setUsername("habuma");
  ds.setPassword("password");
  return ds;
}

Here the DataSource type is Tomcat’s org.apache.tomcat.jdbc.pool.DataSource, not to be confused with javax.sql.DataSource, which it ultimately implements. The details required to connect to the database (including the JDBC driver class name, the database URL, and the database credentials) are given to the DataSource instance. With this bean declared, the default auto-configured DataSource bean will be passed over.

The key thing to notice about this @Bean method is that it is also annotated with @Profile to specify that it should only be created if the “production” profile is active. Because of this, we can still use the embedded H2 database while developing the application, but use the PostgreSQL database in production by activating the profile.

Although that should do the trick, there’s a better way to configure the database details without explicitly declaring our own DataSource bean. Rather than replace the auto-configured DataSource bean, we can configure it via properties in application.yml or application.properties. Table 8.2 lists all of the properties that are useful for configuring the DataSource bean.

Table 8.2. DataSource configuration properties

Property (prefixed with spring.datasource.) | Description
name | The name of the data source
initialize | Whether or not to populate using data.sql (default: true)
schema | The name of a schema (DDL) script resource
data | The name of a data (DML) script resource
sql-script-encoding | The character set for reading SQL scripts
platform | The platform to use when reading the schema resource (for example, “schema-{platform}.sql”)
continue-on-error | Whether or not to continue if initialization fails (default: false)
separator | The separator in the SQL scripts (default: ;)
driver-class-name | The fully qualified class name of the JDBC driver (can often be automatically inferred from the URL)
url | The database URL
username | The database username
password | The database password
jndi-name | A JNDI name for looking up a data source via JNDI
max-active | Maximum active connections (default: 100)
max-idle | Maximum idle connections (default: 8)
min-idle | Minimum idle connections (default: 8)
initial-size | The initial size of the connection pool (default: 10)
validation-query | A query to execute to verify the connection
test-on-borrow | Whether or not to test a connection as it’s borrowed from the pool (default: false)
test-on-return | Whether or not to test a connection as it’s returned to the pool (default: false)
test-while-idle | Whether or not to test a connection while it is idle (default: false)
time-between-eviction-runs-millis | How often (in milliseconds) to run the connection evictor (default: 5000)
min-evictable-idle-time-millis | The minimum time (in milliseconds) that a connection can be idle before it’s eligible for eviction (default: 60000)
max-wait | The maximum time (in milliseconds) that the pool will wait when no connections are available before failing (default: 30000)
jmx-enabled | Whether or not the data source is managed by JMX (default: false)

Most of the properties in table 8.2 are for fine-tuning the connection pool. I’ll leave it to you to tinker with those settings as you see fit. What we’re interested in now, however, is setting a few properties that will point the DataSource bean at PostgreSQL instead of the embedded H2 database. Specifically, the spring.datasource.url, spring.datasource.username, and spring.datasource.password properties are what we need.

As I’m writing this, I have a PostgreSQL database running locally, listening on port 5432, with a username and password of “habuma” and “password”. Therefore, the following “production” profile in application.yml is what I used:

---
spring:
  profiles: production
  datasource:
    url: jdbc:postgresql://localhost:5432/readinglist
    username: habuma
    password: password
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect

Notice that this excerpt starts with --- and the first property set is spring.profiles. This indicates that the properties that follow will only be applied if the “production” profile is active.

Next, the spring.datasource.url, spring.datasource.username, and spring.datasource.password properties are set. Note that it’s usually unnecessary to set the spring.datasource.driver-class-name property, as Spring Boot can infer it from the value of the spring.datasource.url property. I also had to set some JPA properties. The spring.jpa.database-platform property sets the underlying JPA engine to use Hibernate’s PostgreSQL dialect.

To enable this profile, we’ll need to set the spring.profiles.active property to “production”. There are several ways to set this property, but the most convenient way is by setting a system environment variable on the machine running the application server. To enable the “production” profile before starting Tomcat, I exported the SPRING_PROFILES_ACTIVE environment variable like this:

$ export SPRING_PROFILES_ACTIVE=production

You probably noticed that SPRING_PROFILES_ACTIVE is different from spring.profiles.active. It’s not possible to export an environment variable with periods in the name, so it was necessary to alter the name slightly. From Spring’s point of view, the two names are equivalent.
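If an environment variable isn’t convenient, the property can also be set in a couple of other ways, sketched here. The file paths shown are typical defaults, so verify them against your own installation:

```shell
# When running the WAR file directly, pass the property on the command line
$ java -jar readinglist-0.0.1-SNAPSHOT.war --spring.profiles.active=production

# Or, for Tomcat, set it as a system property in $CATALINA_HOME/bin/setenv.sh
CATALINA_OPTS="-Dspring.profiles.active=production"
```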

We’re almost ready to deploy the application to an application server and see it run. In fact, if you are feeling adventurous, go ahead and try it. You’ll run into a small problem, however.

By default, Spring Boot configures Hibernate to create the schema automatically when using the embedded H2 database. More specifically, it sets Hibernate’s hibernate.hbm2ddl.auto to create-drop, indicating that the schema should be created when Hibernate’s SessionFactory is created and dropped when it is closed. But it’s set to do nothing if you’re not using an embedded H2 database. This means that our application’s tables won’t exist and you’ll see errors as it tries to query those nonexistent tables.

8.2.3. Enabling database migration

One option is to set the hibernate.hbm2ddl.auto property to create, create-drop, or update via Spring Boot’s spring.jpa.hibernate.ddl-auto property. For instance, to set hibernate.hbm2ddl.auto to create-drop we could add the following lines to application.yml:

spring:
  jpa:
    hibernate:
      ddl-auto: create-drop

This, however, is not ideal for production, as the database schema would be wiped clean and rebuilt from scratch any time the application was restarted. It may be tempting to set it to update, but even that isn’t recommended in production.

Alternatively, we could define the schema in schema.sql. This would work fine the first time, but every time we started the application thereafter, the initialization scripts would fail because the tables in question would already exist. This would force us to take special care in writing our initialization scripts to not attempt to repeat any work that has already been done.
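For example, to make a hand-rolled schema.sql safe to run repeatedly, every statement would need a guard like the one shown here. The syntax varies by database (this form works on PostgreSQL 9.1 and later), and the column definitions are illustrative:

```sql
-- Only create the table if a previous run hasn't already done so
create table if not exists Reader (
  username varchar(25) not null,
  password varchar(25) not null,
  fullname varchar(50) not null,
  primary key (username)
);
```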

A better choice is to use a database migration library. Database migration libraries work from a set of database scripts and keep careful track of the ones that have already been applied so that they won’t be applied more than once. By including the scripts within each deployment of the application, the database is able to evolve in concert with the application.

Spring Boot includes auto-configuration support for two popular database migration libraries:

  • Flyway (http://flywaydb.org)
  • Liquibase (http://www.liquibase.org)

All you need to do to use either of these database migration libraries with Spring Boot is to include them as dependencies in the build and write the scripts. Let’s see how they work, starting with Flyway.

Defining database migration with Flyway

Flyway is a very simple, open source database migration library that uses SQL for defining the migration scripts. The idea is that each script is given a version number, and Flyway will execute each of them in order to arrive at the desired state of the database. It also records the status of scripts it has executed so that it won’t run them again.

For the reading-list application, we’re starting with an empty database with no tables or data. Therefore, the script we’ll need to get started will need to create the Reader and Book tables, including any foreign-key constraints and initial data. Listing 8.2 shows the Flyway script we’ll need to go from an empty database to one that our application can use.

Listing 8.2. A database initialization script for Flyway
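A sketch of what such a script might contain follows. The exact column definitions depend on the application’s domain types, so treat the details here as illustrative:

```sql
create table Reader (
  username varchar(25) not null,
  password varchar(25) not null,
  fullname varchar(50) not null,
  primary key (username)
);

create table Book (
  id serial not null,
  author varchar(50) not null,
  description varchar(2000) not null,
  isbn varchar(10) not null,
  title varchar(250) not null,
  reader varchar(25) not null,
  primary key (id),
  foreign key (reader) references Reader (username)
);

-- Seed the application with an initial user
insert into Reader (username, password, fullname)
            values ('craig', 'password', 'Craig Walls');
```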

As you can see, the Flyway script is just SQL. What makes it work with Flyway is where it’s placed in the classpath and how it’s named. Flyway scripts follow a naming convention that includes the version number, as illustrated in figure 8.1.

Figure 8.1. Flyway scripts are named with their version number.

All Flyway scripts have names that start with a capital V, followed by the script’s version number, two underscores, and a description. Because this is the first script in the migration, it will be version 1. The description is flexible and serves mainly to convey the script’s purpose. Later, should we need to add a new table to the database or a new column to an existing table, we can create another script whose name carries version 2.

Flyway scripts need to be placed in the path /db/migration relative to the application’s classpath root. Therefore, this script needs to be placed in src/main/resources/db/migration within the project.
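Putting the naming convention and location together, the project’s migration scripts might be laid out like this (the second script is a hypothetical future migration):

```
src/main/resources/
└── db/
    └── migration/
        ├── V1__initialize.sql
        └── V2__add_rating_column.sql
```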

You’ll also need to tell Hibernate to not attempt to create the tables by setting spring.jpa.hibernate.ddl-auto to none. The following lines in application.yml take care of that:

spring:
  jpa:
    hibernate:
      ddl-auto: none

All that’s left is to add Flyway as a dependency in the project build. Here’s the dependency that’s required for Gradle:

compile("org.flywaydb:flyway-core")

In a Maven build, the <dependency> is as follows:

<dependency>
  <groupId>org.flywaydb</groupId>
  <artifactId>flyway-core</artifactId>
</dependency>

When the application is deployed and running, Spring Boot will detect Flyway in the classpath and auto-configure the beans necessary to enable it. Flyway will step through any scripts in /db/migration and apply them if they haven’t already been applied. As each script is executed, an entry will be written to a table named schema_version. The next time the application starts, Flyway will see that those scripts have been recorded in schema_version and skip over them.

Defining database migration with Liquibase

Flyway is simple to use, especially with help from Spring Boot auto-configuration. But defining migration scripts with SQL is a two-edged sword. Although it’s easy and natural to work with SQL, you run the risk of defining a migration script that works with one database platform but not another.

Rather than be limited to platform-specific SQL, Liquibase supports several formats for writing migration scripts that are agnostic to the underlying platform. These include XML, YAML, and JSON. And, if you really want it, Liquibase also supports SQL scripts.

The first step to using Liquibase with Spring Boot is to add it as a dependency in your build. The Gradle dependency is as follows:

compile("org.liquibase:liquibase-core")

For Maven, here’s the <dependency> you’ll need:

<dependency>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-core</artifactId>
</dependency>

Spring Boot auto-configuration takes it from there, wiring up the beans that support Liquibase. By default, those beans are wired to look for all of the migration scripts in a single file named db.changelog-master.yaml in /db/changelog (relative to the classpath root). The migration script in listing 8.3 includes instructions to initialize the database for the reading-list application.

Listing 8.3. A Liquibase script for initializing the reading-list database
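A sketch of such a changelog follows. As with the Flyway script, the column details are illustrative, and the full file would also define the book table:

```yaml
databaseChangeLog:
  - changeSet:
      id: 1
      author: habuma
      changes:
        - createTable:
            tableName: reader
            columns:
              - column:
                  name: username
                  type: varchar(25)
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: password
                  type: varchar(25)
                  constraints:
                    nullable: false
              - column:
                  name: fullname
                  type: varchar(50)
                  constraints:
                    nullable: false
        - insert:
            tableName: reader
            columns:
              - column:
                  name: username
                  value: craig
              - column:
                  name: password
                  value: password
```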

As you can see, the YAML format is a bit more verbose than the equivalent Flyway SQL script. But it’s still fairly clear as to its purpose and it isn’t coupled to any specific database platform.

Unlike Flyway, which uses multiple scripts, one for each change set, Liquibase collects all of its changesets in the same file. Note the id property on the line following the changeSet entry. Future changes to the database can be included by adding a new changeset with a different id. Also note that the id property isn’t necessarily numeric and may contain any text you’d like.
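For instance, a later change that adds a column to the reader table might be appended to the same changelog file as a new changeset (the email column here is hypothetical):

```yaml
  - changeSet:
      id: 2
      author: habuma
      changes:
        - addColumn:
            tableName: reader
            columns:
              - column:
                  name: email
                  type: varchar(50)
```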

When the application starts up, Liquibase will read the changeset instructions in db.changelog-master.yaml, compare them with what it may have previously written to the databaseChangeLog table, and apply any changesets that have not yet been applied.

Although the example given here is expressed in YAML format, you’re welcome to choose one of Liquibase’s other supported formats, such as XML or JSON. Simply set the liquibase.change-log property (in application.properties or application.yml) to reflect the file you want Liquibase to load. For example, to use an XML changeset file, set liquibase.change-log like this:

liquibase:
  change-log: classpath:/db/changelog/db.changelog-master.xml

Spring Boot auto-configuration makes both Liquibase and Flyway a piece of cake to work with. But there’s a lot more to what each of these database migration libraries can do beyond what we’ve seen here. I encourage you to refer to each project’s documentation for more details.

We’ve seen how building Spring Boot applications for deployment into a conventional Java application server is largely a matter of creating a subclass of SpringBootServletInitializer and adjusting the build specification to produce a WAR file instead of a JAR file. But as we’ll see next, Spring Boot applications are even easier to build for the cloud.

8.3. Pushing to the cloud

Server hardware can be expensive to purchase and maintain. Properly scaling servers to handle heavy loads can be tricky and even prohibitive for some organizations. These days, deploying applications to the cloud is a compelling and cost-effective alternative to running your own data center.

There are several cloud choices available, but those that offer a platform as a service (PaaS) are among the most compelling. PaaS offers a ready-made application deployment platform with several add-on services (such as databases and message brokers) to bind to your applications. In addition, as your application requires additional horsepower, cloud platforms make it easy to scale up (or down) your application on the fly by adding and removing instances.

Now that we’ve deployed the reading-list application to a traditional application server, we’re going to try deploying it to the cloud. Specifically, we’re going to deploy our application to two of the most popular PaaS platforms available: Cloud Foundry and Heroku.

8.3.1. Deploying to Cloud Foundry

Cloud Foundry is a PaaS platform from Pivotal, the same company that sponsors the Spring Framework and the other libraries in the Spring platform. One of the most compelling things about Cloud Foundry is that it’s available both as open source and in several commercial distributions, giving you the choice of how and where you use it. It can even be run inside the firewall in a corporate data center, offering a private cloud.

For the reading-list application, we’re going to deploy to Pivotal Web Services (PWS), a public Cloud Foundry hosted by Pivotal at http://run.pivotal.io. If you want to work with PWS, you’ll need to sign up for an account. PWS offers a 60-day free trial and doesn’t even require you to give any credit card information during the trial.

Once you’ve signed up for PWS, you’ll need to download and install the cf command-line tool from https://console.run.pivotal.io/tools. You’ll use the cf tool to push applications to Cloud Foundry. But the first thing you’ll use it for is to log into your PWS account:

$ cf login -a https://api.run.pivotal.io
API endpoint: https://api.run.pivotal.io

Email> {your email}

Password> {your password}
Authenticating...
OK

Now we’re ready to take the reading-list application to the cloud. As it turns out, our reading-list project is already ready to be deployed to Cloud Foundry. All we need to do is use the cf push command:

$ cf push sbia-readinglist -p build/libs/readinglist-0.0.1-SNAPSHOT.war

The first argument to cf push is the name given to the application in Cloud Foundry. Among other things, this name will be used as the subdomain that the application will be hosted at. In this case, the full URL for the application will be http://sbia-readinglist.cfapps.io. Therefore, it’s important that the name you give the application be unique so that it doesn’t collide with any other application deployed in Cloud Foundry (including those deployed by other Cloud Foundry users).

Because dreaming up a unique name may be tricky, the cf push command offers a --random-route option that will randomly produce a subdomain for you. Here’s how to push the reading-list application so that a random route is generated:

$ cf push sbia-readinglist -p build/libs/readinglist-0.0.1-SNAPSHOT.war --random-route

When using --random-route, the application name is still required, but two randomly chosen words will be appended to it to produce the subdomain. (When I tried it, the resulting subdomain was sbia-readinglist-gastroenterological-stethoscope.)

Not just WAR files

Although we’re going to deploy the reading-list application as a WAR file, Cloud Foundry will be happy to deploy Spring Boot applications in any form they come in, including executable JAR files and even uncompiled Groovy scripts run via the Spring Boot CLI.

Assuming everything goes well, the application should be deployed and ready to handle requests. Supposing that the subdomain is sbia-readinglist, you can point your browser at http://sbia-readinglist.cfapps.io to see it in action. You should be prompted with the login page. As you’ll recall, the database migration script inserted a user named “craig” with a password of “password”. Use those to log into the application.

Go ahead and play around with the application and add a few books to the reading list. Everything should work. But something still isn’t quite right. If you were to restart the application (using the cf restart command) and then log back into the application, you’d see that your reading list is empty. Any book you’ve added before restarting the application will be gone.

The reason the data doesn’t survive an application restart is because we’re still using the embedded H2 database. You can verify this by requesting the Actuator’s /health endpoint, which will reply with something like this:

{
  "status": "UP",
  "diskSpace": {
    "status": "UP",
    "free": 834236510208,
    "threshold": 10485760
  },
  "db": {
    "status": "UP",
    "database": "H2",
    "hello": 1
  }
}

Notice the value of the db.database property. It confirms any suspicion we might have had that the database is an embedded H2 database. We need to fix that.

As it turns out, Cloud Foundry offers a few database options to choose from in the form of marketplace services, including MySQL and PostgreSQL. Because we already have the PostgreSQL JDBC driver in our project, we’ll use the PostgreSQL service from the marketplace, which is named “elephantsql”.

The elephantsql service comes with a handful of different plans to choose from, ranging from small development-sized databases to large industrial-strength production databases. You can get a list of all of the elephantsql plans with the cf marketplace command like this:

$ cf marketplace -s elephantsql
Getting service plan information for service elephantsql as [email protected]...
OK

service plan   description         free or paid
turtle         Tiny Turtle         free
panda          Pretty Panda        paid
hippo          Happy Hippo         paid
elephant       Enormous Elephant   paid

As you can see, the more serious production-sized database plans require payment. You’re welcome to choose one of those plans if you want, but for now I’ll assume that you’re choosing the free “turtle” plan.

To create an instance of the database service, you can use the cf create-service command, specifying the service name, the plan name, and an instance name:

$ cf create-service elephantsql turtle readinglistdb
Creating service readinglistdb in org habuma / space development as [email protected]...
OK

Once the service has been created, we’ll need to bind it to our application with the cf bind-service command:

$ cf bind-service sbia-readinglist readinglistdb

Binding a service to an application merely provides the application with details on how to connect to the service within an environment variable named VCAP_SERVICES. It doesn’t change the application to actually use that service.
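To give a sense of what the application is handed, the bound service shows up in VCAP_SERVICES as a JSON entry along the following lines. The exact fields depend on the service, and the credentials shown are placeholders:

```json
{
  "elephantsql": [
    {
      "name": "readinglistdb",
      "label": "elephantsql",
      "plan": "turtle",
      "credentials": {
        "uri": "postgres://user:password@host.example.com:5432/dbname"
      }
    }
  ]
}
```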

We could rewrite the reading-list application to read VCAP_SERVICES and use the information it provides to access the database service, but that’s completely unnecessary. Instead, all we need to do is restage the application with the cf restage command:

$ cf restage sbia-readinglist

The cf restage command forces Cloud Foundry to redeploy the application and reevaluate the VCAP_SERVICES value. As it does, it will see that our application declares a DataSource bean in the Spring application context and will replace it with a DataSource that references the bound database service. As a consequence, our application will now use the PostgreSQL service provided by elephantsql rather than the embedded H2 database.

Go ahead and try it out now. Log into the application, add a few books to the reading list, and then restart the application. Your books should still be in your reading list after the restart. That’s because they were persisted to the bound database service rather than to an embedded H2 database. Once again, the Actuator’s /health endpoint will back up that claim:

{
  "status": "UP",
  "diskSpace": {
    "status": "UP",
    "free": 834331525120,
    "threshold": 10485760
  },
  "db": {
    "status": "UP",
    "database": "PostgreSQL",
    "hello": 1
  }
}

Cloud Foundry is a great PaaS for Spring Boot application deployment. Its association with the Spring projects affords some synergy between the two. But it’s not the only PaaS where Spring Boot applications can be deployed. Let’s see what needs to be done to deploy the reading-list application to Heroku, another popular PaaS platform.

8.3.2. Deploying to Heroku

Heroku takes a unique approach to application deployment. Rather than deploying a completely built deployment artifact, Heroku sets up a Git repository for your application and builds and deploys the application for you every time you push to that repository.

If you’ve not already done so, you’ll want to initialize the project directory as a Git repository:

$ git init

This will enable the Heroku command-line tool to add the remote Heroku Git repository to the project automatically.

Now it’s time to set up the application in Heroku using the Heroku command-line tool’s apps:create command:

$ heroku apps:create sbia-readinglist

Here I’ve asked Heroku to name the application “sbia-readinglist”. This name will be used as the name of the Git repository as well as the subdomain of the application at herokuapp.com. You’ll want to be sure to pick a unique name, as there can’t be more than one application with the same name. Alternatively, you can leave off the name and Heroku will generate a unique name for you (such as “fierce-river-8120” or “serene-anchorage-6223”).

The apps:create command creates a remote Git repository at https://git.heroku.com/sbia-readinglist.git and adds a remote reference to the repository named “heroku” in the local project’s Git configuration. That will enable us to push our project into Heroku using the git command.

The project has been set up in Heroku, but we’re not quite ready to push it yet. Heroku asks that you provide a file named Procfile that tells Heroku how to run the application after it has been built. For our reading-list application, we need to tell Heroku to run the WAR file produced by the build as an executable JAR file using the java command.[1] Assuming that the application will be built with Gradle, the following one-line Procfile is what we’ll need:

web: java -Dserver.port=$PORT -jar build/libs/readinglist.war

[1] The project we’re working with actually produces an executable WAR file, but as far as Heroku knows, it’s no different than an executable JAR file.

On the other hand, if you’re using Maven to build the project, then the path to the JAR file will be slightly different. Instead of referencing the executable WAR file in build/libs, Heroku will need to find it in the target directory, as shown in the following Procfile:

web: java -Dserver.port=$PORT -jar target/readinglist.war

In either case, you’ll also need to set the server.port property as shown so that the embedded Tomcat server starts up on the port that Heroku assigns to the application (provided by the $PORT variable).

We’re almost ready to push the application to Heroku, but there’s a small change required in the Gradle build specification. When Heroku tries to build our application, it will do so by executing a task named stage. Therefore, we’ll need to add a stage task to build.gradle:

task stage(dependsOn: ['build']) {
}

As you can see, this stage task doesn’t do much on its own. But because it depends on the build task, the build task will be triggered when Heroku invokes stage, and the resulting executable WAR file will be ready to run in the build/libs directory.

You may also need to inform Heroku of the Java version we’re building the application with so that it runs the application with the appropriate version of Java. The easiest way to do that is to create a file named system.properties at the root of the project that sets a java.runtime.version property:

java.runtime.version=1.7

Now we’re ready to push the project into Heroku. As I said before, this is just a matter of pushing the code into the remote Git repository that Heroku set up for us:

$ git commit -am "Initial commit"
$ git push heroku master

After the code is pushed into Heroku, Heroku will build it using either Maven or Gradle (depending on which kind of build file it finds) and then run it using the instructions in the Procfile. Once it’s ready, you should be able to try it out by pointing your browser at http://{app name}.herokuapp.com, where “{app name}” is the name given to the application when you used apps:create. For example, I named the application “sbia-readinglist” when I deployed it, so the application’s URL is http://sbia-readinglist.herokuapp.com.

Feel free to poke about in the application as much as you’d like. But then go take a look at the /health endpoint. The db.database property should tell you that the application is using the embedded H2 database. We should change that to use a PostgreSQL service instead.

We can create and bind to a PostgreSQL service using the Heroku command-line tool’s addons:add command like this:

$ heroku addons:add heroku-postgresql:hobby-dev

Here we’re asking for the addon service named heroku-postgresql, which is the PostgreSQL service offered by Heroku. We’re also asking for the hobby-dev plan for that service, which is the free plan.

Now the PostgreSQL service is created and bound to our application, and Heroku will automatically restart the application to apply the binding. Even so, if we were to look at the /health endpoint, we’d see that the application is still using the embedded H2 database. That’s because the auto-configuration for H2 is still in play, and nothing tells Spring Boot to use PostgreSQL instead.

One option is to set the spring.datasource.* properties like we did when deploying to an application server. The information we’d need can be found on the database service’s dashboard, which can be opened with the addons:open command:

$ heroku addons:open waking-carefully-3728

In this case, the name of the database instance is “waking-carefully-3728”. This command will open a dashboard page in your web browser that includes all of the necessary connection information, including the hostname, database name, and credentials—everything we’d need to set the spring.datasource.* properties.
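If you chose to set those properties by hand, the entries in application.properties would look something like this (the host, database name, and credentials are placeholders for the values shown on your dashboard):

```
spring.datasource.url=jdbc:postgresql://<host>:5432/<database>
spring.datasource.username=<username>
spring.datasource.password=<password>
```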

But there’s an easier way. Rather than look up that information for ourselves and set those properties, why can’t Spring look them up for us? In fact, that’s what the Spring Cloud Connectors project does. It works with both Cloud Foundry and Heroku to look up any services bound to an application and automatically configure the application to use those services.

We just need to add Spring Cloud Connectors as a dependency in the build. For a Gradle build, add the following to build.gradle:

compile("org.springframework.boot:spring-boot-starter-cloud-connectors")

If you’re using Maven, the following <dependency> will add Spring Cloud Connectors to the build:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-cloud-connectors</artifactId>
</dependency>
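With the starter on the classpath, Spring Cloud Connectors configures the cloud-backed DataSource automatically, so no application code needs to change. If you ever want explicit control over the bean, the connectors also offer a Java configuration API; the following is a minimal sketch (the class name is arbitrary, and the reading-list application doesn’t require it):

```java
import javax.sql.DataSource;

import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("cloud")  // only active when the "cloud" profile is enabled
public class CloudDataSourceConfig extends AbstractCloudConfig {

  // Looks up the single bound relational database service
  // and exposes it as the application's DataSource bean.
  @Bean
  public DataSource dataSource() {
    return connectionFactory().dataSource();
  }
}
```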

Spring Cloud Connectors will only work if the “cloud” profile is active. To activate the “cloud” profile in Heroku, use the config:set command:

$ heroku config:set SPRING_PROFILES_ACTIVE="cloud"

Now that the Spring Cloud Connectors dependency is in the build and the “cloud” profile is active, we’re ready to push the application again:

$ git commit -am "Add cloud connector"
$ git push heroku master

After the application starts up, sign in to the application and view the /health endpoint. It should indicate that the application is connected to a PostgreSQL database:

"db": {
  "status": "UP",
  "database": "PostgreSQL",
  "hello": 1
}

Now our application is deployed in the cloud, ready to take requests from the world!

8.4. Summary

There are several options for deploying Spring Boot applications, including traditional application servers and PaaS options in the cloud. In this chapter, we looked at a few of those options, deploying the reading-list application as a WAR file to Tomcat and in the cloud to both Cloud Foundry and Heroku.

Spring Boot applications are often given a build specification that produces an executable JAR file. But we’ve seen how to tweak the build and write a SpringBootServletInitializer implementation to produce a WAR file suitable for deployment to an application server.

We then took a first step toward deploying our application to Cloud Foundry. Cloud Foundry is flexible enough to accept Spring Boot applications in any form, including executable JAR files, traditional WAR files, or even raw Spring Boot CLI Groovy scripts. We also saw how Cloud Foundry is able to automatically swap out our embedded data source bean with one that references a database service bound to the application.

Finally we saw how although Heroku doesn’t offer automatic swapping of data source beans like Cloud Foundry, by adding the Spring Cloud Connectors library to our deployment we can achieve the same effect, enabling a bound database service instead of an embedded database.

Along the way, we also looked at how to enable database migration tools such as Flyway and Liquibase in Spring Boot. We used database migration to initialize our database on the first deployment and now are ready to evolve our database as needed on future deployments.
