Tests and Code Coverage

In the past fifteen years, we’ve seen tremendous growth in automated software testing. Elixir embraces this trend. Rather than give you a deep dive into any single tool, we’re going to walk you through a few important ones that will help ease your adoption.

ExUnit

Elixir ships with a unit testing framework called ExUnit.[17] Based on longstanding principles, it serves as the basic building block for almost all other Elixir testing frameworks.

The Elixir community expects applications and libraries to be well tested. We’ll not give you more than a brief overview here, but we will touch on some ExUnit basics:

  • Tests are scripts that mix discovers and runs based on their file names.

  • Each test runs through a flow of setup, test, and teardown.

  • After setup, a test executes some piece of application code and then makes one or more assertions about what should be true, as the sketch after this list shows.

  • If an assertion is not true or there’s an unplanned application exception, the test fails.
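
Putting those steps together, here’s a minimal sketch of an ExUnit test (the module name and data are illustrative; the code under test is just the standard library):

 defmodule MyApp.SumTest do
   use ExUnit.Case

   setup do
     # Runs before each test; the returned values are merged into the test context
     {:ok, numbers: [1, 2, 3]}
   end

   test "sums a list of numbers", %{numbers: numbers} do
     assert Enum.sum(numbers) == 6
   end
 end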

ExUnit has a strong focus on usability. Every time an assertion fails, you get detailed reporting on what went wrong. Recent Elixir versions even show colored diffs in those reports, making it trivial to spot errors.

Most of your interactions with the test suite happen through mix test.[18] Because it integrates with ExUnit tags, mix test gives you plenty of control over what runs on each invocation. For example, if you have tests that need to talk to external services, you may want to hide them behind an external tag and run them only when necessary with mix test --only external. We recommend checking out the other flags available to the mix test command. We use flags such as --stale and --cover on a daily basis.
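
Here’s a sketch of how such a tag might be wired up (the test body is illustrative):

 # test/test_helper.exs — exclude externally tagged tests by default
 ExUnit.start(exclude: [:external])

 defmodule MyApp.PaymentGatewayTest do
   use ExUnit.Case

   # Included again with: mix test --only external
   @tag :external
   test "charges a card against the live gateway" do
     # ... talk to the real service here
     assert true
   end
 end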

The testing philosophy is equally important to the tooling. Elixir developers put a strong emphasis on concurrent tests. ExUnit lets you run a group of tests concurrently by simply passing the async: true option when defining your test cases. Frameworks such as Phoenix build on those capabilities, allowing you to run tests concurrently even when your application needs to talk to the database.[19]
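
For example, marking a case with async: true is all it takes (the test itself exercises the standard library, so the sketch is self-contained):

 defmodule MyApp.StringTest do
   # Runs concurrently with every other case that also sets async: true
   use ExUnit.Case, async: true

   test "splitting an empty string" do
     assert String.split("", ",") == [""]
   end
 end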

Avoid Mocking Libraries

Another important testing philosophy is that the Elixir community prefers to avoid mocking libraries that dynamically change the code under test.[20] For example, if you need to communicate with external services, tools such as Bypass[21] let you run a web API on the same VM as your tests. Your tests control this API by composing the external responses it returns. This way your tests fully exercise the code that integrates with the third-party service, from your business logic to the HTTP client. Bypass has been invaluable for testing the integration with external systems at Bleacher Report and icanmakeitbetter.
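
A sketch of that workflow, assuming a hypothetical MyApp.Weather.fetch/1 function that calls the external API:

 defmodule MyApp.WeatherTest do
   use ExUnit.Case, async: true

   setup do
     # Bypass starts a real HTTP server on a random port of this VM
     {:ok, bypass: Bypass.open()}
   end

   test "fetches the forecast", %{bypass: bypass} do
     # Compose the response the fake external service should return
     Bypass.expect_once(bypass, "GET", "/forecast", fn conn ->
       Plug.Conn.resp(conn, 200, ~s({"temp": 21}))
     end)

     # Hypothetical client function; the full HTTP stack is exercised
     url = "http://localhost:#{bypass.port}/forecast"
     assert {:ok, %{"temp" => 21}} = MyApp.Weather.fetch(url)
   end
 end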

On the other hand, if you really need to define a mock in your application, you can use Mox.[22] Mox enforces explicit contracts (behaviours) in your code while still allowing tests to run concurrently.
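
A minimal sketch of that approach, assuming a hypothetical MyApp.HTTPClient behaviour as the contract:

 # The explicit contract the mock must implement
 defmodule MyApp.HTTPClient do
   @callback get(String.t()) :: {:ok, map()} | {:error, term()}
 end

 # In test/test_helper.exs
 Mox.defmock(MyApp.HTTPClientMock, for: MyApp.HTTPClient)

 # In a test case
 defmodule MyApp.SyncTest do
   use ExUnit.Case, async: true
   import Mox

   # Fail the test if an expectation below is never satisfied
   setup :verify_on_exit!

   test "handles a successful response" do
     expect(MyApp.HTTPClientMock, :get, fn _url -> {:ok, %{"status" => "ok"}} end)

     assert {:ok, %{"status" => "ok"}} = MyApp.HTTPClientMock.get("https://example.com")
   end
 end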

As you get into Elixir, those practices and philosophies will become clearer through the documentation and the tooling. If you are using individual frameworks such as Nerves and Phoenix, those ecosystems will help point you in the right direction as well.

With that basic introduction out of the way, let’s move on to other useful testing tools. In the next section, we’ll show you how to capture a basic metric for test health: coverage.

Measuring Test Coverage with Excoveralls

In this chapter, we’ve focused not just on automation tools but also on measurement tools. One such measurement is test coverage. Code coverage doesn’t measure the quality of your tests; it measures how much of your system your tests execute. As your team adopts Elixir, it’s easy for code to creep into the codebase without tests. With a coverage tool, you can objectively measure how much of your code is exercised by at least one test. Just as importantly, you can use it to see whether any individual line of code in the system has supporting tests.

Test coverage is a good rubric by which to measure the overall health and stability of an application. With high test coverage, we can be confident when we refactor or add new features, and it helps eliminate regressions and other bugs. Ultimately, high coverage with meaningful tests is a testament to the code doing what it says it does.

Excoveralls[23] is a library that measures test coverage, sending a report to the command line, to HTML, or to external services. For our purposes, let’s focus on the command-line and HTML output options.

To use Excoveralls, add excoveralls to the deps function in mix.exs:

 defp deps do
   [
     {:credo, "~> 0.8.8", only: [:dev], runtime: false},
     {:dialyxir, "~> 0.5", only: [:dev], runtime: false},
     {:excoveralls, "~> 0.7.4", only: [:test], runtime: false},
     {:ex_doc, "~> 0.18", only: [:dev], runtime: false},
     {:inch_ex, "~> 0.5", only: [:dev, :test], runtime: false}
   ]
 end

You also need to add the test_coverage tuple to the project function. The test_coverage option configures the tool, and the options for that tool, used to measure coverage of your application. The default is a wrapper around cover,[24] which ships as part of Erlang/OTP.

Now, open up mix.exs and fill out preferred_cli_env to use with coveralls. preferred_cli_env sets the Mix environment in which a given command-line task runs:

 def project do
   [
     app: :belief_structure,
     # ... other project settings omitted
     preferred_cli_env: [
       coveralls: :test,
       "coveralls.detail": :test,
       "coveralls.post": :test,
       "coveralls.html": :test
     ],
     test_coverage: [tool: ExCoveralls]
   ]
 end

Now just run mix coveralls:

 ----------------
 COV     FILE                                LINES RELEVANT   MISSED
  66.7%  lib/belief_structure.ex                18        3        1
   0.0%  lib/belief_structure/hexify.ex         11        2        2
 [TOTAL]  40.0%
 ----------------

If you run mix coveralls.detail, you’ll get command-line output for each file with the covered lines highlighted in green. mix coveralls.html produces the same per-line detail as an HTML report, written to cover/excoveralls.html.

These detailed reports can help you raise test coverage or, from a reviewer’s point of view, make it easy to see how the code-to-be-committed fits in with the rest of the application and how well it’s tested.

As with InchEx, you can decide how much coverage you want to maintain. The icanmakeitbetter team maintains 100% coverage, except for ignored files that work on external interfaces. At Bleacher Report, the team does not require full coverage, but does measure it, choosing to invest in code quality in other ways. You’ll need to figure out what makes sense for your team and stick with the approach that works best for you.

Bureaucrat

Many of the tools that ship with Elixir, such as ExUnit and the documentation tooling, focus on modules and functions. ExUnit is a great tool for unit testing. ExDoc is excellent for generating documentation from your modules and functions, with guides covering the remaining functionality.

However, as developers tackle particular domains, such as the domain of web applications with Phoenix, the need for more specific tools arises. So before finishing the testing section, we are going to cover two tools that are specific to web applications, exploring them in the context of Phoenix. If you are using Elixir for other domains, such as embedded software or data processing, it is likely those domains include their own abstractions, which provide similar benefits.

To get started, let’s take a look at a Phoenix controller test:

 test "GET /posts/:id", %{conn: conn} do
   response =
     conn
     |> get("/posts/post-name")
     |> json_response(200)

   assert response["id"] == "post-name"
 end
It’s a standard controller test. Every framework has its quirks, and Phoenix is no different. You receive the %{conn: conn} map from the test context; the ConnCase setup builds it with Phoenix.ConnTest.build_conn(), which sets up a test connection.

Recall that conn is the data that Phoenix needs to describe the whole life of a connection, from the initial attributes of the URL to intermediate data in an application and eventually to the response and status code. Since json_response/2 asserts on the status and returns the decoded response body, you can assert or refute anything related to the request and response cycle. That makes integration tests easy to write and explain to new Elixir developers.
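
Because conn carries that state, you can also assert directly on pieces of it (a sketch against the same hypothetical route as above):

 conn = get(conn, "/posts/post-name")

 assert conn.status == 200
 assert hd(get_resp_header(conn, "content-type")) =~ "application/json"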

But documenting all of the endpoints and attributes available in our APIs is a constant struggle. Those of us who have coded for more than a couple of decades remember excessive comments. Our teachers and mentors would request acres of comments at the top of each method. Over time, many have come to understand that comments can get out of sync with the codebase.

A similar issue occurs with API documentation. The problem is that no matter how vigilant one is in maintaining it, inconsistencies emerge. For someone who writes loads of API docs, it’s a time-consuming process, and errors undermine confidence.

Enter Bureaucrat,[25] a tool that attempts to solve this discrepancy problem. Bureaucrat is a library that generates API documentation from tests. If you have good API tests, then the API docs are always in sync.

Let’s try it out. By now, the steps for integrating a new tool should seem familiar: add bureaucrat to the deps section in mix.exs and then run mix deps.get.
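
A dependency entry along these lines should do (the version shown is illustrative; check Hex for the current release):

 {:bureaucrat, "~> 0.2", only: :test}

You’ll also need to update test/test_helper.exs and modify it like so: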

 Bureaucrat.start
 ExUnit.start(formatters: [ExUnit.CLIFormatter, Bureaucrat.Formatter])

All this does is start Bureaucrat when you run your tests and add the Bureaucrat.Formatter module to the list of formatters ExUnit runs. Additionally, you need to modify test/support/conn_case.ex:

 defmodule MyApp.ConnCase do
   use ExUnit.CaseTemplate

   using do
     quote do
       # ... all of the other Phoenix imports omitted
       import Bureaucrat.Helpers
     end
   end
 end

And that’s it. All that remains is to tell Bureaucrat which tests it should document. Bureaucrat makes it spectacularly easy to generate documentation from tests:

 test "creates and renders resource when data is valid", %{conn: conn} do
   conn =
     conn
     |> post("/ratings", rating: @valid_attrs)
     |> doc

   assert json_response(conn, 201)["data"]["id"]
   assert Repo.get_by(Rating, @valid_attrs)
 end

Then run DOC=1 mix test, and it generates your documentation, which should look something like this:

[Image: Bureaucrat-generated API documentation (images/ensuring_code_consistency/bureaucrat_doc.png)]

By default, Bureaucrat outputs documentation to web/controllers/README.md, but you can also write all documentation to a custom path like this:

 Bureaucrat.start(
   writer: Bureaucrat.MarkdownWriter,
   default_path: "doc/APIDOCS.md",
   paths: [],
   env_var: "EXPORT"
 )

Creating accurate API docs on the fly is invaluable: as long as there is sufficient test coverage, the relevant documentation is always in sync.
