Chapter 8. Design-driven development

This chapter covers

  • Different development strategies
  • Road-testing your API
  • Planning development
  • Development sprints

With a schema model and a good sense of the use cases you want to support, you’re finally ready to go through the process of developing the API. The development of the API requires that you continue to leverage the work done in the previous chapters so that the API you create emerges as a solid product.

After reading this chapter, you’ll have an entire system for the development of your API, with several checkpoints throughout to help you make sure you’re still on track to building the API you’ve envisioned.

8.1. Development strategies for your API

Several software development strategies exist in the wild today, each offering a different mind-set for making your software successful. The same is true for API development strategies. In fact, because APIs need to be flexible and adapt to the needs of customers, it’s even more important to pick the right kind of development strategy for your API. Long-term API development projects without checkpoints are likely to drift from the needs of the customers rather than react in a meaningful way to new use cases or requirements.

Note that these approaches aren’t identical in scope. They’re presented here less as hard-and-fast rule sets, and more as a buffet of options, where you can pick the pieces that make the most sense for your needs. A blended model is often a great choice, as long as you identify the outcomes that you’re most concerned with.

8.1.1. Waterfall development

Before the emergence of the newer models, software development was generally done by writing a complete functional specification (usually focused on functionality rather than usability), and each piece was built—hopefully with some unit tests, followed by human testing of the specified functionality. When the entire monolith was ready to be deployed, it was released to the world. This process is known in the industry as the waterfall model (figure 8.1). With the waterfall method, any given project or product can take months or even years to see the light of day. This type of project management doesn’t focus on iterative improvement or checkpoints or dividing the work into functional chunks. Few opportunities arise for asking whether the project is on the right track or for adjusting the vision during the process.

Figure 8.1. In the waterfall development methodology, each step in the process is completely finished before the next section begins. The entire cycle from start to finish can take months, or even years, without any built-in review or reflection in the system. This doesn’t mean that waterfall development never has this kind of adjustment, but it isn’t inherent in the system.

There are a few downsides to this approach. The tests are written after the code, which makes them much more likely to miss edge cases or to validate functionality that isn’t right. Writing a test after the code generally turns into an examination of that code: figuring out what its hooks are and making sure it behaves exactly as it already does. This is okay for future regression testing (did I break something that used to work?) but not good for identifying places where the engineer hasn’t implemented the functionality properly. The lack of intermediate customer checkpoints to validate the implementation of specific modules or features almost invariably leads to required changes at the end of the road. Today, several other development methodologies address the downsides of this historical approach.

8.1.2. Agile/test-first development

The first leap forward in development strategies was the introduction of test-first development, which gained popularity in the late 1990s and early 2000s alongside the emergence of agile as a new overall product methodology (see figure 8.2).

Figure 8.2. The agile scrum board looks fairly arbitrary and temporary by design. A sticky note or small pinned paper can easily be moved around, from the Stories waiting for attention, into the To Do column, through the additional columns, to Done. There’s nothing keeping a task from moving backward when the situation demands it, from Testing back to In Progress, because something didn’t work correctly, or from To Do back to Stories when other priorities emerge. This agility is at the base of the “agile” project management method.

One common method of implementing agile development integrates scrum and kanban:

  • The term scrum refers to the agile methods of sprints, including planning, task assignment, daily standups, and review/retrospective.
  • Kanban is a complementary idea which, in an agile context, describes how to place tasks on a board (frequently with sticky notes) and move them through various stages of completeness: Backlog, To Do, In Progress, Verification, and Done.
  • At the beginning of any development, the team and customers create user stories, which are essentially use cases. A fully fleshed-out story would be “As a user, I want to be able to list my contacts, so I can find my friends,” and would ideally contain hard requirements for what Done means. These requirements are termed acceptance criteria.
  • As a scrum practice, the team holds daily standups, designed to be 15 minutes or less, to facilitate collaboration throughout the team.
  • Using scrum, development is done in sprints. A sprint is a relatively short period of time—sometimes a week or two, but no more than a month. When kanban is implemented, the team uses a kanban board with tasks, stories, and swim lanes, designed to help visualize the productivity of the team (figure 8.3).

Figure 8.3. When tests are written before the code is written, the expected behavior is expressed before an implementation can emerge, and the resulting code is much more likely to behave as expected. Tests are a great way for a team to communicate what they expect the functions to do within the system. For each test, only enough code is written to make the test pass, and then the entire test suite is run to make sure that everything still works as expected. This is repeated until all the tests succeed and the programming work for the project is done.

If you’re interested in the agile methodology, you can find many books and websites devoted to the topic, including http://agilemethodology.org, which points to various other resources. The concepts of agile are relatively easy to understand at a high level, but they can be challenging to implement. I recommend you do some research if you’re interested in the topic, so you can hit the ground running with the process with minimal hiccups.

8.1.3. Behavior-driven development

Behavior-driven development (BDD) is an extension of test-driven development (TDD). In BDD (figure 8.4), additional software tests represent the acceptance criteria, so the developer now has two sets of tests to work against, and the overall use case is much clearer for any given subset of the code. Each acceptance test exercises a workflow that’s derived from the story itself.

Figure 8.4. The BDD cycle incorporates TDD into a larger cycle of behavior-driven, or use case–driven, tests. These integration tests express what the system as a whole needs to do, which helps keep holes from developing between individual units of the product.

With BDD, the overarching focus is on making the acceptance criteria real and helping developers to keep track of the main goal for the software in question.
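
To make the two layers concrete, here’s a minimal pytest-style sketch with an inner unit test that pins down a single function and an outer behavior test that walks a whole user story. The ToppingStore class is hypothetical, invented purely for illustration; in a real project the behavior test would drive the API the way a user would.

class ToppingStore:
    """Hypothetical in-memory store standing in for the real back end."""

    def __init__(self):
        self._toppings = []

    def add(self, name):
        self._toppings.append(name)

    def list(self):
        return list(self._toppings)


def test_add_stores_a_topping():
    # Unit test (TDD layer): a single function behaves as specified.
    store = ToppingStore()
    store.add("pepperoni")
    assert store.list() == ["pepperoni"]


def test_user_can_list_their_toppings():
    # Behavior test (BDD layer): the story "As a user, I want to list my
    # toppings" works end to end, across several operations.
    store = ToppingStore()
    for name in ("pepperoni", "mushrooms", "onions"):
        store.add(name)
    assert set(store.list()) == {"pepperoni", "mushrooms", "onions"}

In a real BDD setup, the behavior test would come straight from the story’s acceptance criteria, and the unit tests would accumulate underneath it as the implementation grows.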

8.1.4. Design-driven development

Design-driven development (DDD) adds a design layer on top of the test-first development and BDD stack, with the goal of designing consistent interfaces, whether they’re consumed by other parts of the software or directly by users. The design layer is covered by the process outlined in this book. After the use cases are defined, the schema model (in the case of a web API) is outlined, which makes it easy to create test cases based on the design goals. In fact, many open source testing frameworks can import a schema model and create appropriate tests for the software.

To learn more about these testing frameworks, take a look at the following post by Kin Lane, “the API Evangelist,” covering automatic client code generators. The post can be found at http://apievangelist.com/2015/06/06/comparison-of-automaticcode-generation-tools-for-swagger/. Here’s the list of client code generators for OpenAPI:

  • Swagger.io is the official host for the open source Swagger CodeGen project.
  • REST United uses a customized version of the Swagger CodeGen project and performs better than the official branch.
  • Restlet Studio uses Swagger CodeGen for Objective-C but has its own CodeGen engine for Android and Java.
  • APIMATIC has its own CodeGen engine for all languages.

A client designed to hit each of the endpoints as specified in the schema can perform valuable tests to make sure that your design is being implemented exactly as described in the schema model. Unfortunately, the current client code generators still need some work in creating foolproof code that compiles every time. When you’re using the generators, regardless of the language, it’s best to expect to spend a small amount of time getting the generated code ready for prime time.
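
If you want a feel for the idea without a full generator, the following sketch walks an OpenAPI 2.0 document and issues a GET request against each non-templated path that declares one, checking only for a successful status code. The schema file name and base URL are placeholders; point them at your own schema model and mock or live server.

import json

import requests

# Placeholder locations; substitute your own schema file and server URL.
SCHEMA_FILE = "swagger.json"
BASE_URL = "http://localhost:8080"

def smoke_test_get_endpoints(schema_file=SCHEMA_FILE, base_url=BASE_URL):
    """Issue a GET against every simple path in the schema that declares one."""
    with open(schema_file) as handle:
        schema = json.load(handle)

    base_path = schema.get("basePath", "")
    for path, operations in schema.get("paths", {}).items():
        if "get" not in operations:
            continue  # this sketch only exercises GET operations
        if "{" in path:
            continue  # templated paths need real IDs; skip them here
        response = requests.get(base_url + base_path + path)
        print(path, response.status_code)
        assert response.ok, "GET failed for " + path

if __name__ == "__main__":
    smoke_test_get_endpoints()

Even a check this shallow catches endpoints that were renamed or forgotten between the schema model and the implementation.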

Using these code generators is one advantage of design-driven development. The overarching goal of this type of project management is to make sure that the design of your API happens first and is the driving vision used to guide development. Like test-first development and behavior-driven development, design-driven development carries a mind-set where the code is created to match a specific, defined specification—in this case, the schema model.

8.1.5. Code-first development

For completeness, I’m going to give you an idea of what code-first development looks like. Many existing APIs were created this way: developers are told to create an API or a set of endpoints, and they add code on top of the back-end system to meet that deliverable.

The problem with this approach should be obvious—because the API isn’t considered to be a complete, first-class citizen in the product ecosystem for your organization, it’s developed backward. No specification is needed to create an API, and the technical challenge is minimal. But APIs created this way tend to be inconsistent with the main product and across their own endpoints, and frequently the code needs to be rewritten later to meet the goals of the customers. Worse yet, a poorly planned API rollout leaves customers using an API that can’t easily be deprecated in the future.

A web API isn’t a subproduct of the main product; adding new functionality, endpoints, and features requires the same planning that teams commit to any other software project.

8.1.6. Why does project management matter?

Project management is a critical aspect of the success of any software engineering endeavor—indeed, of any product, whether software or something more tangible. In 2009, a software research firm called the Standish Group, found at www.standishgroup.com/news/index, did a study on software project outcomes in the United States. This was before the upsurge in popularity of agile development models. In this study, 24% of the projects failed outright, and 44% were challenged, either falling behind or running into unexpected problems. Only 32% of the projects succeeded. Although the study didn’t map outcomes to specific project methodologies, these numbers expose the issues the industry was having with project management at the time. The waterfall method created too many opportunities for failure, and without regular iterations and check-ins, those failures could snowball into something untenable before the issue was discovered.

When developing APIs, you must be able to meet new requirements, work with your clients during development, and verify that development is creating the right APIs. When you’re creating a system for which you own the entire stack, some assumptions can be made about what the system will do. But a web API is an interface to your system for different organizations to use, not merely your own, and as such it’s subject to many different opinions and requirements. For this reason, the project management methodology you choose is critically important.

8.2. Project management for APIs

To achieve the best possible result with the process outlined in this book, you’ll find a design-driven methodology makes the most sense. Using agile methodologies where they work for you is a fantastic idea, but it’s not required. In the design-driven methodology (figure 8.5), the steps are in a slightly different order than with the waterfall method.

Figure 8.5. Ideally, API project management includes the development of a functional specification in parallel with the schema model. Only once these two documents have been completed should the acceptance criteria be written and unit tests developed. After all of this infrastructure is in place, the development iterations can begin. Although this seems like a huge amount of work up front, it reduces the amount of development time significantly—and more importantly, it reduces the chances of incorrect work that would need to be redone.

First, you create your functional specification document. In parallel, or shortly after, the schema model is created along with the use cases. Before development starts, you create acceptance criteria for developers to work against, along with the unit tests. Only then do you begin development. Instead of developing the entire system at once, you can parallelize the work, with different engineers tackling different use cases, and deploy sections of the API as they’re ready. In this section, I break out each of these actions so you can see how they flow together into a strong deployment process.

8.2.1. Functional specification

We haven’t discussed the functional specification explicitly yet, but your organization likely has a functional specification standard for software projects. If you don’t, your product managers need to create a document that, at the least, answers the following questions:

  1. What problem is the project solving?
  2. What is the business value?
  3. What are the metrics and use cases?
  4. What resources are needed or available?
  5. What does “done” look like?
  6. What could go wrong?

Obviously, a more complete functional specification will be a more powerful design and vision document, but even a basic document covering these points will help the developers and other stakeholders understand the goals of the project. This may seem obvious, but so many people have created their APIs as side coding projects rather than products in their own right that this part of the process—the functional specification—is frequently skipped entirely.

Most of these questions probably look similar to concepts described previously in the book, so I won’t belabor them beyond pointing out that this is the right place and time to answer them. I’ll cover the remaining questions in more detail so they make sense in context.

First, what problem is the project solving? In plain English, describe exactly what’s happening that needs to be addressed. Keeping this information with the functional specification helps to avoid the “drift” that tends to happen once the project is under way and folks start focusing on implementation details, and it ensures that every time the functional specification is referenced, the overriding goal of the project is front and center.

What does “done” look like? This question is quite important: What things will be true when the work is done? In development projects far and wide, there are situations where developers don’t understand what’s required in order to be finished, and features creep in during development. This is bad for the productivity of your development resources, and it also makes it difficult to follow strong coding practices while creating the product. If there’s a moving target, people are more likely to patch in changes rather than build the feature holistically from the beginning. Make sure that the acceptance criteria—the things that must be true for the product to be complete—are described completely and correctly, and that all stakeholders on both sides (product team and customers) agree about what those criteria are.

What could go wrong? This is one concept that’s frequently missed in functional specifications, but one that I strongly suggest you adopt. You won’t be able to identify every possible challenge that the project may encounter, but if you’re relying on another team for some functionality, then you have a requirement that’s outside your control. Identifying this in the functional specification helps in two ways: it shines a light on the requirement, and it gives you the ability to include that team in communication about the needed functionality, when you need it, and why it’s important to your project.

Every time you’re planning a project, it’s tempting to bound the specification as closely as possible to the ideal path, without taking the time to consider what issues could delay or sidetrack the project. The real world is messier than this. As you find new challenges, or meet new goals—or as your customers add new features to the requirements, as they’ll almost certainly do—update that functional specification so that it remains the one true narrative description of what the finished project will do.

Be cautious about functional specifications that focus too heavily on architecture. The architecture of an API is less interesting than the developer experience expressed by the interaction model, resource schema, and workflows that the project makes possible.

8.2.2. Schema model

I’ve talked a great deal about how to create a schema model. To sum up the information from chapter 7, a schema model helps your team ensure that the API is consistent across all the endpoints and can be used as an artifact or design document to spark discussion with internal and external team members before the coding is done. A schema model, like a functional specification, should be kept up to date if changes occur to the implementation, requirements, or acceptance tests.

Which format you choose for your schema modeling is up to you. All the modeling languages I’ve covered, along with any new ones that emerge, will work for any API. The schema is for your benefit, so whichever language works best for you is the right one to use.

Once you’ve created both the schema model and the functional specification, it should be quite easy for anyone reviewing those documents to understand the purpose, goals, and plan for the project itself.

8.3. Road-testing your API

Before you kick off development, it’s a great idea to create a mock server using the schema model to help you validate your plan with your customers, whether internal or external. Whether you review the server with your customers before you start coding or in parallel with the development activity, having those conversations to validate your model and functional specification can save you a lot of time and resources by helping you avoid direction changes closer to completion. Depending on the development methodology you choose, you may have checkpoints throughout the development cycle where the direction can be tweaked or the vision revisited. Don’t miss out on this valuable chance to gain insight into the validity of your API plan. If changes need to occur, they should be prioritized alongside the other requirements for the project, and the customer can then determine whether the change is needed in the context of the project as a whole.

8.3.1. Creating a mock server API

Creating a mock server is easy with any of the schema modeling languages. Each of the companies that owns or maintains a particular schema modeling system provides open source tools designed to help create a mock server. If your enterprise IT department doesn’t support external servers, you can always use one of the lightweight cloud hosting providers, such as Heroku or DigitalOcean, to bring up a mock server that your customers can access from outside of your network. After discussing the basic principles of creating and working with mock servers, I’ll add an exercise for folks who want to set up a mock server visible in the cloud.

The easiest place to observe the difference between a live API and a mock server is on the main irresistibleapis.com website. The “live” version of the demo is the interface you worked with in chapter 2. It’s a basic API that doesn’t have the pizza functionality, only toppings.

The link to that original view is at http://irresistibleapis.com/demo. The API is running right underneath the functionality, and you can still add new toppings, remove them, and rename them, all using the fundamental API I described first.

OpenAPI makes it relatively easy to create a mock server. I’ve implemented it on the same server at http://irresistibleapis.com:8080/docs. This is the documentation section of the mock server. As you can see, it appears that all the endpoints are working correctly, and they are, insofar as you’re expecting the canned response from the mock server. Before I give you the tools you need to get this mock server running on your own, I’ll show you how to tell the difference between the two. Figure 8.6 shows the /toppings endpoint within the extra context of Chrome Developer Tools.

Figure 8.6. The original toppings from chapter 2. As you probably recall, the page itself is making a call back to the /toppings endpoint and processing the returned JSON to create this user interface. To see this in practice, use the Chrome browser and watch the network traffic to see how the JSON relates to the resulting HTML page.

As you work with the page demonstrated here, the changes you make are reflected in the system. This is a live API, and changes to the data persist. The following is a direct HTTP call to the toppings API:

http://irresistibleapis.com/api/v1.0/toppings
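
If you’d rather exercise that endpoint from code than from a browser, a minimal Python client does the job. The sketch below only prints whatever JSON comes back, because the exact shape of the response is defined by the chapter 2 API rather than assumed here.

import requests

# Live toppings endpoint from the chapter 2 demo API
TOPPINGS_URL = "http://irresistibleapis.com/api/v1.0/toppings"

response = requests.get(TOPPINGS_URL)
response.raise_for_status()

# This is the live server, so a POST here would really add a topping.
print(response.json())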

If you work with the live server interface—the “All Toppings Active API” page in figure 8.7—changes will happen to the toppings as you create them, rename them, and delete them.

Figure 8.7. This live server response represents what you’ll see in the Chrome Network tab when the initial user interface page shown in figure 8.6 is generated. The CSS and HTML pages are used to create the single-page application based on the back-end API response.

With a mock server you get a similar API interface to the system, but your interactions with it won’t change values within the system. The mock server is designed to help you see how the endpoints will work, so that you can create sample clients to test against the eventual live API. Figure 8.8 shows how this mock server behaves, similarly to a live call to the API itself.

Figure 8.8. This looks exactly the same as the live response, but it’s the OpenAPI mock server responding with the information from the schema model. Responses from a mock server won’t change (so if you delete a topping it will still appear). Nonetheless, these servers are an excellent way to create a prototype or demo application to test and make sure the schema will support your use cases.

A mock server is a service that contains static data and provides endpoints for you to work with. A mock service can be harder to grasp than a simple API, but having one helps a great deal when you’re validating the model you’ve created. Using OpenAPI gives you a couple of ways to explore the mock API. First, you can go to the /api-docs endpoint, which shows the JSON version of the schema model (see figure 8.9).

Figure 8.9. In this case, the mock server is running on port 8080 on the localhost, as you can see from the host entry. The schema model here is the same one created in chapter 7 and should in fact meet all the use cases identified in that chapter.
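
If you’d like a feel for what such a server does before wiring up the OpenAPI tooling, a hand-rolled equivalent takes only a few lines of Flask. This is an illustration of the concept with made-up example data, not the generated mock server described here.

from flask import Flask, jsonify

app = Flask(__name__)

# Static example data, standing in for the examples embedded in a schema model.
CANNED_TOPPINGS = [{"id": 1, "name": "pepperoni"}, {"id": 2, "name": "mushrooms"}]

@app.route("/v1/toppings/", methods=["GET", "POST", "DELETE"])
def toppings():
    # Every request gets the same canned response; writes change nothing.
    return jsonify(CANNED_TOPPINGS)

if __name__ == "__main__":
    app.run(port=8080)

Whatever you POST or DELETE, the GET response stays the same, which is exactly the property you want when you’re validating a design rather than an implementation.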

Besides this simple view into the schema model, there’s an excellent way to browse through the mock API. On the same server you’ll find http://irresistibleapis.com:8080/docs, an interactive interface for exploring the API endpoints represented on the system.

Although discussing the API with your customers before bringing the development resources online (or while they’re working) is a great idea, a mock server gives you another fantastic advantage. Armed with your use cases, you can create all the workflows needed to support those stories and write client code that implements each one, ensuring that the use cases you started with will be easy and straightforward to satisfy (see figure 8.10).

Figure 8.10. When running a mock server using the OpenAPI tools, you also get an interactive console to explore the endpoints and see how they work. This functionality is also available for the live server when it’s created. When you’re sharing the schema model with other developers or customers, sometimes it’s helpful to give them a visual way to explore the platform.

As a quick exercise, go ahead and click GET for the /toppings/ resource. Note what the values are; they’ll always be the same in this mock server. Figure 8.11 shows exactly what the mock server will return.

Figure 8.11. This call to http://irresistibleapis.com:8080/v1/toppings/ is accessing the mock server and is returning exactly what the schema model indicates. Again, if you make a call to add a new topping, update a topping, or delete one, this call won’t change, because there’s no live service on the back end but only a static model.

You can see this exact call in a much nicer format at http://irresistibleapis.com:8080/docs/#!/Default/toppingsGet. Figure 8.12 demonstrates this view.

Figure 8.12. Along with the interactive console for exploring the APIs, there’s also interactive documentation created by the OpenAPI system. This is the same documentation you’ll be able to provide to your users when the API itself is live—which is one of the reasons to make sure that the API itself stays in sync with the schema model.

This interactive documentation browser allows you to see quite clearly how the HTTP interactions work with the system. Although write interactions won’t change values in the system, understanding how the requests and responses work is quite powerful. Moving down the page to the POST for toppings, you can get a better idea of what I mean about the mock server. POSTing a new topping or DELETE-ing a topping will have no effect on the values the GET responses provide.
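
You can demonstrate that behavior from code as well as from the console. The following script snapshots the canned toppings list, attempts a write, and then confirms the list is unchanged; the URLs are the mock server paths shown above, and the topping payload is only a guess at your schema, so adjust it to match your model.

import requests

MOCK_BASE = "http://irresistibleapis.com:8080/v1"

# Snapshot the canned toppings list served by the mock server.
before = requests.get(MOCK_BASE + "/toppings/").json()

# Attempt a write with a hypothetical topping payload.
requests.post(MOCK_BASE + "/toppings/", json={"name": "pineapple"})

# The mock server only serves static schema examples, so nothing changed.
after = requests.get(MOCK_BASE + "/toppings/").json()
print("Responses identical:", before == after)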

Advanced

The previous section demonstrates the differences between a live API server and a mocked-up one. But I’m going to take a little time here to walk through the process of getting your own mock server visible from the public cloud, so you can use it to work with your customers and hone your eventual API to perfection.

Hosting provider: Heroku

I’ve chosen Heroku as the hosting provider for this exercise because interacting with the system is quite simple, and you get some bonus adorable haiku names for your services. The code you need to work with is already in the code you pulled down before, but I’ll start the process anew to make sure everyone has the same experience (see figure 8.13).

  1. Get a Heroku login. The most basic level of use, which is what you’ll be using, is free.
  2. Install the Heroku toolbelt from https://toolbelt.heroku.com.
  3. At the command line, log in with heroku login, and then run the following:
    git clone http://github.com/synedra/swagger_irresistible_mock
    cd swagger_irresistible_mock
    heroku create
    git push heroku master

Figure 8.13. The process for pushing your model and server to Heroku is relatively straightforward, and it gets easier after the first time. All you’ll need to do is edit the files and push them back out to Heroku. In this case, the server infinite-basin-6068.herokuapp.com was created, and now the documentation and mock server are available on the internet to share with other people.

The Procfile at the base of the repository clone tells the Heroku server what to do when the code is pushed up:

web: node index.js

At this point, you’ll have a new service, available from anywhere, that you can use to demonstrate the API via the mocked-up version. Whether you run it locally or in the cloud, it will be an excellent tool as you move forward with development. Heroku provides you with a unique haiku server name for your deployment, and the command-line process is fairly simple.

In this example, if you want to view the interactive documentation, do so by accessing https://infinite-basin-6068.herokuapp.com/docs/.

8.3.2. Acceptance tests and use cases

When discussing testing, many developers think of unit tests—tests that determine whether a specific module or code set is working as written. Unit tests are important to ensure that the cogs in the wheel are working correctly, but they don’t cover all the testing that you need to do for your API. Acceptance criteria are critical for verifying that the use cases are as easy to accomplish as you designed them to be and that development isn’t getting off track.

I mentioned acceptance criteria earlier, and now I’ll walk through the process of creating acceptance criteria, what the goals are, and how to implement them. I’ll use the standard agile methodology in building up these tests.

A user story in agile methodology is a description of one of the things that needs to be enabled by the project. These stories are generally created to follow this template:

  • As a <type of user>
  • . . . I want to be able to <perform an action>
  • . . . so that I can <create an outcome>.

Figure 8.14 shows an example of a user story for the pizza toppings API.

Figure 8.14. The standard agile storyboard format is easier to understand with an example. It can often feel forced, but having all three sections of the story makes it much easier to ensure that the requirement is well defined, and there’s a shared understanding of what needs to happen in order for the story to be done.

Once the user story is created, you can turn it into a testing scenario—the second step on the way to creating a good code test to make sure you’ve got the right behavior. The difference between the user story and the testing scenario is that the latter will describe specific actions and their outcomes, to solidify exactly what’s needed to validate that the code covers the user story.

In this case, there are a couple of pieces of information you’ll want to include. First, a new topping needs to be added to the system, to make it available to be subsequently added to a pizza. The steps are listed, including the action taken by the user and the subsequent test to make sure that it happened correctly.

Scenario: Add pineapple to the system

  • Given the pizza topping doesn’t exist
  • And I add a new topping to the system
  • The list of toppings should include the new topping

This is a different way to express the goals from earlier, in a measurable and precise manner. Some user stories might end up with multiple scenarios. Don’t feel constrained to create one scenario per user story, but each one should have at least a single scenario.

From here, you can create an acceptance test case. An acceptance test case differs from a unit test case in that it tests the behavior and workflow of the use case rather than the exact behavior of specific functions within the code. This is not to say that there’s no place for unit tests, but as you can see from the workflow listed in figure 8.14, the acceptance tests we’re creating live outside the realm of the unit tests. They’re an overarching check that the set of functionality that was created—which should also meet the goals of the unit tests—combines to meet the needs of the user story being considered here.

The acceptance test will look similar to the following scenario:

Acceptance test case: Add pineapple to the system

  • Get a list of pizza toppings in the system
  • Verify that the new topping doesn’t exist
  • Add a new topping to the system
  • Get the list of pizza toppings
  • Verify that the new topping has been added
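
Expressed as code, that acceptance test might look something like the following pytest-style sketch. The base URL and the shape of the toppings response are assumptions about your implementation, so treat this as a template rather than a finished test.

import requests

# Hypothetical base URL for the API under test; point this at your own server.
API_BASE = "http://localhost:5000/api/v1.0"

def topping_names():
    """Helper: return the names of all toppings currently in the system."""
    response = requests.get(API_BASE + "/toppings")
    response.raise_for_status()
    # Assumes each topping object carries a "name" field; adjust to your schema.
    return [topping["name"] for topping in response.json()["toppings"]]

def test_add_pineapple_to_the_system():
    # Given the pizza topping doesn't exist
    assert "pineapple" not in topping_names()

    # And I add a new topping to the system
    response = requests.post(API_BASE + "/toppings", json={"name": "pineapple"})
    assert response.status_code in (200, 201)

    # The list of toppings should include the new topping
    assert "pineapple" in topping_names()

Note that against the mock server from section 8.3.1 this test can’t fully pass, because the mock’s responses never change; it’s meant to run against a live development server.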

I realize this seems simplistic, but remember that tests aren’t supposed to be complicated or difficult; they need to verify that the use case you’ve selected is easy, straightforward, and functional when the user tries to perform it. You may also want to verify the behavior when the topping already exists in the system, so you’d create a similar acceptance test case for the situation where a specific topping (in this case pineapple) is already in your toppings list:

Acceptance test case: Add pineapple to the system when it already exists

  • Get a list of pizza toppings in the system
  • Verify that the new topping already exists
  • Add the new topping to the system
  • Verify that an error is returned with a developer-friendly message

The goal of the acceptance tests is to follow the workflow that developers will use to interact with your system, to make it simple for them to be able to do the things you expect and want them to do. When you have these acceptance tests, you can use them as the basis of tutorials you can include in your documentation to help guide developers through the workflow as you expect it to happen. This helps reduce frustration in your customer developers, but even more, it helps you to avoid the case where your developers try to find other ways to create the same result. It’s best to do everything you can to keep your customer developers on the same page as you are when they’re implementing the use cases you’ve identified, and the easiest way to do that is to make sure that you’ve communicated clearly how you expect specific actions to be implemented.

8.4. Planning development

Once you’ve decided on your development model, you can move on to planning how your development will proceed. For this book, I’ll encourage you to use a behavior-driven development model, enhanced to add design-driven development. For API development, where third-party and external users will be interacting with your system, it’s critical that you do everything you can to make sure that you’ve covered the use cases you want to support, and that the interface is as intuitive and complete as possible. This being the case, you likely want to break down the user stories into what you need for a minimum viable product and work from there.

For the system I’m describing here, it may be that you want to support a few use cases to start. Perhaps you only want to have full functionality for the toppings, similar to the beginning API described in chapter 2. This could be your initial release, and that would be fine. Different endpoints or sets of endpoints can be released at different times, and as long as you have an overarching design for your API, it’s reasonable to release different parts of the system.

Here’s the list of use cases described in chapter 7 while I was building all the schema models:

  • Creating new toppings
  • Getting a list of all toppings, including matching a particular string
  • Deleting and updating specific toppings
  • Creating a named pizza—only one per user
  • Listing pizzas
  • Adding toppings to a pizza
  • Viewing the pizza
  • Deleting the pizza for the specific user

For development planning, perhaps the toppings functionality is sufficient for the first release. You may not want to release it externally, but in this case perhaps you want to front-load the list of toppings so that when the pizza functionality becomes available there will be lots of toppings to choose from. Similarly, you want to make sure that the functionality works well for your internal and external customers. If you create a subset of your minimum viable product as a subset of endpoints, you can always allow some customers to interact with it and make sure that the interaction methodology you’ve selected is easy for them to use and that the API works well from their point of view.

Targeting a subset of endpoints also makes it possible for you to split the team into two subteams, one working on each set of endpoints. Knowing that you have a schema model gives you confidence that the two teams won’t diverge while creating the API. This may not seem super critical for an API as simple as this one, but imagine a much more complicated API and you’ll start to see the advantages. For this API, you may instead want to develop in serial—toppings, then pizzas—and release them together as a single customer-facing release.

Think of Twitter, for example. Twitter has APIs for user information, for a user’s feed and lists, for searches, and for live streams. It adds new functionality regularly based on what users are looking for, making it possible for its own products and third-party clients to keep up with customer demand. It would be quite difficult for Twitter to keep up with the various use cases in a waterfall manner, with a monolith; it needs to be agile and adapt to social media requirements, so it breaks releases down into subsets and creates and integrates new functionality as soon as it’s ready. The only time a large, overarching release has been needed is when the underlying system for the APIs changed—and this will be true for nearly any API vendor. If it turns out you need to streamline the overall system, you may have to make a large release, and it may frustrate your developers. Keep up that constant communication and work with them to make it as easy as possible.

8.5. Development sprints

Returning to the scrum portion of the agile project management philosophy, development is done in sprints. Each sprint can be whatever length makes sense for your team; common sprint lengths are one to two weeks. These lengths are ideal for many development goals because a week or two is long enough to address several user stories and then come back together at the end to make sure that the overall development is on track.

Planning

The first task during an agile sprint is the planning session (see figure 8.15).

Figure 8.15. A scrum planning board generally has the same columns: a Story column for backlogged stories and a To Do column for tasks waiting to be picked up. As tasks get assigned or selected by team members, they’re put into the In Process state. Once they’re ready, they’re Verified by a different team member, and finally they’re moved to Done.

During this time, most teams work with a board like the one in figure 8.15—the scrum task planning board. The stories are selected from the left-hand (backlog) column and are placed into the swim lanes (the horizontal rows across the board). Each swim lane may be a particular type of task (development, documentation, or QA), and each task begins the sprint in the To Do section of the board.

Determining how many of the tasks will make it into the sprint is done by an estimation system. With agile development, I’ve seen many different ways to do estimation, but most commonly I’ve seen teams choose to work with increments of half a day. It might take three days to write the listing function for the toppings, whereas implementing the delete functionality may only take a half day or so. One of the things I admire about most agile teams is that the process of estimation is much more reasonable than what happens with waterfall planning, because things are viewed in a manageable timeframe and it’s easier to see where expectations are unreasonable. Engineers are expected to be doing other things throughout the week, and the amount of time they have for active development is drawn from what’s left (for instance, an engineer might have meetings to attend, support tasks to handle, or a large volume of email to respond to). The goal of the estimates is not to “race” or to force people to extend themselves to achieve heroic ends; the goal is to honestly determine what can realistically happen in the time available so that the team can meet deadlines appropriately and be aware when a delivery has been overpromised.

One other strategy that can be quite helpful when estimating is to aim to make sure that no task is more than a day long. If a task is longer than one day, it can likely be broken down into smaller subtasks, which may allow for more parallelization, or at least quicker movement into the verification stage.

Standups

As the development progresses through the sprint, the team meets, ideally daily, in a short meeting called a stand-up. You may have heard about stand-ups in the past, and in fact they’re one of the things many people truly hate about the agile process, so I’m going to go back to the original intent of these meetings. First of all, the reason it’s called a stand-up is that everyone, whether the team is local or remote, should be physically standing up during the meeting. This helps retain the sense of a quick session to touch base on the project. The goal of the stand-up is to verify that everyone is on track with his or her tasks and to make sure there are no blockers. It’s not an overall status meeting, and it should take no longer than 15 minutes, even for a large team. Any blockers that are identified should be taken to smaller meetings outside of the stand-up, and the entire process should be less interruptive to the developer’s day than a conversation around a water cooler. One way to understand the general communication is to consider 3 P’s: progress, plans, and problems. What did I do yesterday? What am I planning to do today? What might go wrong?

Kanban “Swim Lanes”

During the development of the product, each task, which has a clearly defined definition of Done, starts in the To Do column before it’s assigned to an engineer or pair of engineers, at which point it is In Process. Once the developer(s) believe that the functionality is complete, the task moves to To Verify, at which point, ideally, another engineer or engineers verifies that the functionality works as expected and the task moves into the Done column. Note that depending on the requirements your team has for completion, this may indicate that the code is done, the tests are done, and/or the documentation is done. These requirements need to be clear and measurable, so that the handoff is clean and developers can move to the next task without having to come back to polish up their work. This is accomplished by having clear “definitions of done” at each step as well as for the final product. In the case of the behavior-driven development process I’m discussing, making sure that the acceptance tests pass, to the best of the developer’s ability, is part of making sure things are ready for testing.

The testing phase is generally done by a separate engineer, who should take the unit tests and acceptance tests and verify that the code, documentation, and tests as written are correct when compared to the original goals of the story in question.

As engineers are freed up throughout the sprint, they take new items from the To Do column and start working on them until the sprint finishes. If one of their completed tasks fails verification, the task moves back to To Do and an engineer (likely the same one) is responsible for fixing the issues and moving it back to the testing phase. In the extremely unlikely event that some extra time rears its head, the developer can bring it up during the stand-up so the team can pull in another story from the backlog.

Retrospective

The retrospective is one of the things that many engineering teams try to skip. It can feel uncomfortable, like something that might turn into a blame-fest. But the goal of the retrospective in an agile team is not to blame people for failures; it’s to identify areas for improvement in the process. Are your estimates less accurate than you’d hoped? Why? Are you getting held up by dependencies on other teams? How can you manage these issues better in the future? An ideal agile sprint would finish exactly the right amount of work in the expected time, but that almost never happens. The retrospective is one of the most valuable exercises in the agile toolkit, so it’s important to move past the discomfort and see it for what it is: an opportunity for the team as a whole to see where the planning and expectations matched—or didn’t—with the outcome of the sprint. Ideally your retrospective should take place before the planning session for the next sprint so you can see what needs adjusting and put it into practice right away, before it fades from memory.

This is also the ideal time to discuss items that didn’t meet the customer’s expectations when they were released, and determine how to better express those expectations going forward. Don’t fear the retrospective; the goal is to reduce the anxiety and uncertainty of the team going forward, so that work doesn’t need to be redone and engineers are given the necessary amount of time to get their work done. Additionally, the retrospective is an opportunity for the team to congratulate each other for the work that was done well.

8.6. Summary

This chapter covered various elements of software development methodologies, with an eye toward making sure that your API is successful as soon as you’re ready to deploy it. The chapter explored the following topics:

  • Different project-management strategies include waterfall, agile, design-driven development, and behavior-driven development. Waterfall is the “old way” of doing development and can be generally understood as a system where you define an entire project and then move through the sections of the product without significant refactoring, pivoting, or other adjustments to your workflow. Agile is a newer method in which the work is broken down into smaller sprints, and the progress toward the higher-level project is reviewed and discussed regularly throughout the development phase. Both design-driven and behavior-driven development add to this agile process by adding use cases and strong tests to validate that the project is moving in the right direction to end up with the right product to meet the requirements.
  • Road-testing your API via a sprint review involves checking in with customers, stakeholders, product managers, and client developers to make sure that the API being developed will meet the needs described by the product’s functional specification and schema model.
  • Planning development sprints using use cases to determine which parts of your API should be created helps you design and deploy valuable sections of your API as quickly as possible.
  • With development sprints, when you focus on short iterations for your development execution, you can identify issues or make adjustments quickly, reducing the need for refactoring your code at a later time.