Pursue Throughput

A final source of waste isn’t immediately obvious. The manufacturing industry calls it inventory. In software development, it’s unreleased software. Either way, it’s partially done work—work that has cost money but has yet to deliver any value.

Partially done work represents unrealized investment. It’s waste in the form of opportunity cost, where the investment hasn’t yet produced value but you can’t use the resources it cost for anything else.

Partially done work also hurts throughput, which is the amount of time it takes for a new idea to become useful software. Low throughput introduces more waste. The longer it takes to develop an idea, the greater the likelihood that some change of plans will invalidate some of the partially done work.

To minimize partially done work and wasted effort, maximize your throughput. Find the step in your process that has the most work waiting to be done. That’s your constraint: the one part of your process that determines your overall throughput. In my experience, the constraint in software projects is often the developers. The rate at which they implement stories governs the amount of work everyone else can do.
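The idea that one step governs overall throughput can be sketched in a few lines. This is my own illustrative example, not from the text: the stage names and per-stage capacities are hypothetical, but they show why the pipeline can never deliver faster than its slowest stage.

```python
# Illustrative sketch (hypothetical numbers): a three-stage pipeline
# where each stage can process a fixed number of stories per week.
# The pipeline as a whole can never deliver faster than its slowest
# stage -- the constraint.

# Hypothetical per-stage capacities, in stories per week.
capacities = {
    "analysis": 8,
    "development": 3,   # the constraint: work piles up in front of it
    "testing": 6,
}

# The constraint is the stage with the least capacity, and it sets
# the throughput for the entire process.
constraint = min(capacities, key=capacities.get)
throughput = capacities[constraint]

print(constraint)   # development
print(throughput)   # 3 (stories per week)
```

Speeding up analysis or testing changes nothing here; only raising the development stage's capacity raises the pipeline's throughput.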

To maximize throughput, the constraint needs to work at maximum productivity, whereas the other elements of your process don’t. To minimize partially finished work, nonconstraints should produce only enough work to keep the constraint busy, but not so much that there’s a big pile of outstanding work. Outstanding work means greater opportunity costs and more potential for lost productivity due to changes.
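A short simulation makes the trade-off concrete. This sketch is my own example, not from the text; the rates are hypothetical, but it shows that throttling upstream work to the constraint's rate costs nothing in throughput while eliminating the pile of partially done work.

```python
# Illustrative sketch (hypothetical rates): upstream stages feed work
# to a slower constraint. Running upstream at full speed produces the
# same completed output as running it at the constraint's rate -- the
# only difference is the inventory of partially done work.

def simulate(weeks, upstream_rate, constraint_rate):
    """Return (work completed, work piled up in front of the constraint)."""
    queue = 0   # partially done work waiting at the constraint
    done = 0    # work the constraint has finished
    for _ in range(weeks):
        queue += upstream_rate                   # upstream hands off work
        processed = min(queue, constraint_rate)  # constraint does what it can
        queue -= processed
        done += processed
    return done, queue

# Upstream at maximum efficiency: same output, large pile of inventory.
print(simulate(10, upstream_rate=8, constraint_rate=3))  # (30, 50)

# Upstream throttled to the constraint's rate: same output, no pile.
print(simulate(10, upstream_rate=3, constraint_rate=3))  # (30, 0)
```

In both runs the constraint finishes 30 units of work; the difference is the 50 units of unrealized investment sitting in the queue in the first run.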

Minimizing partially done work means that everyone but the constraint will be working at less than maximum efficiency. That’s OK. Efficiency is expendable in other activities. In fact, it’s important that nonconstraints have extra time available so they can respond to any needs that the constraint might have, thus keeping the constraint maximally productive.

Note

These ideas come from The Theory of Constraints. For more information, see [Goldratt 1997], an excellent and readable introduction. For discussion specific to software development, see [Anderson 2003].

In Practice

XP planning focuses on maximizing throughput and minimizing work in progress. This focus is central to the iteration structure. Every iteration takes an idea—a story—from concept to completion. Each story must be “done done” by the end of the iteration.

XP’s emphasis on programmer productivity—often at the cost of other team members’ productivity—is another example of this principle. Although having customers sit with the team full-time may not be the most efficient use of the customers’ time, it increases programmer productivity. If programmers are the constraint, as XP assumes, this increases the team’s overall throughput and productivity.

Beyond Practices

Our project faced a tight schedule, so we tried to speed things up by adding more people to the project. In the span of a month, we increased the team size from 7 programmers to 14 programmers, then to 18 programmers. Most of the new programmers were junior-level.

This is a mistake as old as software itself. Fred Brooks stated it as Brooks’ Law in 1975: “Adding manpower to a late software project makes it later” [Brooks] (p. 25).

In this particular project, management ignored our protestations about adding people, so we decided to give it our best effort. Rather than having everyone work at maximum efficiency, we focused on maximizing throughput.

We started by increasing the size of the initial development team—the Core Team—only slightly, adding just one person. The remaining six developers formed the SWAT Team. Their job was not to work on production software, but to remove roadblocks that hindered the core development team. Every few weeks, we swapped one or two people between the two teams to share knowledge.

This structure worked well for us. It was a legacy project, so there were a lot of hindrances blocking development. One of the first problems the SWAT Team handled was fixing the build script, which would often fail due to Windows registry or DLL locking issues. By fixing this, the SWAT Team enabled the Core Team to work more smoothly.

Later, we had to add four more inexperienced programmers. We had run out of space by this time. Lacking a better option, we put the new programmers in a vacant area two flights of stairs away. We continued to focus our efforts on maintaining throughput. Not wanting to spread our experienced developers thin, we kept them concentrated in the Core Team, and since the SWAT Team was working well, we decided to leave the new team out of the loop. We deliberately gave them noncritical assignments to keep them out of our hair.

Overall, this approach was a modest success. By focusing on throughput rather than individual efficiency, we were able to withstand the change. We more than doubled our team size, with mostly junior developers, without harming our productivity. Although our productivity didn’t go up, such a deluge of people would normally cause a team to come to a complete standstill. As it was, pursuing throughput allowed us to maintain our forward momentum. In better circumstances—fewer new developers, or developers with more experience—we could have actually increased our productivity.
