Quality Assurance

You might think talking about QA this early in the book is putting the cart before the horse. Well, you're wrong. Quality assurance must be a thread that runs through your application from requirements capture, definition, and design, to implementation, and on to deployment. The standard, at least for Web application development, tends to be “design, develop, test, deploy.” You'll notice something peculiar about this methodology: if problems are found in testing (and they always are), there's no step for fixing them before the deployment phase. No wonder so much bad code is out there in the world! As humans venture into the universe, we're taking this poorly QAed code with us! A prime example of missing QA was the much-publicized loss of NASA's Mars Climate Orbiter spacecraft in 1999. The spacecraft was lost because one piece of its navigation software produced thrust data in English units while another piece expected metric units. It's inconceivable that these code modules were never tested together, but there you go: hundreds of millions of dollars down the drain.

So what can you do to make sure you never lose a few hundred million smackers? Avoiding that fate is actually not that difficult. The tricky part is thinking about QA and testing at the beginning of the project and developing a comprehensive test plan along with your requirements documents and technical specifications. The other tricky part is making sure that you've budgeted enough time and money for testing.

Unit Testing

Unit testing is a method that emphasizes testing software at the “unit” level, starting with the simplest units and moving up. Rigorous unit testing should be performed during all stages of development. For example, if you're developing in the Java 2 Enterprise Edition (J2EE) environment (see Chapter 9), JUnit is a great framework for unit testing your J2EE building blocks. Check out this link for an informative article that explains JUnit: http://developer.java.sun.com/developer/technicalArticles/J2EE/testinfect/.
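To give you a feel for the mechanics, here's a minimal sketch of a JUnit test case (in the JUnit 3.x style). The MailMessage class is a hypothetical stand-in for whatever building block you're actually testing:

  import junit.framework.TestCase;

  // A test case groups related tests; JUnit calls setUp() before each
  // test method, so every test starts from a fresh, known fixture.
  public class MailMessageTest extends TestCase {

      private MailMessage message;

      protected void setUp() {
          message = new MailMessage("Status report", "All systems nominal.");
      }

      public void testSubjectIsStored() {
          assertEquals("Status report", message.getSubject());
      }

      public void testBodyIsStored() {
          assertEquals("All systems nominal.", message.getBody());
      }
  }

  // Hypothetical unit under test; in real life this would be one of
  // your application's own building blocks.
  class MailMessage {
      private final String subject;
      private final String body;

      MailMessage(String subject, String body) {
          this.subject = subject;
          this.body = body;
      }

      String getSubject() { return subject; }
      String getBody() { return body; }
  }

Because each test method rebuilds its fixture in setUp(), the tests can run in any order and still pass, which is exactly the kind of repeatability unit testing is after.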

Integration Testing

Unit testing is great for certifying that software components are working correctly, but eventually you'll need to test that whole subsystems work when integrated and that the entire system works as a whole. The way to approach this is to put together a comprehensive testing plan. A great place to start is—you guessed it—your requirements document. Starting with your initial requirements, you can put together a rough testing plan and sample data. For example, if your original use case is “a user must be able to read an e-mail message,” that might translate into several real use cases:

  1. User is notified that a new e-mail message is waiting for him.

  2. User brings up a list of new e-mail messages waiting.

  3. User selects an e-mail message to view and reads the contents of the message.

  4. User indicates that he is finished and returns to the list of new messages.

These real use cases might appear in your technical specification document (see the section “The Technical Specification Document” in this chapter). Each of these use cases can be translated into a test scenario. For example, take the first real use case from the previous list; the test scenario might read like this (depending on your user interface choices):

  1. Send a new message to user X.

  2. User X should see a notification dialog pop up on her screen with an “OK” button on it.

  3. User X should press the “OK” button, and the notification should disappear.
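When the user interface is a Web page rather than a desktop dialog, a scenario this granular can even be automated. Here's a sketch using JUnit together with HttpUnit (http://httpunit.sourceforge.net/); the URL, the page text, and the form name are hypothetical stand-ins for your own application:

  import junit.framework.TestCase;
  import com.meterware.httpunit.WebConversation;
  import com.meterware.httpunit.WebForm;
  import com.meterware.httpunit.WebResponse;

  // Walks through the three-step scenario above against a Web-based
  // mail client.
  public class NewMailNotificationTest extends TestCase {

      public void testNotificationAppearsAndDismisses() throws Exception {
          // Step 1: send a new message to user X (through your mail
          // API or a test fixture; omitted here).

          // Step 2: user X's next page view should show the
          // notification with an "OK" button on it.
          WebConversation wc = new WebConversation();
          WebResponse page =
              wc.getResponse("http://localhost:8080/mail/home?user=userX");
          assertTrue("notification not shown",
                     page.getText().indexOf("You have new mail") != -1);

          // Step 3: pressing "OK" should make the notification
          // disappear.
          WebForm dialog = page.getFormWithName("notification");
          WebResponse after = dialog.submit();
          assertTrue("notification still showing",
                     after.getText().indexOf("You have new mail") == -1);
      }
  }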

That's the level of granularity you're going for in a test scenario: extremely fine, action by action, whether you run it by hand or automate it. This approach minimizes confusion on the part of your testers and ensures that different testers test the system consistently. The following are some of the items that might be included in a test plan, depending on the particular project:

  • Restatement of the project's purpose.

  • Test cases that are built out of granular real use cases (as shown earlier).

  • Naming conventions and definitions used in the project, including explanations of all three-letter acronyms (TLAs).

  • Overall project organization and people/roles involved (including who is responsible for the testing).

  • Training needs for testers. What do your testers need to know before they can adequately test the software?

  • Testing priorities and focus. Make sure your testers don't get sidetracked testing a part of your application that isn't ready or isn't relevant.

  • Test outline: a full description of the test approach by test type, feature, functionality, process, system, module, and so on.

  • Discussion of any specialized software or hardware tools (such as special testing tools) that will be used, and in what capacity.

  • Test environment: hardware, operating systems, browser versions, other required software, and so on.

  • Problem tracking and resolution tools and processes. How are you going to track the problems you find? In many projects I've worked on, especially those involving a new development environment, we start by building a small bug-tracking application using the same tool we'll be using to build the application itself.

  • Test metrics to be used. How are you going to grade bugs, and what do those gradations mean?
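As one example of such gradations, a homegrown bug tracker (like the one mentioned earlier) might grade every bug on a simple four-level severity scale. The levels and meanings below are assumptions you'd tailor to your own project:

  // Hypothetical severity scale for a homegrown bug tracker, written
  // as pre-enum integer constants.
  public class Severity {
      public static final int CRITICAL = 1; // crash or data loss; no workaround
      public static final int MAJOR    = 2; // feature broken; workaround exists
      public static final int MINOR    = 3; // cosmetic or edge-case problem
      public static final int TRIVIAL  = 4; // typo or nice-to-have

      private Severity() { } // constants only; never instantiated
  }

Whatever scale you settle on, define it in the test plan itself so that every tester grades bugs the same way.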

An important hurdle to get over is the mindset that bugs are bad. If bugs are bad, people will avoid finding them and won't record them in your bug-tracking system. You'll have no bugs in your bug-tracking system, but that doesn't mean there aren't any bugs. (This is also a good argument for why developers shouldn't test their own code.) You should also think about what your testers are not testing, which is especially important if you have part-time testers: people who hold other jobs but have agreed to or have been assigned to do testing as part of your project. Set a quota for the number of bugs you want found, and give a reward, even a silly reward like a conference knickknack, to the tester who finds the most (valid) bugs. Good testing needs to be recognized and rewarded.

The Software QA/Test Resource Center (http://www.softwareqatest.com/) is a valuable resource for QA and testing information.

Bugzilla (available at http://www.mozilla.org/bugs/) is an excellent open-source bug-tracking system. It was produced by the folks at the Mozilla open-source browser group to track bugs in their own ongoing efforts, but they built it so well that it can be used as a general-purpose tool for bug tracking and resolution.

Mercury Interactive (http://www.mercuryinteractive.com/) provides an evaluation version of their Astra product, a fairly comprehensive Web application testing tool. With commercial tools like Astra, though, money becomes a factor in QA; remember to budget for these types of tools in your initial plans. Not having good testing tools will definitely cost you more in the long run.
