Footnotes

1 The Technical Test Analyst’s Tasks in Risk-Based Testing

1. For more information on risk identification and analysis techniques, see either Managing the Testing Process, 3rd Edition (which covers both formal and informal techniques, including Pragmatic Risk Analysis and Management) or Pragmatic Software Testing (which focuses on Pragmatic Risk Analysis and Management), both by Rex Black. If you want to use Failure Mode and Effect Analysis, we recommend reading D.H. Stamatis’s Failure Mode and Effect Analysis for a thorough discussion of the technique, followed by Managing the Testing Process, 3rd Edition for a discussion of how the technique applies to software testing.

2. You can find examples of how to carry out risk-based test reporting in Rex Black’s book Critical Testing Processes and in the companion volume to this book, Advanced Software Testing – Volume 2, which addresses test management. A case study of risk-based test reporting at Sony is described in “Advanced Risk Based Test Results Reporting: Putting Residual Quality Risk Measurement in Motion,” found at http://www.rbcs-us.com/images/documents/STQA-Magazine-1210.pdf.

3. You can find a detailed description of the use of risk-based testing on a project on the RBCS website in the article “A Case Study in Successful Risk-based Testing at CA,” found at http://www.rbcs-us.com/images/documents/A-Case-Study-in-Risk-Based-Testing.pdf.

2 Structure-Based Testing

1. The United States Federal Aviation Administration makes a distinction between branch coverage and decision coverage, with branch coverage deemed the weaker of the two. If you are interested in this distinction, see the FAA report Software Verification Tools Assessment Study (June 2007) at www.faa.gov.

2. This may not be strictly true if the compiler optimizes the code and/or short-circuits it. We will discuss those topics later in the chapter. For this discussion, let’s assume that it is true.
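
A minimal sketch of short-circuiting (our own snippet, not one of the chapter’s listings; the helper name is hypothetical):

    #include <stdio.h>

    /* Hypothetical helper, used only to show when evaluation happens. */
    static int is_positive(int x)
    {
        printf("is_positive was evaluated\n");
        return x > 0;
    }

    int main(void)
    {
        int a = 0;
        int b = 5;

        /* Because (a != 0) is false, C short-circuits the && and never
           calls is_positive(b), so its message never prints. */
        if (a != 0 && is_positive(b)) {
            printf("both conditions were true\n");
        } else {
            printf("short-circuited: the second condition was never exercised\n");
        }
        return 0;
    }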

3. Definition from The Software Test Engineer’s Handbook by Graham Bath and Judy McKay (Rocky Nook, 2014).

4. The ISTQB syllabus refers to standard DO-178B. However, that standard was replaced by DO-178C in January 2012. Because a few of the definitions differ slightly, we have moved to the newer standard.

5. DO-178C: Software Considerations in Airborne Systems and Equipment Certification

6. This technique—and our definition—is adapted from TMap Next – for Result-Driven Testing by Tim Koomen, Leo van der Aalst, Bart Broekman, and Michiel Vroon (UTN Publishers, 2006).

7. A Practical Tutorial on Modified Condition/Decision Coverage by Kelly J. Hayhurst, Dan S. Veerhusen, John J. Chilenski, and Leanna K. Rierson, http://shemesh.larc.nasa.gov/fm/papers/Hayhurst-2001-tm210876-MCDC.pdf

8. Note that this is the Delphi way of saying (!F).
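
A trivial side-by-side (our own snippet): C writes logical negation with the ! operator, while Delphi spells it out.

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        bool F = false;

        /* C: !F. The equivalent Delphi test reads "if not F then ...". */
        if (!F) {
            printf("F is false, so the branch is taken.\n");
        }
        return 0;
    }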

9. This code is designed to run in UNIX. The if() statement in line 13 will always evaluate to TRUE when the application is running in the foreground. To test the FALSE branch, the application would need to be running in the background (with input coming from a device or file other than a keyboard). Because this is already way down in the weeds, we will leave it there.
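
A rough reconstruction of the kind of check involved (ours, not the book’s listing): on UNIX, a program can ask whether its standard input is attached to a terminal.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* TRUE when input comes from a keyboard (terminal attached);
           FALSE when input is redirected from a file or device. */
        if (isatty(STDIN_FILENO)) {
            printf("stdin is a terminal: the TRUE branch runs.\n");
        } else {
            printf("stdin is redirected: the FALSE branch runs.\n");
        }
        return 0;
    }

Running the program as a.out < somefile would exercise the FALSE branch.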

10. See http://portal.acm.org/citation.cfm?id=800253.807712 or http://en.wikipedia.org/wiki/Cyclomatic_complexity (and follow the second reference at the bottom of that article).

11. For example, NIST Special Publication 500-235, Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric, found at http://www.itl.nist.gov/lab/specpubs/sp500.htm

12. A connected component, in this context, would be called a subroutine.
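For reference, McCabe computes the cyclomatic complexity of a graph with E edges, N nodes, and P connected components as V(G) = E − N + 2P; treating each subroutine as its own connected component is what brings the P term into play.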

13. The Golden Age of APIs, http://www.soapui.org/The-World-Of-API-Testing/the-golden-age-of-apis.html

14. Ibid.

15. Ibid.

16. API Integrity: An API Economy Must-Have, http://alm.parasoft.com/api-testing-gartner

17. Testing in the API Economy: Top 5 Myths, by Wayne Ariola and Cynthia Dunlop, 2014. A white paper available from Parasoft.

18. ISTQB no longer requires this particular white-box technique, and we have dropped it from this book, partly because we could find no evidence that it is actually performed anywhere.

3 Analytical Techniques

1. “A Complexity Measure,” Thomas J. McCabe, IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, December 1976, http://www.literateprogramming.com/mccabe.pdf

2. http://www.musanim.com/miller1956/ (This link was selected because it offers the paper in several different languages.)

3. Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric, NIST Special Publication 500-235, Arthur H. Watson and Thomas J. McCabe, http://www.mccabe.com/pdf/mccabe-nist235r.pdf

4. The earlier work was a paper by S. Rapps and E. J. Weyuker titled “Data flow analysis techniques for test data selection,” presented at the Sixth International Conference on Software Engineering, Tokyo, Japan, September 13–16, 1982.

5. To be a bit more precise, languages that create a copy of the value of the function’s parameter when a function is called are said to use call-by-value parameters. If instead the function receives a pointer to the memory location where the parameter resides, the language is said to use call-by-reference parameters. Languages using call-by-value parameters can implement call-by-reference semantics by passing in a pointer to the parameter explicitly. This can become a source of confusion because, if the parameter is an array, a pointer is passed even in call-by-value languages. This aids efficiency because the entire contents of the array need not be copied to the stack, but the programmer must remember that modifications to the values in the array will persist even after the function returns.
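
A short C sketch of the distinction (function and variable names are ours, for illustration only):

    #include <stdio.h>

    /* Call by value: the function receives a copy; the caller's x is untouched. */
    static void by_value(int n)
    {
        n = 99;
    }

    /* Call by reference, simulated with an explicit pointer: the caller's
       variable is modified through it. */
    static void by_pointer(int *n)
    {
        *n = 99;
    }

    /* An array parameter is passed as a pointer even in a call-by-value
       language like C, so element changes persist after the call. */
    static void zero_array(int a[], int len)
    {
        for (int i = 0; i < len; i++) {
            a[i] = 0;
        }
    }

    int main(void)
    {
        int x = 1;
        int arr[3] = {1, 2, 3};

        by_value(x);
        printf("after by_value:   x = %d\n", x);           /* still 1 */

        by_pointer(&x);
        printf("after by_pointer: x = %d\n", x);           /* now 99 */

        zero_array(arr, 3);
        printf("after zero_array: arr[0] = %d\n", arr[0]); /* now 0 */
        return 0;
    }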

6. The Compiler Design Handbook: Optimizations and Machine Code Generation, Second Edition, by Y.N. Srikant and Priti Shankar.

7. Software Testing Techniques, Second Edition, by Boris Beizer.

8. This example comes from the Wikipedia article on Splint. This reference was inadvertently left off in the first edition.

9. The URL that was in the first edition of this book is now dead. Apparently the code conventions document was last updated in 1999 and had become somewhat out of date. However, there is a great deal of discussion about coding conventions on the Web, so we recommend searching on that phrase.

10. http://docs.oracle.com/javase/specs/

11. ISTQB Foundation Level syllabus, section 2.2.2.

12. The command is hexcvt < file1 > file2 &. That assumes that the compiled executable is called hexcvt, that file1 contains a string of characters, and that file2 will be used to catch the output. The ampersand makes it run in the background. In case you wanted to know…

4 Quality Characteristics for Technical Testing

1. Usually followed with an expansive arm movement, a knowing wink, and a quick run toward the door.

2. http://microsoft-news.tmcnet.com/microsoft/articles/81726-microsoft-study-reveals-that-regular-password-changes-useless.htm

3. http://security.freebsd.org/advisories/FreeBSD-SA-00:53.catopen.asc

4. Just one example: http://defensetech.org/2008/08/13/cyber-war-2-0-russia-v-georgia/

5. A quote we have heard enough times to scare us!

6. http://www.gfi.com/blog/insidious-threats-logical-bomb/

7. https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet

8. http://www.sans.org/reading_room/whitepapers/windows/microsoft-windows-security-patches_273

9. Of course, this won’t help your organization if you are targeted by someone who invents the hack.

10. cve.mitre.org

11. http://nvd.nist.gov/cwe.cfm

12. capec.mitre.org

13. http://capec.mitre.org/about/index.html

14. OWASP.org

15. Note that penetration tests may bring their own baggage. As suggested by commenter Bernard Homès, “Such tests (1) are difficult to implement, (2) need to be authorized by upper management as they can be interpreted as hacking attempts, (3) need reformed hackers to implement (white hat vs black hats).”

16. There’s a further complication that must be considered when moving beyond the electronic circuitry alone to the physical device as a whole. Physical objects exhibit two different types of failures. One type is due to wear, which results from mechanical and chemical processes and can be exacerbated by misuse (e.g., failure to properly lubricate a bearing). Wear happens normally over time, except where accelerated by misuse. The other type results from a defect in the components, such as the problems revealed recently with the bolts in the new San Francisco–Oakland Bay Bridge (http://www.sfexaminer.com/sanfrancisco/bay-bridge-bolt-problem-arose-from-quality-control-lapses-officials-say/Content?oid=2336143). Defects tend to cause premature (and often abrupt) failure, while wear is a gradual and usually detectable process.

17. Remember that ISTQB treats the terms bug, defect, and fault as synonyms. ISO 9126 appears to use the term fault to mean bug, defect, and sometimes failure. Wherever possible, we have tried to reword the standard to meet ISTQB definitions.

18. ISO 9126 uses the term estimated faults a number of times. The standard notes that these will tend to come from either past history of the system or a reference model.

19. DoD Software Fault Patterns, Dr. Nikolai Mansourov, https://buildsecurityin.us-cert.gov/sites/default/files/Mansourov-SoftwareFaultPatterns.pdf

20. This is also known as fault seeding, which we cover in Chapter 6.

21. If you are interested in more information on this topic, we suggest http://www.washingtonpost.com/national/health-science/there-really-are-50-eskimo-words-for-snow/2013/01/14/e0e3f4e0-59a0-11e2-beee-6e38f5215402_story.html.

22. The Software Test Engineer’s Handbook

23. Performance Testing Guidance for Web Applications, http://msdn.microsoft.com/en-us/library/bb924375.aspx

24. Satires of Juvenal. Probably not talking about software, but still worth considering...

25. Arno A. Penzias and Robert W. Wilson, working for Bell Labs in the early ’60s, accidentally discovered the first observational proof of the “Big Bang” when nothing they could do would eliminate the static they kept picking up with their microwave receiver. However, they spent months not believing what their tests were telling them. See http://www.amnh.org/education/resources/rfl/web/essaybooks/cosmic/cs_radiation.html.

26. A memory shortage caused the Spirit Mars rover to become unresponsive on January 2, 2004. A brief summary can be found at http://www.computerworld.com/s/article/89829/Out_of_memory_problem_caused_Mars_rover_s_glitch.

27. These come from a book by Ian Molyneaux, The Art of Application Performance Testing.

28. This appears to be an awkward translation. It would not make much sense to require a system to average N error messages over a certain period. Perhaps a better term would be maximum allowed number of error messages; we will use that phrase in future metrics.

29. An amusing take on this, the “marshmallow test,” can be found at http://www.newyorker.com/magazine/2009/05/18/dont-2.

30. Pairwise testing and classification trees are discussed in Advanced Software Testing, Volume 1.

31. The best definition we can find for this term is a multidimensional data set. ISO 9126-2 does not give a formal definition.

32. Interested readers can find this technique in Advanced Software Testing, Volume 1, or at the website pairwise.org.

33. A discussion of the management implications of tools as described in the ISTQB Advanced Test Manager syllabus can be found in Rex’s book Advanced Software Testing: Volume 2, Second Edition.

5 Reviews

1. Examples of such checklists, and ideas on how to use them effectively, can be found in Karl Wiegers’s books Peer Reviews in Software: A Practical Guide and Software Requirements, Third Edition.

2. One reviewer, Bernard Homés, said that he did not consider excessive effort a true testability defect. Whether excessive test effort is a testability defect or just a concern, it should be noted in the review.

3. See Capers Jones’s book Software Assessments, Benchmarks, and Best Practices.

4. The syllabus uses the terms pattern and anti-pattern, but the general concept is the same as best practice and worst practice, respectively. In the influential book Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma and his coauthors popularized the term design patterns, referring to general, widely used solutions that can be applied to common problems and that have proven their worth in solving those problems. An anti-pattern refers to the opposite situation, where a solution has been widely used but has proven to be dangerous or simply suboptimal compared to some alternative. These contrasts correspond to the common terms best practice and worst practice; however, best practice and worst practice have the advantage of being widely used outside of software engineering and thus are more easily understood by business stakeholders as well as technical stakeholders.

5. This definition derives from IEEE Standard 1471:2000, which has been superseded by IEEE Standard 42010:2011, and from Bass et al.’s book Software Architecture in Practice, 2nd Edition.

6. For the source of this definition, see: http://www.sei.cmu.edu/architecture/.

7. For the full list, including the specific elements to look for and metrics to check, see http://www.codeproject.com/KB/architecture/SWArchitectureReview.aspx. There is also information on architecture reviews available at http://portal.acm.org/citation.cfm?id=308798.

8. Well, the list is actually sourced from Karl Wiegers, found at his website http://www.processimpact.com/pr_goodies.shtml. So, we suppose that Karl actually provided it. Thanks, Karl!

9. This is drawn from Brian Marick’s The Craft of Software Testing.

10. The complete list can be found at http://wiki.openlaszlo.org/Code_Review_Checklist.

11. Hungarian notation is a term that originated at Microsoft thanks to its chief architect, Dr. Charles Simonyi. In Hungarian notation, each variable name carries a prefix that indicates its type. After its adoption within Microsoft, the convention spread to other companies. The name “Hungarian notation” stuck both because the prefixes make variables look foreign and because Simonyi was born in Hungary.
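
A small sketch of the convention (the prefixes shown are common ones; the variable names are our own):

    #include <stdio.h>

    int main(void)
    {
        /* Hungarian notation: each name carries a type-indicating prefix. */
        int   iCount   = 3;         /* i  -> integer                 */
        char  szName[] = "Ada";     /* sz -> zero-terminated string  */
        float fPrice   = 9.99f;     /* f  -> float                   */
        int  *piCount  = &iCount;   /* p  -> pointer (here, to int)  */

        printf("%s bought %d items at %.2f each (via pointer: %d)\n",
               szName, iCount, fPrice, *piCount);
        return 0;
    }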

6 Test Tools and Automation

1. See “Quality Goes Bananas” on the RBCS articles page, www.rbcs-us.com.

2. A meme is an idea, pattern of behavior, or value that is passed from one person to another. A meme complex is a group of related memes often present in the same individual’s mind.

3. There’s a discussion about the use of mutation testing at Google in Chapter 18 of the book Beautiful Testing, edited by Tim Riley and Adam Goucher.

4. This example is a condensed version of the article “Quality Goes Bananas,” which Rex cowrote with Daniel Derr and Michael Tyszkiewicz of Arrowhead Electronic Healthcare. Rex thanks Daniel and Mike for their assistance with that article and for the educational experience of building the monkey test tool. The full article can be found at www.rbcs-us.com.

5. You can read more about this system in the article “Mission Made Possible,” written by Greg Kubaczkowski and Rex Black, posted on the RBCS website at www.rbcs-us.com.
