Black Box Software Testing

Fall 2004

Study Guide--Part 1

Copyright (c) Cem Kaner

Here is the first part of your 2004 study guide. All questions on my tests and exams come from the study guide.

I will add up to 10 questions to this guide over the rest of the term.

I invite you to submit candidate questions for the study guide. I will give students 1 (one) bonus point per excellent question, up to 5 points per student.

The midterm will be on October 19. The typical midterm includes questions that total between 90 and 110 points.

I will get the graded midterm back to you by October 26 (probably) or 28 (definitely). Your last day to drop the course in good standing will be October 29.

We will have "make-up" classes on September 25, October 9, and October 30 (all in this room, at 12:30 pm).

I will host an exam review study session at the Sun Shoppe Cafe on Saturday, October 16 from 10 am until whenever. If you arrive before 11, I'll pay for your breakfast or coffee (or whatever). In this session, you will split into groups and discuss the draft answers you've come up with. I'll be available for brief consultation if you urgently need it, but this is a time for you to compare notes with each other, not to get yet more blather from me. There will probably also be a review session on October 18 (6 pm to whenever) in the Olin lounge. I may or may not be able to join you at that meeting.

The course final exam is currently scheduled for 3:30 to 5:30 pm on Wednesday, December 15.

Notes on Studying & Answering Test Questions

Because you have plenty of time to work with these questions, I can expect well-organized, well-focused, thoughtful answers. For additional guidance, I suggest my paper on assessment in the testing course (http://www.testingeducation.org/articles/assessment_in_the_software_testing_course_wtst_2003_paper.pdf) or these shorter discussions on answering essay questions:

Here are some additional suggestions:


Definitions

  1. Attack
  2. Behavioral questions
  3. Boundary chart
  4. Best representative
  5. Bug regression
  6. Change detector
  7. Charter (of a testing session)
  8. Comparison function
  9. Complete testing
  10. Computational oracle
  11. Configuration-dependent failure
  12. Corner case
  13. Cost of quality
  14. Coverage
  15. Credibility of a test
  16. Delayed effect bug
  17. Design error
  18. Diagnostics-based stochastic testing
  19. Disfavored stakeholder
  20. Domain testing
  21. Dumb monkey
  22. Equivalence class
  23. Exploratory testing
  24. Failure mode and effects analysis
  25. Fault vs. failure vs. defect
  26. Heuristic
  27. Implicit specifications
  28. Inattentional blindness
  29. Instrumenting a program
  30. Inverse oracle
  31. Old-fix regression
  32. Opportunity cost of a test
  33. Oracle
  34. Ordinal variable
  35. Output domain
  36. Postcondition data
  37. Power of a test
  38. Precondition state
  39. Prevention costs
  40. Probes
  41. Procedural testing
  42. Project inertia
  43. Reference program
  44. Side effect of a measurement
  45. Smart monkey
  46. Smoke testing
  47. Test idea
  48. Testability
  49. Testing project plan
  50. Time box

Short Answer

S.1. What is the primary difference between black box and glass box testing? What kinds of bugs are you more likely to find with black box testing? With glass box testing?

S.2. Consider a program with two loops, controlled by index variables. The first variable increments (by 1 each iteration) from -3 to 20. The second variable increments (by 2 each iteration) from 10 to 20. The program can exit from either loop normally at any value of the loop index. (Ignore the possibility of invalid values of the loop index.)
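The two loops described in S.2 can be sketched as follows (a minimal Python sketch; the function name and the exit hooks are illustrative, not part of the original question):

```python
def run_loops(exit_first=None, exit_second=None):
    """Sketch of the two loops described in S.2.

    exit_first / exit_second are illustrative hooks standing in for the
    statement that the program may exit either loop normally at any
    value of its index.
    """
    # First loop: index increments by 1 each iteration, from -3 to 20.
    for i in range(-3, 21):
        if exit_first is not None and i == exit_first:
            break
    # Second loop: index increments by 2 each iteration, from 10 to 20.
    for j in range(10, 21, 2):
        if exit_second is not None and j == exit_second:
            break
    return i, j

# Boundary values of interest for a domain analysis:
#   first loop index:  -3 (minimum) and 20 (maximum)
#   second loop index: 10 (minimum) and 20 (maximum)
```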

S.3. In the Print Options dialog in OpenOffice Writer, you can mark items (Yes/No) for inclusion in a document:

(a) Would you do a domain analysis on these (Yes/No) variables? Why or why not?

(b) What benefit(s) (if any) would you gain from such an analysis?

S.4. Compare, contrast, and give some examples of internal failure costs and external failure costs. What is the most important difference between these two types of failure cost?

S.5. List (and briefly describe) four different missions for a test group. How would your testing strategy differ across the four missions?

S.6. What is the analogy between sales and bug reporting? What do you think of this analogy? Why?

S.7. What do we mean by "diverse half-measures"? Give some examples.

S.8. How could you design tests of implicit requirements? Give some examples that illustrate your reasoning or approach.

S.9. Give two examples of defects you are likely to discover and five examples of defects that you are unlikely to discover if you focus your testing on line-and-branch coverage.
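As a small illustration of the idea behind S.9 (the function and its bugs are invented for illustration, not an answer key), here is a function where a single test achieves full line-and-branch coverage yet still misses failures:

```python
def percent(part, whole):
    # The single test percent(1, 2) exercises every line and branch of
    # this function, yet percent(1, 0) still raises ZeroDivisionError
    # and percent(-1, 2) returns a negative "percentage" -- defects
    # that line-and-branch coverage alone cannot flag.
    return 100.0 * part / whole
```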

S.10. Give three different definitions of “software error.” Which do you prefer? Why?

S.11. How can test documentation support delegation of work to new testers? What would help experienced testers who are new to the project? What would help novice testers?

S.12. Describe the characteristics of a good scenario test.

S.13. List and describe five different dimensions (different “goodnesses”) of “goodness of tests”.

S.14. What kinds of bugs might you be likely to miss if you use a reference program as a comparison oracle?

S.15. How does extended random regression work? What kinds of bugs is it good for finding?

S.16. Why is it useful to have a collection of context-free questions? What are context-free questions? How would you use them?

S.17. What are the differences between risk-oriented and procedural regression testing?

S.18. Describe three types of oracles used in automated testing.

S.19. What is a configuration test matrix? Draw one and explain its elements.

S.20. Contrast the reporting and troubleshooting costs of unit-level regression testing and GUI-level regression testing.

S.21. Use Weinberg's definition of quality. Suppose that the software behaves in a way that you don't consider appropriate. Does it matter whether the behavior conflicts with the specification? Why? Why not?

S.22. What does it mean to do maintenance on test documentation? What types of things are needed and why?

S.23. Why would a company start its project by following IEEE Standard 829 but then abandon that level of test documentation halfway through testing?

S.24. How could a test suite support prevention of defects?

S.25. Why would you use scenario testing instead of domain testing? Why would you use domain testing instead of scenario testing?


Long Answer

L.1. Compare and contrast all-pairs combination testing and scenario testing. Why would you use one over the other?

L.2. Ostrand & Balcer described the category-partition method for designing tests. Their first three steps are:

    1. Analyze
    2. Partition, and
    3. Determine constraints

Apply their method to this function:

I, J, and K are unsigned integers. The program calculates K = I * J. For this question, consider only cases in which you enter integer values into I and J.
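A minimal sketch of the function under test, assuming 16-bit unsigned integers (the width, the wrap-on-overflow behavior, and the names are illustrative assumptions; the question does not specify them):

```python
UINT_MAX = 2**16 - 1  # assuming 16-bit unsigned integers (illustrative)

def multiply(i, j):
    """Sketch of K = I * J for unsigned integers.

    Rejects inputs outside the unsigned range; the product wraps
    modulo 2**16, as unsigned overflow commonly does.
    """
    for name, value in (("I", i), ("J", j)):
        if not (0 <= value <= UINT_MAX):
            raise ValueError(f"{name} out of unsigned range")
    return (i * j) & UINT_MAX  # overflow wraps around

# Boundary-flavored probes on K under these assumptions:
#   multiply(0, 0)         -> 0        (minimum K)
#   multiply(1, UINT_MAX)  -> 65535    (maximum K without overflow)
#   multiply(2, 2**15)     -> 0        (smallest product that overflows)
```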

Do an equivalence class analysis on the variable K from the point of view of the effects of I and J (jointly) on K. Identify the boundary tests that you would run (the values you would enter into I and J) in your tests of K.

Note: In the exam, I might use K = I / J or K = I + J or
K = IntegerPartOf (SquareRoot (I*J))

L.3. We are going to do some configuration testing on the Mozilla Firefox browser. We want to test it on

Note: In the exam, I might change the number of operating systems, printers, modem types, or display

L.4. Calc allows you to protect cells from modification, and to use a password to override the protection. Think about your testing in terms of both setting the password and entering the password later (after it has been set, in order to unprotect a cell).

L.5. Imagine testing a file name field. For example, in an Open File dialog, you can enter something into the file name field.

Do a domain testing analysis: List a risk, equivalence classes appropriate to that risk, and best representatives of the equivalence classes.

For each test case (use a best representative), briefly explain why this is a best representative. Keep doing this until you have listed 10 best-representative test cases.

L.6. In the Windows version of OpenOffice, you can create a spreadsheet in Calc, then insert it into Writer so that when you edit the spreadsheet file, the changes automatically appear in the spreadsheet object when you reopen the Writer document.

Describe four examples of each of the following types of attacks that you could make on this feature, and for each one, explain why your example is a good attack of that kind.

(Refer specifically to Whittaker, How to Break Software and use the types of attacks defined in that book. Don’t give me two examples of what is essentially the same attack. In the exam, I will not ask for all 16 examples, but I might ask for 4 examples of one type or two examples of two types, etc.)

L.7. Imagine that you were testing simple database queries with OpenOffice Calc. Describe four examples of each of the following types of attacks that you could make on this feature, and for each one, explain why your example is a good attack of that kind.

(Refer specifically to Whittaker, How to Break Software and use the types of attacks defined in that book. Don’t give me two examples of what is essentially the same attack. In the exam, I will not ask for all 16 examples, but I might ask for 4 examples of one type or two examples of two types, etc.)

L.8. You are testing the group of functions that let you create and format a spreadsheet.

Think in terms of persistent data (other than the data you enter into the cells of the spreadsheet). What persistent data is (or could be) associated with a spreadsheet? List three types. For each type, list 2 types of failures that could involve that data. For each type of failure, describe a good test for it and explain why that is a good test for that type of failure. (There are 6 failures, and 6 tests, in total).

L.9. You are testing the group of functions that let you create and format a spreadsheet.

Think in terms of compatibility with external software. What compatibility features or issues are (or could be) associated with spreadsheets? List three types. For each type, list 2 types of failures that could involve compatibility. For each type of failure, describe a good test for it and explain why that is a good test for that type of failure. (There are 6 failures, and 6 tests, in total).

L.10. You are testing the group of functions that let you create and format a spreadsheet.

Suppose that a critical requirement for this release is scalability of the product. What scalability issues might be present in a spreadsheet? List three. For each issue, list 2 types of failures that could involve scalability. For each type of failure, describe a good test for it and explain why that is a good test for that type of failure. (There are 6 failures, and 6 tests, in total).

L.11. Suppose that you find a reproducible failure that doesn’t look very serious.

L.12. The course notes describe a test technique as a recipe for performing the following tasks:

How does regression testing guide us in performing each of these tasks?

L.13. The course notes describe a test technique as a recipe for performing the following tasks:

How does scenario testing guide us in performing each of these tasks?

L.14. The course notes describe a test technique as a recipe for performing the following tasks:

How does specification-based testing guide us in performing each of these tasks?

L.15. The course notes describe a test technique as a recipe for performing the following tasks:

How does risk-based testing guide us in performing each of these tasks?

L.16. Imagine that you were testing how Calc protects cells from modification.

L.17. You're testing the goal-seeking function of Calc. What do people do with goal-seeking? Give three examples, one each of three substantially different uses of goal-seeking. (In total, there are 3 examples.) Now consider our list of ways to create good scenarios, and focus on "List possible users, analyze their interests and objectives." Describe two scenario tests based on these considerations. For one of them, explain why it is a good scenario test.

L.18. List and explain four claimed strengths of manual scripted tests and four claimed weaknesses.

L.19. You can import Microsoft Excel spreadsheets into OpenOffice Calc by opening an Excel-format file with Calc or by copy/pasting sections of the Excel spreadsheet to the Calc spreadsheet. Think about planning the testing of the importation. List 10 types of data or attributes of the data that you should test and for each, briefly describe how you will use an oracle to determine whether the program passed or failed your tests.

L.20. A client retains you as a consultant to help them use a new GUI-level test automation tool that they have bought. They have no programmers in the test group and don't want to hire any. They want to know from you what are the most effective ways that they can use the tool. Make and justify three recommendations. In your justification, list some of the questions you would have asked to develop those recommendations and the type of answers that would have led you to those recommendations.

L.21. Compare the evolutionary and waterfall lifecycle models. Consider the four factors that project managers have to trade off against each other, and any additional issues (1 to 3 of them) that you think are important.

L.22. Describe four characteristics of a good test strategy. Describe a specific testing strategy for OpenOffice, and explain (in terms of those four characteristics) why this is a good strategy.

L.23. In the slides, we give the advice, "Over time, the objectives of testing should change. Test sympathetically, then aggressively, then increase complexity, then test meticulously." Explain this advice. Why is it (usually) good advice? Give a few examples to apply it to the testing of OpenOffice's Calc program. Are there any circumstances under which this would be poor advice?

L.24. Your company decides to outsource test execution. Your senior engineers will write detailed test scripts and the outside test lab's staff will follow the instructions. How well do you expect this to work? Why?

L.25. Suppose that Boeing developed a type of fighter jet and a simulator to train pilots to fly it. Suppose that Electronic Arts is developing a simulator game that lets players "fly" this jet. Compare and contrast the test documentation requirements you would consider appropriate for developers of the two different simulators.


Copyright (c) Cem Kaner 2004

This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

These notes are partially based on research that was supported by NSF Grant EIA-0113539 ITR/SY+PE: "Improving the Education of Software Testers." Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.