Involving Testing Students in Software Projects, Part II

            WTST 2004

            Patrick J. Schroeder, Sue Kearns

            University of Wisconsin - Milwaukee

            Department of EE/CS

            Milwaukee, WI 53211

            {pats | skearns}


1      Introduction

2      Unit Testing Exercise

3      System Testing Exercise

3.1.       Description

3.2.       History

3.3.       Teaching Objectives

3.4.       Software Application

3.5.       Project Execution

3.6.       Results

3.7.       Evaluation

3.8.       Probable Improvements

3.9.       The Test Design Problem

4      Conclusion

5      References

Appendix A: Course Information

Required Textbook

Course Outline

Course Projects and Grading

Appendix B: Unit Testing Exercise

Appendix C: System Testing Project

DMAS System Test Plan Template

Appendix D: Software Delivery Schedule

Appendix E: Test Driven Design Survey

Appendix F: System Testing Project Survey


1       Introduction

Software testing is arguably the least well understood phase of the software development lifecycle [1].  This makes the important task of teaching software testing a challenging one.  Many open questions remain about exactly what to teach and how to teach it.

In our work, we teach testing using the learn by doing approach.  This means we directly involve students in writing and testing software at the unit level, and in system-level testing of ongoing software development projects.  To some extent, we operate this way because it is our bias: we learned to test software by doing it, and so we teach it that way.  However, learn by doing is also often touted by researchers as an effective teaching technique.  The assumption is that "students learn better in situations which more closely approximate the situations in which they will use their knowledge" [2].  These situations allow students to learn from their mistakes, as well as develop their own individualized constructions of the knowledge obtained.  Clearly, the learn by doing approach is best suited to students who prefer an active learning style and may not be effective for others.  Our goal in presenting this material is not to sell the learn by doing approach, but rather to gain insights into teaching software testing that can be applied regardless of the teaching technique used.

Our software testing course, CS790 Software Testing and Quality Assurance, was taught in the Fall of 2003 at the University of Wisconsin - Milwaukee (UWM).  The course is a graduate-level course and has a prerequisite of CS536 Software Engineering.  UWM now also offers CS657 Introduction to Software Testing.  At some point, the introductory course will become the prerequisite for the graduate-level testing course, but it is not yet.

Our course is intended to be a "Software Test Engineering" course.  The focus of the course is on applying testing strategies, tools, and techniques to solve complex testing problems.  In the Fall of 2003, the students in the course had varying backgrounds.  Some students fulfilled the prerequisite by taking a software engineering course at another university and may have had little exposure to software testing concepts.  On the other hand, 4 of the 22 students in the course had industrial software testing experience.  While not intended as a course for students with no exposure to testing concepts, the course did cover terminology and many introductory topics.  The level of coverage in many areas was insufficient, especially for students with little experience with software systems.  An outline of the topics covered in the course can be found in Appendix A.

In this paper, we will discuss the two learn by doing testing projects that were assigned during the course.  The assignments were designed to fit the "Developer Tester vs. Independent Tester" model [3].  As a software engineer, one may be called upon to develop/write/create software.  In these situations, an engineer must take on the role of "developer tester."  In this role, the engineer contributes to the quality of their own software by thoroughly testing it at the unit-level and/or integration-level.  The first project given to the students was a unit testing project that incorporated the Test-Driven Development (TDD) process [4].

As a software engineer, one may also be called upon to test software developed by others.  In these situations, an engineer must take on the role of "independent tester," that is, independent (organizationally or otherwise) of the process that created the software.  In this role, the engineer contributes to the quality of the delivered product by testing it thoroughly at the upper levels of testing (e.g., feature, system, or acceptance level of testing).  The second project given to the students was a system testing exercise in which they were assigned the role of independent tester for an ongoing academic software development project.

In section 2 of the paper, we discuss the unit testing exercise using TDD.  In section 3 of the paper, we discuss the system testing exercise.  Section 4 presents our conclusion and future work.

2       Unit Testing Exercise

There exists a long history and strong theoretical basis for unit testing.  However, in many instances, theory has contributed little to the practice of unit testing.  In our experience, and in the experience of others [5], regardless of what the software development process dictates, many developers test their own code by "poking around," meaning that they try a few things and then release the code to the build.  Craig and Jaskiel state that few companies train developers to do testing, and few companies invest in the code coverage tools essential to the theoretical approach to unit testing [6].

With the relatively recent development of Agile Methods [7, 8], another approach to testing at the unit, or class, level was created.  TDD is a software development process with an integral testing strategy.  Under TDD, test cases are written before code is produced, and all test cases must be automated.  A tool referred to as a unit testing framework is used to automate the test cases [9].

While it is still too early to know the impact of TDD on software development and testing, many reports indicate that software developers will actually use the process (traditional unit testing strategies, for various reasons, often go unused).  The testing community has pointed out that TDD is not "real" testing; however, as is often the case, the best tool is the tool that gets used.

TDD was incorporated into the course largely because we think it has merit.  While it is likely that the TDD process will evolve over time, automating unit test cases using a unit testing framework seems appealing to developers and appears to be a significant "next step" in improving the practice of unit testing.  Including TDD in the course appears to be a good way to expose students to a popular current practice, as well as to highlight the difference between TDD and traditional unit testing.

After covering material on TDD and unit testing frameworks, the students were given an assignment to implement an object-oriented (OO) class in C++.  The OO class assigned is an important part of the software system used later in the semester in the system testing exercise.  The teaching objectives for the unit testing exercise are listed in Table 1.  A description of the unit testing assignment can be found in Appendix B.

Table 1.  Teaching Objectives for the Unit Testing Assignment


Teaching Objectives


Students shall be able to develop an object-oriented class to specification using the TDD process.


Students shall be able to use a unit testing framework (CppUnit) to create and execute the automated test cases required by the TDD process.


Students shall be able to describe white-box and black-box unit testing techniques.


Students shall be able to explain the difference between "write all tests first", TDD, and "test later" unit testing approaches.


Students shall be able to explain why TDD is not "testing" in the traditional sense of the word.


Students shall be able to explain the process of "structured unit testing" [10].


Immediately after completing the exercise, the students were given a survey on their experiences with TDD.  Some of the results of this survey can be found in Appendix E.  In general, most students believed that compared to their "usual" development process, TDD produced higher quality code, but that it took longer to complete.  It is also interesting to note that 58% of the students did not believe their code to be adequately tested using the TDD process.

As part of the assessment of the students' unit testing assignments, we measured the statement coverage of the OO class when executing their own test suites.  We also developed our own test suite of 30 test cases based on the functional requirements of the OO class and executed it on each student's OO class.  The results are listed in Table 2.  The statement coverage achieved by most students is in the range expected for industrial software.  One might not expect to exceed 85% statement coverage for a variety of reasons, including internal error conditions that are difficult to produce.  The coverage tools used were not capable of producing other coverage measures (e.g., branch, decision, path, dataflow).  The more interesting number is the test pass rate on the functional test suite.  At an average of 70%, this is well below what we would have expected, given that the code had already been tested under TDD.  Contributing factors to this low pass rate are the complexity of the functional requirements of the OO class, the inexperience of some graduate students with OO programming, and the inexperience of the students in using TDD.  Clearly, more study is needed to quantify the difference in fault detection capability between TDD and traditional unit testing.

Table 2.  Unit Testing Assignment Statistics


Statement coverage achieved by student's test suite | Pass rate on professor's test suite

Following the unit testing assignment, material on control flow graphs and traditional unit testing adequacy criteria was presented in lecture.  We feel that this ordering and approach to presenting the unit testing material was effective in emphasizing the differences between TDD and traditional unit testing.


3       System Testing Exercise


3.1.              Description

In the system testing exercise, teams of students from the CS790 Software Testing and Quality Assurance course are paired with teams of students from the CS536 Software Engineering (SE) course.

The students in the software engineering course are assigned to an SE team.  An SE team is 3-5 students working on a semester-long software project to design and implement a relatively complex industrial software application.  All SE teams implement the same software project.

The students in the software testing course are assigned to an Alpha team, so named because they perform "alpha" testing.  Alpha teams consist of 3-4 students that perform "friendly," functional, system-level testing in the SE development environment. 

Each SE team is then paired with an Alpha team.  The SE teams use an incremental development process model.  They deliver 3-4 software increments to the Alpha teams for testing.  In the Fall of 2003 there were 7 pairs of teams.  The schedule of software deliveries can be found in Appendix D.  After the final increment, we conducted a final customer acceptance test on each project to evaluate the effectiveness of the SE and Alpha teams.

3.2.              History

A year earlier, in the Fall of 2002, a system testing exercise using pairs of SE and Alpha teams was conducted at UWM.  The exercise was the topic of a presentation at WTST 3.  The results of the system testing exercise in the Fall of 2002 were not encouraging.  After spending their time in the software testing course, the students did a substandard job of testing the software engineering projects.  The final customer acceptance (CA) test of the SE projects easily found numerous, serious software defects.  This result was puzzling and somewhat embarrassing.  During the presentation of results at WTST 3, many suggestions were made on how to improve the course and the education of software testers.  Many of these suggestions were implemented in the system testing exercise in the Fall of 2003.  The major changes included:

  • Reducing the complexity of the software application being developed.  In the previous system testing assignment, the application was extremely complex, resulting in largely unstable code from almost all SE teams.  Unstable code is very difficult to test and contributed to the poor performance of the Alpha teams.
  • Improving mentoring/management.  To improve performance, the SE and Alpha teams must be more closely managed and mentored.  Teams may make bad decisions early in the project and then have a difficult time recovering within the time constraints of the semester.  To improve this situation, status reports were more closely monitored and feedback was given when necessary.  Additionally, fewer lectures were given, allowing more open class sessions spent answering questions and working examples.
  • Reducing expectations.  How well students can test software, even after an advanced software testing course, remains an open question.  Some students simply do not have the temperament or desire to perform testing activities.  Some students simply do not have the time outside of class to contribute to the heavy workload the system testing exercise imposes.
  • Applying the system testing exercise in an advanced testing course.  The software engineering curriculum at UWM is in a state of transition.  The goal is to establish an introductory testing course at the undergraduate level and an advanced testing course at the graduate level.  The system testing exercise will be used at the graduate level.  When students have the correct prerequisites for the advanced course, the course content will also be upgraded to an advanced level, and less introductory material on testing will be presented.


3.3.              Teaching Objectives

The teaching objectives for the system testing exercise are found in Table 3.

Table 3.  Teaching Objectives for the System Testing Exercise


Teaching Objectives


Students shall be able to create and document a light-weight test plan (based on IEEE 829) from a relatively complex set of software specifications.


Students shall be able to use test objectives and inventories in the process of creating functional test cases from software specifications [6].


Students shall be able to conduct risk analysis to set testing priorities and constrain the number of test cases created.


Students shall be able to create an inventory tracking matrix to map functional test cases to test objectives and inventories.


Students shall be able to document functional test cases using a light-weight spreadsheet technique.


Students shall be able to discuss the context of the testing project [11].


Students shall be able to list the testing team's mission [11].


Students shall be able to conduct exploratory, scripted and regression testing.


Students shall be able to create clear, accurate, and effective bug reports using a defect tracking tool.


Students shall be able to report the results of testing sessions using test logs.


Students shall be able to demonstrate the use of test automation using scripting tools (Perl, Ruby) at the UI and API level.


Students shall be able to interact with teammates and software developers in a productive and professional manner.


3.4.              Software Application

The software application selected for the system testing exercise was the Data Management and Analysis System (DMAS).  DMAS is used to manage workflow and data, and to perform analysis, in analytical chemistry laboratories.  The application was originally written in FORTRAN in the late 1980s, when one of the authors was a programmer in the pharmaceutical industry.  The goals of the DMAS project are to:

1.       Implement a digital work list feature that allows the Lab Managers to assign tests to Lab Technicians, monitor status of tests, and review results of tests.

2.       Give Lab Technicians the capability to automatically produce a Product Sample Concentration Report (PSCR) from the data collected during product testing using High Pressure Liquid Chromatography (HPLC).

3.       Allow the Lab Manager to conduct statistical process control analysis using all, or a portion of, the historical data collected for any product under test.

4.       Provide data management to support the previous goals.

The DMAS requirements specification consists of 23 functional requirements, 3 non-functional requirements, and 14 use cases.

3.5.              Project Execution

The overall mission given to the Alpha test teams was to improve the quality of the DMAS product through defect detection.  Other important aspects of the mission included:

1)      Gaining a deep understanding of the DMAS specifications.

2)      Generating light-weight documentation for test plans and test cases.

3)      Finding important bugs early in each test cycle.

4)      Writing clear, accurate and effective bug reports.

5)      Reporting product status (functionality working/not working) using test logs.

6)      Gaining an understanding of the SE team's software capabilities and testing accordingly.

7)      Acting in a professional, respectful manner to avoid demoralizing the SE team.

The SE and Alpha teams met for the first time at a review of the DMAS requirements document in week 7 of the 16-week course.  The Alpha teams were given lectures and a quiz on DMAS functionality in preparation for testing.

Testing began in week 9 with increment 1.  The functionality delivered was not overly complex and most Alpha test teams applied exploratory testing exclusively.  At the completion of testing on increment 1, the Alpha teams received feedback from the SE teams on their work.  The main areas of concern included:

1)      A lack of coordination among test team members, resulting in very similar/duplicate bug reports.

2)      A lack of sufficient detail in the bug reports to reproduce the problem.

3)      A lack of understanding of the DMAS specifications.

We were pleased with the status of the project at this point.  A great deal of testing had been conducted and many bug reports had been generated.  The number of concerns expressed by the SE teams was small, and the concerns were easy to address and correct moving forward.  The concerns did, however, cause some confusion among the Alpha teams.  This gave us a chance to discuss the project context and to make decisions about how to conduct testing based on that context.

The general project context was described as:

·        Academic environment, pseudo-project;

·        Inexperienced (as a team and with the software engineering process), part-time development team;

·        Inexperienced (as a team and with the testing process), part-time independent test team;

·        Relatively large and complex project;

·        Extremely tight schedule;

·        Incremental development model;

·        Development environment void of modern software tools.

Additionally, each Alpha team considered its own lower-level context, which might include: the size of the development team (3-5); the experience levels of the developers; and the level of motivation and/or enthusiasm of the developers, among other things.

The project context prompted important discussions on how to approach testing under an incremental development model, and how to test to the maturity of the software.

The second software delivery, increment 2, included the Product Sample Concentration Report (PSCR).  The PSCR is a complex report that includes a regression analysis and a statistical outlier test.  The PSCR was also given a high risk assessment, as data from the report is given to the U.S. FDA as evidence of the shelf-life stability of pharmaceutical products (in the original project, in any case).  This report requires that testers use techniques other than exploratory testing.  Testers must understand the specification of the algorithms used, construct complex data sets to test those algorithms, and create data files (1-255 in number) in order to set up and execute those tests.

No Alpha test team reported on the functionality of the PSCR in increment 2.  This represented a failure to achieve the testers' mission.  Time pressures around the Thanksgiving holiday played a part in this failure, as did the fact that most Alpha teams continued exploratory testing in increment 2.  The state of the software was such that bugs, especially user-interface bugs, were still easy to find.  Based on previous experience, as well as known trends, this behavior was not totally unexpected.  These trends can be described as:

·        Test what you know - testers get familiar with certain parts of the system and tend to test those parts more frequently and thoroughly, rather than attack new complex functionality.

·        Hard work equals progress - testers are executing tests, finding bugs, and writing good bug reports.  This gives them a sense of satisfaction, accomplishment, and progress.  Unfortunately, a higher priority area is being neglected.

In 2002, during the system testing exercise, we did not catch this trend until much too late in the project.  In 2003, we were able to notice the trend earlier and intervene in a productive manner.  A detailed discussion of the algorithms and the use of Excel spreadsheets as partial oracles helped students formulate effective test cases.  In the final incremental delivery, which included another complex algorithm, most Alpha teams were prepared and could effectively test the process control features.

The project wrapped up in the 16th week of the course with the Alpha testers performing their final testing.  In some cases, bug fixes were allowed after the "official" code freeze for simple regression faults and blocking faults that appeared to have simple solutions. 

After the final Alpha team testing, we executed a final Customer Acceptance (CA) test suite on the projects that were stable.  The Alpha test teams were made well aware, far in advance, that we would conduct a final CA test.  The implied threat was that the CA results would be used in assessing the Alpha teams' performance.

3.6.              Results

Table 4 lists statistics from the final evaluation of the 7 DMAS projects.

Table 4.  Test statistics for DMAS projects

Team Number | Number of documented Alpha team test cases | Defects reported by Alpha team | Additional defects found in CA | Total defects | Defect Removal Efficiency

Team 1 | Not evaluated

Team 2 |

Team 3 |

Team 4 |

Team 5 |

Team 6 |

Team 7 |

In the final increment, the SE teams delivered between 6 and 8 KNCSL (thousand non-comment source lines).  The approximate project-wide defect density was 0.8 faults/100 NCSL.  However, early in the project, the defect density is thought to have been significantly higher.

When this system testing exercise was run in the Fall of 2002, the highest defect removal efficiency (DRE) recorded was 72%; the other systems that were stable enough to be measured had DREs below 50%.

The DREs are not guaranteed to be completely accurate.  Team 1 was not evaluated because it was the strongest Alpha team with the most comprehensive testware; a portion of its testware was used to evaluate the other projects.  The process of evaluating the projects at the end of the semester is extremely expensive, and shortcuts must be taken.

The real problem with the recorded DREs is that they have not been adjusted for the severity/priority of the faults discovered in CA.  It is our opinion that the testers in the Fall of 2003 did a significantly better job of testing than the testers in the Fall of 2002.  We summarize it as follows:

·        In the Fall of 2002, seemingly productive Alpha test teams avoided testing complex and unfamiliar parts of the system.  The final CA test easily detected many software faults in the high-priority features of the system.

·        In the Fall of 2003, most Alpha test teams eventually tested the complex, high-priority features of the system.  In doing so, they almost universally found bugs in these features and reported them accurately enough to get them fixed.  In the final CA test, we found that many SE teams had produced accurate high-priority features such as the PSCR and statistical process control reports, largely due to the Alpha teams' efforts.  

Survey results appear to confirm this conclusion.  Appendix F contains survey results from the students on the Alpha test teams at the completion of the project, before an evaluation of their testing was given.  Note that on a scale of 0-5, with 5 being a full contribution, only 23% of the students in 2002 rated their efforts at improving product quality as a 5.  In 2003, 66% of the students rated their quality improvement efforts as a 5, and 85% rated their efforts as a 4 or a 5.

We attribute the improvement in testing to several factors, including more stable software and better project management.  However, we also note that, in our opinion, students who can recognize the Test what you know and Hard work equals progress trends in their own behavior and act to correct them are better testers than students who cannot.

3.7.              Evaluation

The question of whether the technique of testing "live" student projects is an effective way to teach software testing remains open.  We list what we believe to be the strengths and weaknesses of the approach. 

Strengths of the technique include:

·        The technique is very engaging for students (at least for extroverts and active learners).  Some students respond well because something is at stake.  There is a customer for their work.  They interact with the customer directly.  Students strive to learn more about testing because they are trying to contribute to a project, not just complete another academic exercise.

·        The high target environment (i.e., bugs easily found) gives students many chances to practice and improve at writing bug reports.  Feedback on bug reports is immediate in most cases.

·        Abstract concepts such as "context" and "mission" are made concrete.

·        A complex testing project can give students insights into the testing process that are not visible in drill-and-practice approaches.  For example, the project allows students to personally assess which testing techniques are useful, scalable, and effective.

Weaknesses of the technique include:

·        It easily overwhelms and confuses students without some software project experience.

·        It is a high risk strategy; it easily degrades into chaos.

·        The overhead in setup, coordination, and management is excessive.

·        The high target environment does not prepare students for testing in cleaner environments.

·        Assessment of students' performance is difficult and infrequent.

3.8.              Probable Improvements

Each execution of the system test exercise offers new chances to improve the course.  In future exercises we are considering the following improvements:

1)      Add reviews.  Reviewing the students' testware seems an obvious opportunity to assess their work and provide feedback and direction.  The overhead of the reviews may be a problem.

2)      Drop the course textbook.  While it is clearly an important, relevant, and useful book on software testing, the course would gain many degrees of freedom if the book were discontinued.  Forcing the students to buy the book forces the instructor to cover its material.  While much of that material should be covered, we feel that many other testing ideas, some more suitable to the system testing exercise, need to be covered.

3)      Add bug catalogs [12].  Studying bug catalogs will give students more and better testing ideas.

4)      Improve status reporting.  We are not giving students adequate training in reporting the status of testing.  Additional reporting requirements need to be added to the system testing exercise.

5)      Teach an incremental testing strategy.  The systematic testing strategy is valid in many testing contexts, but teaching an appropriate incremental strategy would likely improve the testing exercise.

6)      Use multiple professors.  Now that we have a better understanding of how essential management and mentoring are to a successful exercise, it is apparent that having one professor manage multiple SE teams and multiple Alpha teams is inadequate.  Ideally, at least two professors would be involved.  Exactly how to manage the system testing exercise remains unclear.  Learning from mistakes is a powerful teaching technique; it was essential to the success of the latest system testing exercise.  Ironically, if the testing project had been better managed, some of the problematic issues would never have been encountered, and the students would have missed an important learning opportunity.

3.9.              The Test Design Problem

We believe a significant problem exists in the test design process used in the system testing exercise.  The problem that both the SE and Alpha teams faced in the project was to take a complex system specification and decompose it functionally in order to understand it well enough to produce code or test cases.  The SE and Alpha teams were taught to approach the problem using two different techniques.

The software engineers were taught to approach a complex problem by establishing different levels of abstraction that allowed them to reason and make decisions about the system.  At a minimum, these levels include system-level design and program-level design.  Each level of abstraction is supported by a well-defined set of software development artifacts or models (e.g., UML use case diagrams, class diagrams, interaction diagrams, state charts).  A software development process described how to create the artifacts, how to use the artifacts to record design decisions, and how to progress to the next step in the process.  By using different levels of abstraction, each supported by appropriate design artifacts, the students took a complex problem and reduced it to "digestible" chunks that could then be coded.  The code was then integrated to produce, for the most part, workable systems that met the specifications.

The testers were given the same complex specifications and the task of producing an appropriate set of test cases.  They were taught to approach this complex problem using brainstorming and an outline (test objectives and inventories are nothing more than an outline).  The students did not perform this task well, largely, we believe, because the technique is inadequate.  The students who did come to understand the system struggled significantly to do so, and we believe that many of the students never understood the application thoroughly (as evidenced by continued complaints from the SE teams).  We are also not impressed by the test cases produced by this process.  They are separate, disjoint, low-level test cases that never rise above the level of a single functional requirement.  We believe that an inadequate test design process was a contributing factor to the ineffectiveness of some Alpha test teams.

4       Conclusion

We have presented our experiences in a software test engineering course with two learn by doing exercises.  In the first exercise, TDD was incorporated as an alternative to traditional unit testing.  We believe the exercise to be effective in exposing students to a new testing strategy, and in highlighting the difference between TDD and traditional unit testing.  It seems likely that TDD and unit testing frameworks will be moved into earlier courses, making it less likely that we will use this exercise at this level in the future.

In the system testing exercise, we paired students in the testing course with students in an ongoing software engineering course.  Both our impressions and survey results indicate that this exercise improved on our previous attempt.  We attribute the improvement to controlling the complexity of the software application being tested and to better management and mentoring.  We also learned important lessons about how the behavior of testers can affect the testing process.  The learn by doing approach proved popular with students, but the setup, coordination, and management of multiple projects is time-consuming for the instructor.

Future work includes improving the way we teach the test design process.  The technique of using brainstorming and outlines has been around for many years [13].  As software has become more complex and our understanding of modeling has improved, it seems appropriate that we look for more powerful test design techniques.  Probable improvements to the test design process will come from the use of model-based testing [14-17], scenario-based testing [18], and testing from use cases [19, 20].
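To make the model-based direction concrete, the sketch below drives an implementation with a random walk over a small finite-state model, in the spirit of the finite-state approaches in [15, 16], checking behavior against the model at every step.  The login model is invented for illustration and is not part of DMAS.

```cpp
#include <cstdlib>

// A toy finite-state model, NOT taken from DMAS: states and actions
// of a hypothetical login session, invented for illustration.
enum State { LoggedOut, LoggedIn };
enum Action { Login, Logout, ViewReport };

// The model: expected next state for each (state, action) pair;
// -1 marks an action that is not allowed in that state.
int modelNext(State s, Action a) {
    if (s == LoggedOut)  return (a == Login) ? LoggedIn : -1;
    if (a == Logout)     return LoggedOut;        // s == LoggedIn
    if (a == ViewReport) return LoggedIn;
    return -1;
}

// A trivial implementation under test (here it agrees with the model).
struct Session {
    bool loggedIn = false;
    bool apply(Action a) {
        if (a == Login  && !loggedIn) { loggedIn = true;  return true; }
        if (a == Logout &&  loggedIn) { loggedIn = false; return true; }
        if (a == ViewReport && loggedIn) return true;
        return false;  // action rejected in this state
    }
};

// Random walk over the model: pick an action, apply it to the
// implementation, and check acceptance and state against the model.
bool randomWalk(int steps, unsigned seed) {
    std::srand(seed);
    State s = LoggedOut;
    Session impl;
    for (int i = 0; i < steps; ++i) {
        Action a = static_cast<Action>(std::rand() % 3);
        int next = modelNext(s, a);
        if (impl.apply(a) != (next != -1)) return false;     // enabledness differs
        if (next != -1) s = static_cast<State>(next);
        if ((s == LoggedIn) != impl.loggedIn) return false;  // states differ
    }
    return true;
}
```

A walk of a few hundred steps exercises far more action sequences than a hand-written list of disjoint test cases, which is precisely the limitation of the outline technique noted above.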


5       References


[1]        J. A. Whittaker, "What is software testing? And why is it so hard?," IEEE Software, vol. 17, no. 1, pp. 70-79, Jan./Feb. 2000.

[2]        L. Jerinic and V. Devedzic, "The Friendly Intelligent Tutoring Environment," ACM SIGCHI Bulletin, vol. 32, no. 1, pp. 83-94, 2000.

[3]        B. Marick, The Craft of Software Testing. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

[4]        K. Beck, Test-Driven Development: By Example. Boston, MA: Addison-Wesley, 2003.

[5]        J. Bach, "Good Practice Hunting," accessed 01/26/2004.

[6]        R. D. Craig and S. P. Jaskiel, Systematic Software Testing. Boston: Artech House, 2002.

[7]        A. Cockburn, Agile Software Development. Boston, MA: Addison-Wesley, 2001.

[8]        K. Beck, Extreme Programming Explained: Embrace Change. Boston, MA: Addison-Wesley, 2000.

[9]        L. Crispin and T. House, Testing Extreme Programming. Boston, MA: Addison-Wesley, 2003.

[10]      G. J. Myers, The Art of Software Testing. New York: Wiley, 1979.

[11]      C. Kaner, J. Bach, and B. Pettichord, Lessons Learned in Software Testing: A Context-Driven Approach. New York: John Wiley & Sons, Inc., 2002.

[12]      G. Vijayaraghavan and C. Kaner, "Bug taxonomies: Use them to generate better tests," accessed 1/10/2004.

[13]      B. Hetzel, The Complete Guide to Software Testing, 2nd ed. Wellesley, MA: QED Information Sciences, Inc., 1988.

[14]      I. K. El-Far and J. A. Whittaker, "Model-Based Software Testing," in Encyclopedia of Software Engineering, J. J. Marciniak, Ed., 2nd ed: Wiley, 2001.

[15]      H. Robinson, "Finite State Model-Based Testing On a Shoestring," presented at STAR West, 1999.

[16]      H. Robinson, "Model-based Testing Home Page," accessed 3-2-2003.

[17]      P. J. Schroeder, E. Kim, J. Arshem, and P. Bolaki, "Combining Behavior and Data Modeling in Automated Test Case Generation," in Proc. of the 3rd Intl. Conf. on Quality Software, Dallas, TX, 2003, pp. 247-254.

[18]      C. Kaner, "An Introduction to Scenario Testing," accessed 1/10/2004.

[19]      I. Jacobson, M. Christerson, P. Jonsson, and G. Övergaard, Object-Oriented Software Engineering. New York: Addison-Wesley, 1992.

[20]      R. V. Binder, Testing Object-oriented Systems. Reading, MA: Addison-Wesley, 2000.



Appendix A: Course Information


Required Textbook

Systematic Software Testing (Artech House Computer Library)
by Rick D. Craig, Stefan P. Jaskiel
Hardcover: 536 pages
Publisher: Artech House; ISBN: 1580535089; (May 2002)

Course Outline


CS790 Software Testing and Quality Assurance Fall 2003

1)      Introduction (Week 1)

a)      Need for Testing

b)      Definitions of Testing

c)      Testing Terminology

d)      Levels Of Testing

e)      Exhaustive Testing

f)       Testing Paradigms

g)      Testing Context

h)      Overview of SQA

2)       Systematic Test & Evaluation Process (STEP)

a)      Test Process Lifecycle

b)      Prevention vs. Detection

c)      STEP Model

i)        Plan Strategy

ii)       Acquire Testware

iii)     Measure Behavior

d)      Test Plan Template (based on IEEE Std. 829)

e)      Roles in Testing

3)      Risk Analysis (Week 2)

a)      Definition of Risk

b)      Contributors to Risk

c)      Likelihood vs. Impact

d)      Key Activities

i)        Software Risk Analysis

ii)       Planning Risks and Contingencies

4)      Master Test Planning (Week 3)

a)      Levels of Test Planning

b)      Goals of Test Planning

c)      Contents of Test Plans

5)      Software Quality Assurance (Week 4)

a)      Requirements / Use Cases

b)      Reviews, Inspections, and Walkthroughs

c)      Software Configuration Management

d)      Process Definition, Improvement and Auditing

6)      eXtreme Programming (XP) & Test Driven Design (TDD) (Week 5, 6)

a)      Overview of XP

b)      TDD Development Cycle

c)      Unit Testing Frameworks

d)      TDD Process & Refactoring

7)      Test Analysis and Design (Week 7-9)

a)      Test Objectives and Inventories

b)      Tracking Matrices

c)      White-box Testing

i)        Control Flow Graph

ii)       Adequacy Criteria

(1)    Statement

(2)    Branch

(3)    Cyclomatic Number/ Basis Path

(4)    Data Flow

(5)    Mutation

d)      Black-box Testing

i)        Equivalence Partitioning & Boundary Value Analysis

ii)       Combinatorial Testing (n-way)

iii)     Model-based Testing

iv)     Random Testing

v)      Exploratory Testing

8)      Test Implementation (Week 10,11)

a)      Acquiring Test Data

b)      Documenting Test Procedures

c)      Test Environment

9)      Test Execution

a)      Execution Order

b)      Test Logs

c)      Bug Reporting

i)        Writing Effective Defect Reports

ii)       Defect Tracking Tool

d)      When to Stop?

i)        Software Reliability Engineering

ii)       Defect Trend Analysis

e)      Measuring Testing Effectiveness

10)   Test Automation (Weeks 12, 13)

a)      What Should Be Automated?

b)      Testing Tools & Traps

c)      Test Automation Techniques

i)        Capture/Replay

ii)       API Testing

iii)     Scripting Languages

iv)     Test Case and Test Data Generation

11)   Class Presentations / Special Topics (Weeks 14 - 16)


Course Projects and Grading


 Project 1 (Individual)

 TDD Exercise

15 points

 Project 2 (Team)

 System Test Planning and Execution

20 points

 Paper / Presentation (Individual)

 Software Testing / QA Topic

20 points

 Quizzes (6)


25 points

 Final Exam


20 points


Appendix B: Unit Testing Exercise


Homework #1 Part #1
TDD DMAS Sequence Class
Assigned: 10/09/2003
Due: 10/23/2003 by start of class
Accepted until 10/30/2003 - 20% penalty for late work

Assignment Part #1


Using TDD and CppUnit, develop a Sequence class according to the following DMAS (Data Management and Analysis System) specifications.


The Sequence class encapsulates the behavior related to the processing of the LAS sequence file for DMAS.  In this assignment you will implement some of the functionality required of the DMAS Sequence class using TDD.


A DMAS SRS (Software Requirements Specification) and a DMAS Technical Specification containing a description of the LAS sequence file can be found at:


Sequence Class Functionality


The Sequence class must provide the following services to the other DMAS classes.


1) Open, Read, and Validate the LAS sequence file


int ORVSeqFile(CString seqFileName);


The Sequence class must provide the public method ORVSeqFile that takes a CString containing an LAS sequence file name as a formal parameter, opens the file, reads its contents into memory, and validates the contents of the file.


For example:


Sequence *mySeq;

mySeq = new Sequence;

int nRet = mySeq->ORVSeqFile((CString)"LASData\\Cef4.seq");


If the file is a valid LAS sequence file, the method must store information from the file in memory.  This information will be accessed later by other methods in the Sequence class. 


This method must return the following values:

0 = LAS Sequence file OK

1 = File not found; cannot open

2 = Invalid file, IDENTIFICATION section not found

3 = Invalid file, PRODINFORMATION section not found

4 = Invalid file, PEAKINFORMATION section not found

5 = Invalid file, SAMPLEDATA section not found


2) The Sequence class must include accessors with the following signatures:


int GetLTID();          // gets LabTech ID from the seq file

CString GetProdID();    // gets Product ID from seq file

CString GetLTName();    // gets LabTech name from the seq file

CString GetTestID();    // gets TestID from the seq file


// Gets the list of peak names from the seq file

void GetPeakNames(CList<CString,CString&>& mPeakNameList);


3)  The Sequence class must provide a method to support the processing of brackets found in the LAS sequence file.


bool GetBracket(bool bBracketYN, int& nCurrSTD, int nEndSTD, CList<int,int>& listSTD, CList<int, int>& listProduct);


The method must return TRUE if a valid bracket has been found, or FALSE if no bracket has been found.


Method parameters:


bool bBracketYN: formal parameter indicating whether the DMAS user has selected the bracketing option (TRUE=yes) or not (FALSE=no). 


int& nCurrSTD: Initially, this integer indicates where in the sample data the DMAS user wishes to begin processing (the caller sets this value to the StartSample selected by the DMAS user).  After finding a valid bracket, the GetBracket method sets nCurrSTD to the sample data line number of the first STD for the next bracket to be processed.  It does this so that subsequent calls to the method correctly find the next bracket to be processed.


int nEndSTD: formal parameter indicating the last sample that DMAS should use in processing.  The caller of the method sets this value to the StopSample selected by the DMAS user.


CList<int,int>& listSTD: the caller of the method provides a reference to a list.  The method puts the sample numbers of all standard samples in this bracket into the list.


CList<int, int>& listProduct: the caller of the method provides a reference to a list.  The method puts the sample numbers of all product samples in this bracket into the list.


Intended usage:

Method ORVSeqFile() must be called first to open the LAS sequence file and set the member data structures.  Method GetBracket() can then be called to identify each bracket in the range of samples specified by the DMAS user.


CList<int,int> listSTD;       // list to hold STD information

CList<int,int> listProduct;   // list to hold product information

int nCurrSTD = 1;             // set required start sample

int nEndSTD = 255;            // set required stop sample

bool bBracketYN = true;       // set bracketing to true=yes/false=no



// This loop processes each bracket found in the sample range

while (mySeq->GetBracket(bBracketYN, nCurrSTD, nEndSTD,
                         listSTD, listProduct))
{
    // perform outlier, linear regression, etc. for each bracket
    // (all accomplished by other DMAS classes)
}








Notes:

1) The Sequence class does not access or process LAS raw files (in this assignment).

2) The Sequence class does not validate brackets.  For example, method GetBracket() does not check to see if there are between 3 and 25 STDs in a bracket.

3) When invoked with bBracketYN set to FALSE (i.e., the DMAS user did not select the bracketing option), method GetBracket() returns a list of all STDs and all product samples in the sample range on the first call.  On the second call of GetBracket(), the method should return FALSE, indicating that no other brackets are found.
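Point 3 can be pinned down with a small test.  The sketch below models only this non-bracketing path, using std::list in place of CList and a vector of flags in place of the parsed sequence file; both substitutions are assumptions of this sketch, not part of the assignment.

```cpp
#include <list>
#include <vector>

// Sketch of note 3 only: with the bracketing option off, the first
// call returns every STD and product sample in the requested range,
// and the second call returns false (no further "brackets").
struct SeqSketch {
    std::vector<bool> isStd;   // true = STD, false = product; index 0 holds sample 1
    bool firstCall = true;

    bool GetBracketNoBracketing(int startSample, int endSample,
                                std::list<int>& listSTD,
                                std::list<int>& listProduct) {
        if (!firstCall) return false;   // second call: nothing more to return
        firstCall = false;
        for (int n = startSample; n <= endSample; ++n) {
            if (isStd[n - 1]) listSTD.push_back(n);
            else              listProduct.push_back(n);
        }
        return true;
    }
};
```

Writing this behavior down as an executable check, rather than prose alone, is exactly the kind of test a TDD-trained student would produce before implementing GetBracket().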



CppUnit Setup:

1) Get a zip of cppunit-1.9.11 from the course web site.

2) Install / unzip on your machine - Must be at least MS VS 6.0.

3) Open project   ..\cppunit-1.9.11\examples\examples.dsw

4) Make sure HostApp is the current project.

5) Sequence is the DMAS Sequence class

6) SequenceTest is the class used to hold your test cases for the Sequence class.

7) Directories

The Sequence class and SequenceTest class can be found in:





..\cppunit-1.9.11\examples\msvc6\HostApp\LasData is a directory that holds your test data sequences files.

Appendix C: System Testing Project


Software Testing and Quality Assurance
DMAS System Testing Project
Prof. Schroeder
Fall 2003


In this project you will create a system-level test plan, write system-level tests to support that plan, execute those tests, and report defects discovered during test execution.  The DMAS software will be delivered to you in increments by your assigned software engineering team.  Your responsibilities include new-feature testing, bug-fix testing, and regression testing.

To execute this project you must:

1) Create a test plan according to the specification below.

2) Create and execute manual test cases; create and execute automated test cases at the UI and API.

3) Execute test cases on each increment of DMAS that is delivered.

4) Create a test log (see Appendix B, below) for each and every test session you execute.  At the end of the session, email the test log to and to your software engineering team.

5) Enter bug reports for any defects you discover in testing using the Mantis bug tracker (

At the end of the project, you must turn in your test plan and all test information and demo your test automation for Prof. Schroeder.  Note that part of your grade for this project will be determined by the software engineering team you are working with.


Important Dates: 

Incremental testing:       10/28 - 12/03

Code Freeze:                 12/04

Final Testing:                 12/04 - 12/11

Automation Demos:        12/04 - 12/11

Final Test Plan Due       12/11 hardcopy due at start of class (7:00 PM)

Late Work Accepted until 12/16 at 12:00 PM (20% penalty for late work).


DMAS System Test Plan Template

(based on ANSI/IEEE Standard 829-1983)  Craig & Jaskiel p. 61
Software Testing and Quality Assurance

v0.00  10/28/2003 pjs Initial version.


ANSI/IEEE Standard 829-1983 describes a test plan as:

“A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.”


This is a simplified test plan template.  Your final test plan should contain the following sections:

1. Test Plan Identifier:

- A unique identifier and version number.

2. Introduction

- Overview of system to be tested

- Scope of the test plan

- References to related documents (description of files, algorithms, reports, etc.)

3. Test Items

- Test items and their version (code, documentation, specifications)

- Items which are specifically not going to be tested (optional)

4. Approach

- Overall approach to testing

- Specify major activities, techniques, and tools used to conduct testing

- Specify a minimum degree of comprehensiveness required

- Specify techniques which are to be used to trace requirements

5. Software Risk Issues

- Software risk analysis


Appendix A.  Test Package

1)      Inventory Tracking Matrix: Map test cases to test objectives and inventory items.

2)      Simple Test Case Specification: For all but very complex tests, specify test cases in spreadsheet format (Craig and Jaskiel, p. 191).  Include: test case ID, test notes, input test data, and expected results.


Appendix B.  Test Logs

The test log is a record of all tests executed on the product during a testing session.  Test logs include the time, date, test ID executed, version of the software, and pass/fail status.  Attach a copy of all test logs in this section.
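As one possible concrete layout for such a record (our own choice; the plan template does not prescribe a format), each execution can be serialized as one CSV line:

```cpp
#include <sstream>
#include <string>

// One possible record layout for a test-log entry; the fields follow
// the plan text, but the CSV serialization is an assumption here.
struct TestLogEntry {
    std::string date;      // e.g. "2003-11-12"
    std::string time;      // e.g. "19:30"
    std::string testId;    // ID of the test case executed
    std::string version;   // version of the software under test
    bool passed;           // pass/fail status of this execution

    std::string toCsv() const {
        std::ostringstream out;
        out << date << ',' << time << ',' << testId << ','
            << version << ',' << (passed ? "PASS" : "FAIL");
        return out.str();
    }
};
```

A fixed record format like this makes it easy to collect logs from every team and summarize pass/fail trends across increments.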


Appendix C.  Test Incident Reports and Resolution

A summary report listing all bug reports you generated during the project goes here.  This can be generated from the Mantis system. 

Appendix D: Software Delivery Schedule


CS536-2 Introduction to Software Engineering
DMAS Project Schedule with Feature Content

Revision History

V0.00   10/07/2003        pjs        Initial release

V0.01   10/30/2003        pjs        Removed feature for SPC write-to-file format; all SPC reports will go directly to display.



Acronyms

Alpha Test Team - Students in CS790 assigned to test the DMAS project.

CRUD - Create, Read, Update, Delete

DWL - Digital Work List

LCL - Lower Control Limit

LM - Lab Manager

LT - Lab Tech

PSCR - Product Sample Concentration Report

SPC - Statistical Process Control

UCL - Upper Control Limit

UI - User Interface

WLI - Work List Item


DMAS Project Schedule  


10/14    Software configuration management (RCS) in place

10/17    Requirements document review with Alpha test team complete


10/09 – 10/27 Development Increment 1

Feature Content:

LM00.00 LM logon


LM02.01 DWL link to display existing display format PSCR (LM and LT)

LT00.00 LT logon

LT01.00 LT RU (read update) WLIs in the DWL


10/28 – 11/10 Development Increment 2

Feature Content:

LM02.02 DWL link to display PSCR in write-to-file format (LM and LT)

LM03.00 SPC UI, product ID and component selection with write to display

LM04.00 SPC UI, all selection options with write to display

LT02.00 Post result to ResultDB from PSCR write-to-file format

LT03.00 PSCR UI with write-to-file

LT04.00 PSCR UI with write-to-file and outlier test

LT06.00 PSCR pretty print from write-to-file format

MU01.00 Multi-user access to the DWL


11/11 – 11/24 Development Increment 3

Feature Content:

LM05.00 SPC UI, all selection options, and SPC chart tests with write-to-file format

LM07.00 SPC analysis with exclude/un-exclude data points

LM08.00 LM retires product, and/or batch, for all components.

LT05.00 PSCR UI with write-to-file, outlier test, and bracketing of standards.

MU01.00 Multi-user access to the DWL

MU02.00 Multi-user access to the ResultsDB


11/25 - 12/04 Fix Package

12/04 Code freeze

12/04 -12/11 Final Alpha team testing, final customer acceptance testing


Appendix E: Test Driven Design Survey



Appendix F: System Testing Project Survey