Software components have defects, no matter how well our defect prevention activities are implemented. Developers cannot prevent or eliminate all defects during development. Therefore, software must be tested before it is delivered to users. It is the responsibility of the testers to design tests that (i) reveal defects, and (ii) can be used to evaluate software performance, usability, and reliability. To achieve these goals, testers must select
a finite number of test cases, often from a very large execution domain.
Unfortunately, testing is usually performed under budget and time constraints. Testers are often subjected to enormous pressure from management and marketing because testing is not well planned and expectations are unrealistic. The smart tester must plan for testing, select the test cases, and monitor the process to ensure that the resources and time allocated for the job are used effectively. These are formidable tasks, and to carry them out effectively testers need proper education and training, as well as the ability to enlist management support.
Novice testers, taking their responsibilities seriously, might try to test a module or component using all possible inputs and exercising all possible software structures. This approach, they reason, will enable them to detect all defects. However, an informed and educated tester knows that this is not a realistic or economically feasible goal: even a small function with two 32-bit integer parameters has 2^64 possible input combinations, far more than could ever be executed. Another approach might be for the tester to select test inputs at random, hoping that these tests will reveal critical defects. Some testing experts believe that randomly generated test inputs have a poor performance record [1]. Others disagree, for example, Duran [2]. Additional discussions are found in Chen [3] and Gutjahr [4].
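To make the random approach concrete, the following is a minimal sketch in Python. It assumes a hypothetical function under test (here called my_sort) and a simple property-style oracle; both names are illustrative, and the sketch is only one way random test inputs might be generated and checked, not a prescribed method.

```python
import random

def my_sort(items):
    """Hypothetical function under test; stands in for any component."""
    return sorted(items)

def random_test_inputs(num_tests, max_length=20, value_range=(-1000, 1000)):
    """Generate randomly chosen test inputs: lists of random integers."""
    for _ in range(num_tests):
        length = random.randint(0, max_length)
        yield [random.randint(*value_range) for _ in range(length)]

def run_random_tests(num_tests=1000):
    """Run the function under test on random inputs and record oracle failures."""
    failures = []
    for test_input in random_test_inputs(num_tests):
        output = my_sort(test_input)
        # Oracle: the output must be ordered and be a permutation of the input.
        is_ordered = all(a <= b for a, b in zip(output, output[1:]))
        same_elements = sorted(test_input) == sorted(output)
        if not (is_ordered and same_elements):
            failures.append(test_input)
    return failures

if __name__ == "__main__":
    failing_inputs = run_random_tests()
    print(f"{len(failing_inputs)} failing inputs found")
```

In practice, the input distribution and the oracle are the hard parts of such an approach; the references above debate precisely how effective randomly generated inputs are at revealing defects compared with systematically selected test cases.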