
a. Identify the problem as a particular case of a problem whose solution is known.

b. Solve a specific instance of the problem in the hope that it will lead to the general solution. Solution of this new problem proceeds as above.

Computer programming is a form of problem solving. Its solution paradigm is called the development life cycle and is organized into stages. There are many life cycle charts in the literature [NBS76] [DOD77], but no one current chart is appropriate for all views of the development process. The one given in Figure 1 is representative. During the requirements stage, the problem to be solved is carefully defined. The design stage is when general solutions are hypothesized and data and process structures are organized. During construction the program modules are coded and debugged. The modules are then integrated and the interfaces debugged. Testing begins as the program is exercised over test data selected during this and earlier stages. The program is used and maintained during the final stages.

Some significant differences are apparent between the problem solution paradigm described in the first paragraphs and the development life cycle described above. The life cycle activity is expressed as a straight-line solution, whereas the previous paradigm emphasized iteration toward solution. Anyone who has ever programmed knows that iteration is essential. Iteration is the result of the verification process and error assessment. Unless the correct problem is stated and the correct solution achieved on the first try, modification and iteration occur.

Verification should accompany each stage of the development life cycle. If the verification process is isolated in a single stage, then problem statement errors or design errors discovered at that stage may exact an exorbitant price. Not only must the original error be corrected, but the structure built upon it must also be changed.

Viewing each development stage as a sub-problem leads to a more productive paradigm for program development. The amended life cycle chart in Table 1 presents the verification activities that accompany each development stage.

[Table 1: Amended development life cycle chart. For each stage (e.g., construction), the accompanying verification activities are: Determine Correctness and Consistency; Generate Structural and Functional Test Data.]

1. Determine the correctness and consistency of the structures produced at that stage, and

2. Generate test data based upon structures introduced at that stage.

For the design and construction stages it is also necessary to:

3. Determine that the structures are consistent with those at the previous stage, and

4. Refine and redefine test sets generated earlier.

Performing the above activities at each development stage should help to locate errors when they are introduced and will also partition the test set construction.

3. Testing

Great strides have been made toward the development of formal verification techniques. These techniques, based on ideas of formal semantics and proof techniques, are promising research avenues but are not easily applied without supporting tools (verifiers). Currently, automated verifiers are expensive, not widely available, and limited in application. For the single programmer, testing is the most easily applied verification technique. Testing is, as we will discuss, limited in its ability to demonstrate correctness: testing shows the presence of errors, and generally (excluding exhaustive testing) cannot demonstrate the absence of errors.

One view of a program is as a representation of a function taking elements from one set (called the domain) and transforming them into elements of another set (called the range). The testing process is then used to ensure that the implementation (the program) faithfully realizes the function. Since programs are frequently given inputs that are not in the domain, they should also act reasonably on such elements. Thus a program which realizes the function 1/x should not fail catastrophically when x=0, but instead should generate an error message. We call elements within the functional domain valid inputs and those outside the domain invalid inputs.
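The 1/x example can be made concrete with a minimal Python sketch (our illustration, not from the original text): the program checks for the invalid input x = 0 and reports an error rather than failing catastrophically.

```python
def reciprocal(x: float) -> float:
    """Realize the function f(x) = 1/x on its domain (x != 0)."""
    if x == 0:
        # x = 0 is an invalid input, outside the functional domain:
        # act reasonably (report an error) instead of failing catastrophically.
        raise ValueError("1/x is undefined for x = 0")
    return 1 / x
```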

The goal of testing is to reveal errors not removed during debugging. The testing process consists of obtaining a valid value from within the functional domain or an invalid value from outside it, determining the expected (correct) value, running the program on the given value, observing the program's behavior, and finally comparing that behavior with the expected (correct) behavior.

If the comparison is successful, the testing process has revealed no errors. If the comparison is unsuccessful, then the testing process has revealed errors.
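The steps above can be sketched as a small test harness (a hypothetical Python illustration; the names `run_test` and `square` are ours, not the paper's):

```python
def run_test(program, test_input, expected):
    """Apply one test: run the program on the given value, observe its
    behavior, and compare it with the expected (correct) behavior."""
    observed = program(test_input)   # run and observe
    return observed == expected      # compare

def square(x):
    """A trivial program under test."""
    return x * x

# A successful comparison reveals no errors for this input;
# an unsuccessful comparison reveals an error.
assert run_test(square, 3, 9)
```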

The key words in the paragraph above are expected (correct) behavior, observation, and comparison. An important phase of testing lies in planning how to apply test data, how to observe the results, and how to compare the results with desired behavior. Applying and observing tests are not always straightforward activities. Often extensive analysis is required to determine tests which adequately test design components, and often code must be instrumented to provide observation. Determining the desired (correct) behavior for comparison with observed results is very difficult. To test a program, we must have an "oracle" to provide the correct responses and to represent the desired behavior. This is a major role of a requirements specification. By providing a complete description of how the system is to respond to its environment, a good requirements specification may form a basis for constructing such an oracle. In the future, executable requirements languages may provide this capability directly, but currently they are still basic research topics; consequently, we must be content with more ad hoc techniques. Some typical ones include:

1. Intuition.

2. Hand calculation.

3. Simulation, both manual and automated.

4. An alternate solution to the same problem.
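Technique 4 can be illustrated by letting an independent implementation serve as the oracle. In this hypothetical sketch, Python's library `math.sqrt` plays the oracle for a square root computed by Newton's iteration:

```python
import math

def newton_sqrt(x: float) -> float:
    """Program under test: square root of a positive x by Newton's iteration."""
    guess = x if x > 1 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

# Oracle technique 4: an alternate solution to the same problem.
for value in [1.0, 2.0, 100.0, 12345.6]:
    assert abs(newton_sqrt(value) - math.sqrt(value)) < 1e-9
```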

Although we have been discussing testing through examples at the coding level, our intent is to be general enough so that the discussion of the validation methods applies to any stage in the program's life cycle. Testing in its narrowest definition is performed during the construction stage, and then later during operation and maintenance as revisions are made. Derivation of test data and test planning, however, are activities which should cover the entire life cycle.

If a broader meaning of testing is used, subsuming more of the verification process, then testing is an important activity in each life cycle stage. Simplified walkthroughs, code reading, and most forms of requirements and design analysis can be thought of as testing procedures. Each of these will be discussed in later sections.

The main problem with testing is that it reveals only the presence of errors. A complete validation of a program can be obtained only by testing for every element of the domain. Since this process is exhaustive, finding no errors guarantees the validity of the program; it is the only dynamic analysis technique with this guarantee. Unfortunately, it is not practical. Frequently program domains are infinite or so large as to make the testing of each element of the domain infeasible. There is also the problem of deriving, for an exhaustive input set, the expected (correct) responses. Such a task is at least as difficult as (and possibly equivalent to) writing the program itself. The goal of a testing methodology is to reduce the potentially infinite exhaustive testing process to a finite testing process. This is done by choosing representative elements to exercise features of the problem under solution or of the program written to solve the problem.
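For a tiny finite domain, exhaustive testing is in fact feasible. The sketch below (our example, not the paper's) fully validates a two-argument boolean function by testing all four elements of its domain:

```python
def implies(p: bool, q: bool) -> bool:
    """Logical implication p -> q."""
    return (not p) or q

# The domain of a 2-input boolean function has only 4 elements,
# so exhaustive testing against the expected responses is practical.
expected = {(False, False): True, (False, True): True,
            (True, False): False, (True, True): True}

for (p, q), want in expected.items():
    assert implies(p, q) == want   # every domain element is tested
```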

A subset of the domain used in a testing process is called a test data set. Thus the crux of the testing problem is finding an adequate test data set, one that covers the domain and yet is small enough to use. This activity must be planned and carried out in each life cycle stage. Sample criteria for the selection of test data for test sets include:

1. The test data should reflect special properties of the domain such as extremal or ordering properties or singularities.

2. The test data should reflect special properties of the function that the program is supposed to implement such as domain values leading to extremal function values.

3. The test data should "exercise" the program in a specific manner, e.g., causing all branches to be executed or all statements to be executed.
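Criterion 1 might be realized by a small generator of extremal and invalid values; the helper below is hypothetical and assumes the domain is a closed integer interval [lo, hi]:

```python
def boundary_test_data(lo: int, hi: int) -> list[int]:
    """Select test data reflecting extremal properties of the domain
    [lo, hi], plus invalid inputs just outside it."""
    return [lo, lo + 1, hi - 1, hi,   # extremal and near-extremal (valid)
            lo - 1, hi + 1]           # invalid inputs outside the domain

# For the domain [1, 100]:
assert boundary_test_data(1, 100) == [1, 2, 99, 100, 0, 101]
```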

The properties that the test data sets are to reflect are classified according to whether they depend upon the program's internal structure or the function the program is to perform. In the first two cases above, the test data reflect functional properties, and in the latter case structural properties. Structural testing helps to compensate for the inability to do exhaustive functional testing.

While criteria for a test set to be adequate in a structural sense are often simple to state (such as branch coverage), the satisfaction of those criteria can usually only be determined by measurement. Due to the lack of analytical methods for deriving test data to satisfy structural criteria, most structural test sets are obtained using heuristics.
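One such measurement can be sketched by instrumenting each branch by hand, running a candidate test set, and checking which branches were exercised (a minimal Python illustration; real coverage tools automate the instrumentation):

```python
covered = set()   # records which branches have been executed

def classify(x: int) -> str:
    """Program under test, instrumented to record branch execution."""
    if x < 0:
        covered.add("x < 0")
        return "negative"
    else:
        covered.add("x >= 0")
        return "non-negative"

classify(5)                       # this test set misses the x < 0 branch
assert covered == {"x >= 0"}

classify(-3)                      # adding -3 achieves branch coverage
assert covered == {"x < 0", "x >= 0"}
```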

For functional analysis techniques, the major current difficulties lie in the specification of such vague terms as extremal or exceptional value. Further, for some functional analysis techniques, it may not be possible to obtain a functional description from a requirement or specification statement. Thus once again a substantial amount of effort in
