
3.2 Special Issues Raised By The Standard Requirements

3.2.1 Implementation-defined Features

At several points in the standard, processors are given a choice about how to implement certain features. These subjects of choice are listed in Appendix C of the standard. In order to conform, implementations must be accompanied by documentation describing their treatment of these features (see section 1.4.2 (7) of the standard). Many of these choices, especially those concerning numeric precision, string and numeric overflow, and uninitialized variables, can have a marked effect on the result of executing even standard programs. A given program, for instance, might execute without exceptions on one standard implementation, and cause overflow on another, with a notably different numeric result. The programs that test features in these areas call for especially careful interpretation by the user.


Another class of implementation-defined features is that associated with language enhancements. If an implementation executes non-standard programs, it also must document the meaning it assigns to the non-standard constructions within them. For instance, if an implementation allows comparison of strings with a less-than operator, it must document its interpretation of this comparison.

3.2.2 Errors and Exceptions

The standard for BASIC, in view of its intended user base of beginning and casual programmers, attempts to specify what a conforming processor must do when confronted with non-standard circumstances. There are two ways in which this can happen: 1) a program submitted to the processor might not conform to the standard syntactic rules, or 2) the executing program might attempt some operation for which there is no reasonable semantic interpretation, e.g., division by zero, or assignment to a subscripted variable outside the bounds of the array. In the BASIC standard, the first case is called an error, and the second an exception, and in order to conform, a processor must take certain actions upon encountering either sort of anomaly.
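For illustration only, the two kinds of exception outcome can be sketched in Python (this snippet is not part of the test system; it assumes, as a reading of the standard's exception table, that division by zero has a specified recovery procedure while an out-of-range subscript is fatal, and the machine-infinity value shown is purely hypothetical):

```python
# Sketch of the two exception outcomes: recover-and-continue vs.
# terminate. The recovery rule and the machine-infinity value are
# illustrative assumptions, not quotations from the standard.

MACHINE_INFINITY = 9.999999e62  # hypothetical, implementation-defined


def divide(a, b):
    """Nonfatal exception: report it, apply a recovery procedure, continue."""
    if b == 0:
        print("exception: division by zero")  # report the exception
        # recovery procedure: supply a signed machine infinity
        return MACHINE_INFINITY if a >= 0 else -MACHINE_INFINITY
    return a / b


def subscript(array, i):
    """Fatal exception: report it, then terminate execution."""
    if not 0 <= i < len(array):
        print("exception: subscript out of range")  # report the exception
        raise SystemExit(1)  # terminate execution
    return array[i]


print(divide(1, 0))  # reports the exception, then continues
```

Either way, the processor must first report the condition; only the subsequent action (recover or terminate) depends on the kind of exception.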

Given a program with a syntactically non-standard construction, the processor must either reject the program with a message to the user noting the reason for rejection, or, if it accepts the program, it must be accompanied by documentation which describes the interpretation of the construction.

If a condition defined as an exception arises in the course of execution, the processor is obliged first to report the exception, and then to do one of two things, depending on the type of exception: either it must apply a so-called recovery procedure and continue execution, or it must terminate execution.

Note that it is the user, not the program, who must determine whether there has been an adequate error or exception report, or whether appropriate documentation exists. The pseudo-code in Figure 1 describes how conforming implementations must treat errors. It may be thought of as an algorithm which the user (not the programs) must execute in order to interpret correctly the effect of submitting a test program to an implementation.

The procedure for error handling in Figure 1 speaks of a processor accepting or rejecting a program. The glossary (sec. 19) of the standard defines accept as "to acknowledge as being valid". A processor, then, is said to reject a program if it in some way signifies to the user that an invalid construction (and not just an exception) has been found, whenever it encounters the presumably non-standard construction, or if the processor simply fails to execute the program at all. A processor implicitly accepts a program if the processor encounters all constructions within the program with no indication to the user that the program contains constructions ruled out by the standard or the implementation's documentation.

In like manner, we can construct pseudo-code operating instructions to the user, which describe how to determine whether an exception has been handled in conformance with the standard; this also is shown in Figure 1.

As a point of clarification, it should be understood that these categories of error and exception apply to all implementations, both compilers and interpreters, even though they are more easily understood in terms of a compiler, which first does all the syntax checking and then all the execution, than of an interpreter. There is no requirement, for instance, that error reports precede exception reports. It is the content, rather than the timing, of the message that the standard prescribes. Rejection messages should stress the fact of ill-formed source code. Exception reports should note the conditions, such as data values or flow of control, that are abnormal, without implying that the source code per se is invalid.


Error Handling

if program is standard
   if program accepted by processor
      if correct results and behavior
         processor PASSES
      else
         processor FAILS (incorrect interpretation)
      end if
   else
      processor FAILS (rejects standard program)
   end if
else (program non-standard)
   if program accepted by processor
      if non-standard feature correctly documented
         processor PASSES
      else
         processor FAILS (incorrect/missing documentation
                          for non-standard feature) *
      end if
   else (non-standard program rejected)
      if appropriate error message
         processor PASSES
      else
         processor FAILS (did not report reason for rejection)
      end if
   end if
end if

* note that all implementation-defined and non-standard features must be
  documented (see Appendix C in the ANSI standard)

Exception Handling

if processor reports exception
   if procedure is specified for exception
         and host system capable of procedure
      if processor follows specified procedure
         processor PASSES
      else
         processor FAILS (recovery procedure not followed)
      end if
   else (no procedure specified or unable to handle)
      if processor terminates program
         processor PASSES
      else
         processor FAILS (non-termination on fatal exception)
      end if
   end if
else
   processor FAILS (failed to report exception)
end if

Figure 1
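The two decision procedures of Figure 1 can be rendered directly as code. The following Python sketch (our transcription, not part of the test system) encodes each verdict as a string; the boolean arguments are the answers the user supplies after running a test:

```python
# Transcription of Figure 1's two user decision procedures.
# The argument names are our own; each boolean is a judgment the
# user makes after submitting a test program to the processor.

def error_verdict(program_standard, accepted, correct_results,
                  feature_documented, error_message):
    """Verdict for the error-handling procedure of Figure 1."""
    if program_standard:
        if accepted:
            return ("PASS" if correct_results
                    else "FAIL: incorrect interpretation")
        return "FAIL: rejects standard program"
    # program is non-standard
    if accepted:
        return ("PASS" if feature_documented
                else "FAIL: incorrect/missing documentation")
    # non-standard program rejected
    return ("PASS" if error_message
            else "FAIL: did not report reason for rejection")


def exception_verdict(reported, procedure_specified, host_capable,
                      procedure_followed, terminated):
    """Verdict for the exception-handling procedure of Figure 1."""
    if not reported:
        return "FAIL: failed to report exception"
    if procedure_specified and host_capable:
        return ("PASS" if procedure_followed
                else "FAIL: recovery procedure not followed")
    # no procedure specified, or host unable to handle it
    return ("PASS" if terminated
            else "FAIL: non-termination on fatal exception")


# Example: a standard program, accepted and interpreted correctly, passes.
print(error_verdict(True, True, True, False, False))  # PASS
```

Note that every path ends in exactly one PASS or FAIL verdict, mirroring the requirement that the user reach a definite judgment for each test program.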

4 STRUCTURE OF THE TEST SYSTEM


The design of the test programs is an attempt to harmonize several disparate goals: 1) exercise all the individual parts of the standard, 2) test combinations of features where it seems likely that the interaction of these features is vulnerable to incorrect implementation, 3) minimize the number of tests, 4) make the tests easy to use and their results easy to interpret, and 5) give the user helpful information about the implementation even, if possible, in the case of failure of a test. The rest of this section describes the strategy we ultimately adopted, and its relationship to conformance and to interpretation by the user of the programs.


4.1 Testing Features Before Using Them


Perhaps the most difficult problem of design is to find some organizing principle which suggests a natural sequence to the programs. In many ways, the most natural and simple approach is simply to test the language features in the order they appear in the standard itself. The major problem with this strategy is that the tests must then use untested features in order to exercise the features of immediate interest. This raises the possibility that the feature ostensibly being tested might wrongly pass the test because of a flaw in the implementation of the feature whose validity is implicitly being assumed. Furthermore, when a test does report a failure, it is not clear whether the true cause of the failure was the feature under test or one of the untested features being used.

These considerations seemed compelling enough that we decided to order the tests according to the principle of testing features before using them. This approach is not without its own problems, however. First and most importantly, it destroys any simple correspondence between the tests and sections of the standard. The testing of a given section may well be scattered throughout the entire test sequence, and it is not a trivial task to identify just those tests whose results pertain to the section of interest. To ameliorate this problem, we have been careful to note at the beginning of each test just which sections of the standard it applies to, and have compiled a cross-reference listing (see section 6.3), so that you may quickly find the tests relevant to a particular section. A second problem is that occasionally the programming of a test becomes artificially awkward because the language feature appropriate for a certain task hasn't been tested yet. While the programs generally abide by the test-before-use rule, there are some cases in which the price in programming efficiency and convenience is simply too high, and therefore a few of the programs do employ untested features. When this happens, however, the program always generates a message telling you which untested feature it is depending on. Furthermore, we were careful to use the untested feature in a simple way, unlikely to interact with the feature under test so as to mask errors in its own implementation.
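The test-before-use principle amounts to ordering the tests so that every feature appears only after the features it relies on, i.e., a topological ordering of a dependency graph. A minimal Python sketch follows; the dependency table is illustrative, not the actual one used by the test sequence:

```python
# Sketch of the "test features before using them" ordering principle.
# The feature names and dependencies below are illustrative examples,
# not the real dependency table of the test system.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# feature -> set of features it relies on (must be tested first)
deps = {
    "PRINT": set(),
    "LET": set(),
    "IF": {"LET"},
    "FOR": {"LET", "IF"},
    "arrays": {"LET", "FOR"},
}

# static_order() yields features so that every dependency precedes
# the feature that uses it.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Any such ordering satisfies the rule; within a "level" of mutually independent features, the test designer is free to choose the sequence on other grounds, such as grouping related tests.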

4.2 Hierarchical Organization Of The Tests

Within the constraints imposed by the test-before-use rule, we tried to group together functionally related tests. This grouping should also help you interpret the tests better, since you can usually concentrate on one part of the standard at a time, even if the parts themselves are not in order. Section 6.1 of this manual contains a summary of the hierarchical group structure. It relates a functional subject to a sequential range of tests and also to the corresponding sections of the standard. We strongly recommend that you read the relevant sections of the standard carefully before running the tests in a particular group. The documentation contained herein explains the rationale for the tests in each group, but it is not a substitute for a detailed understanding of the standard itself.

Many of the individual test programs are themselves further broken down into so-called sections. Thus the overall hierarchical subdivision scheme is given, from largest to smallest, by: system, groups, sub-groups, programs, sections. Program sections are further discussed below under 4.4.3, Documentation.


The test programs are oriented towards executing in an interactive environment, but generally can be run in batch mode as well. Some of the programs do require input, however, and these present more of a problem, since the input needed often depends on the immediately preceding output of the program. See the sample output in Volume 2 for help in setting up data files if you plan to run all the programs non-interactively. The programs which use the INPUT statement are 73, 81, 84, 107-113, and 203.
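On systems where the processor can read redirected standard input, a non-interactive run of an INPUT-using program might look like the sketch below. The file names and the responses are hypothetical, and `cat` merely stands in for your BASIC processor so the example is self-contained; substitute your actual invocation, e.g. `basic P107.BAS < p107.inp > p107.out`:

```shell
# Prepare the responses the program would otherwise read interactively
# via INPUT, one per line, in the order the program requests them.
printf '3\n1.5\nDONE\n' > p107.inp

# Stand-in for:  basic P107.BAS < p107.inp > p107.out
# ("cat" is used here only so this sketch runs anywhere.)
cat p107.inp > p107.out

# Inspect the captured output afterwards.
cat p107.out
```

Because the required responses depend on each program's prompts, the sample output in Volume 2 is the authoritative guide for building each data file.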


We have tried to keep the storage required for execution within reasonable bounds. Array sizes are as small as possible, consistent with adequate testing. No program exceeds 300 lines in length. The programs print many informative messages which may be changed without affecting the outcome of the tests. If your implementation cannot handle a program because of its size, you should set up a temporary copy of the program with the informative messages cut down to a minimum and use that version. Be careful not to omit printing which is a substantive part of the test itself.
