5.12 Implementation-Supplied Functions

All conforming implementations must make available to the programmer the set of functions defined in section 8 of the ANSI standard. The purpose of this group is to assure that these functions have actually been implemented and also to measure, at least roughly, the quality of implementation.

5.12.1 Precise Functions: ABS, INT, SGN

These three functions are distinguished among the eleven supplied functions in that any reasonable implementation should return a precise value for them. Therefore they can be tested in a more stringent manner than the other eight, which are inherently approximate (i.e., a discrete machine cannot possibly supply an exact answer for most arguments). The structure of the tests is simple: the function under test is invoked with a variety of argument values and the returned value is compared to the correct result. If all results are equal, the test passes; otherwise it fails. The values are displayed for your inspection and the tests are self-checking. The test for the INT function has a second section which does an informative test on the values returned for large arguments requiring more than six digits of accuracy.

5.12.2 Approximated Functions: SQR, ATN, COS, EXP, LOG, SIN, TAN

These functions do not typically return rational values for rational arguments and thus may only be approximated by digital computers. Furthermore, the standard explicitly disavows any criterion of accuracy, making it difficult to say when an implementation has definitely failed a test. Because of these constraints, the non-exception tests in this group are informative only. We can, however, quite easily apply the ideas developed earlier in section 5.6.4. As explained there, we can devise an accuracy criterion for the implementation of a function, based on a hypothetical six-decimal-digit machine. If a function returns a value less accurate even than that of which this worst-case machine is capable, the informative test fails.
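The six-digit worst-case criterion can be made concrete with a short sketch. The Python below is a modern illustration only, not part of the test programs; the names `six_digit_round`, `allowed_error`, and `check` are our own. It computes the largest error a machine carrying six significant decimal digits could commit on the true result, and an error measure normalized so that 1.0 corresponds to that worst case:

```python
import math

def six_digit_round(x):
    """Round x to six significant decimal digits, mimicking the
    hypothetical worst-case six-digit machine."""
    if x == 0.0:
        return 0.0
    return float(f"{x:.5e}")  # six significant digits

def allowed_error(true_value):
    """Worst-case error of a six-digit machine: half a unit in the
    sixth significant digit of the true result."""
    if true_value == 0.0:
        return 5e-7
    exponent = math.floor(math.log10(abs(true_value)))
    return 0.5 * 10.0 ** (exponent - 5)

def check(fname, computed, true_value):
    """Return (passes, error_measure) for one function evaluation.
    An error measure of 1.0 means exactly as bad as the worst-case
    six-digit machine; smaller is better."""
    tol = allowed_error(true_value)
    err = abs(computed - true_value)
    return err <= tol, err / tol
```

Under this scheme an implementation carrying d significant digits should show measures near 10^(6-d), which is consistent with the rough guide given for the error measures.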
To repeat the earlier guidance for the numeric operations: this approach imposes only a very minimal requirement. You may well want to set a stricter standard for the implementation under test. For this reason, the programs in this group also compute and report an error measure, which gives an estimate of the degree of accuracy achieved, again relative to a six-digit machine. The error measure thus goes beyond a simple pass/fail report and quantifies how well or poorly the function value was computed. Of course, the error measure itself is subject to inaccuracy in its own internal computation, and no one measurement should be taken as precisely correct. Nonetheless, when the error measures of all the cases are considered in the aggregate, they should give a good overall picture of the quality of function evaluation. Since it is based on the same allowed interval for values as the pass/fail criterion, the error measure likewise gauges the quality of function evaluation independent of the function and argument under test. It does depend on the internal accuracy with which the implementation can represent numeric quantities: the greater the accuracy, the smaller the error measure should become. As a rough guide, the error measures should all be < 10^(6-d), where d is the number of significant decimal digits supported by the implementation (this is determined in the standard tests for numeric operations, group 6.1). For instance, an eight-decimal-digit processor should have all error measures < .01.

Another point to be stressed: even though the results of these tests are informative, the tests themselves are syntactically standard, and thus must be accepted and processed by the implementation. If, for instance, the processor does not recognize the ATN function and rejects the program, it definitely fails to conform to the standard. This is in contrast to the case of a processor which accepts the program, but returns somewhat inaccurate values.
The latter processor is arguably standard-conforming, even if of low quality.

This group also contains exception tests for those conditions so specified in the ANSI standard. Most of these can be understood in light of the general guidance given for exceptions. The program for overflow of the TAN function deserves some comment. Since it is questionable whether overflow can be forced simply by encoding pi/2 as a numeric constant for the source code argument, the program attempts to generate the exception by a convergence algorithm. It may be, however, that no argument exists which will cause overflow, so you must verify merely that if overflow occurs, then it is reported as an exception. For instance, if several of the function calls return machine infinity, it is clear that overflow has occurred, and if there were no exception report in such a case, the test fails. Also, as a measure of quality, the returned values with a given sign should increase in magnitude until overflow occurs, i.e., all the positive values should form an ascending sequence, and the negative values a descending sequence.

5.12.3 Pseudorandom Numbers: RND and RANDOMIZE

Unlike the other functions, there is no single correct value to be returned by any individual reference to RND; only the properties of an aggregation of returned values are specified. The standard says that these values are "uniformly distributed in the range 0 <= RND < 1". Also, section 17 specifies that in the absence of the RANDOMIZE statement, RND will generate the same pseudorandom sequence for each execution of a program; conversely, each execution of RANDOMIZE "generates a new unpredictable starting point for the sequence produced by RND". The RND tests follow closely the strategy put forth in chapter 3.3.1 of Knuth's The Art of Computer Programming [4], which explains fully the rationale for the programs in this group. The first two programs test that either the same sequence or a novel sequence appears, as appropriate, depending on whether RANDOMIZE has been executed.
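The properties just described can be sketched briefly in a modern language. The Python below is our own illustration, not one of the test programs: `rnd_sequence` and its fixed default seed are stand-ins for RND without RANDOMIZE, and the chi-square statistic models the kind of uniformity check Knuth describes:

```python
import random

def rnd_sequence(n, randomize_seed=None):
    """Stand-in for BASIC's RND: the fixed default seed models the
    absence of RANDOMIZE; a caller-supplied seed models RANDOMIZE.
    (The seed value 1 is an arbitrary assumption.)"""
    rng = random.Random(1 if randomize_seed is None else randomize_seed)
    return [rng.random() for _ in range(n)]

def chi_square(values, bins=10):
    """Chi-square statistic of the values against a uniform
    distribution on [0, 1), using equal-width bins."""
    counts = [0] * bins
    for v in values:
        assert 0.0 <= v < 1.0          # every value must be in range
        counts[int(v * bins)] += 1
    expected = len(values) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

# Without RANDOMIZE, every execution yields the identical sequence.
assert rnd_sequence(100) == rnd_sequence(100)
```

For 1000 values and ten bins the statistic has nine degrees of freedom, so values far above roughly 21.7 (the 99th percentile) would cast doubt on uniformity; being probabilistic, such a check must be expected to fail occasionally even for a good generator.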
Note that you must execute both of these programs three times apiece, since the RND sequence is initialized by the implementation only when execution begins. The next three programs all test properties of the sequence which follow directly from the specification that it is uniformly distributed in the range 0 <= RND < 1. If the results make it quite improbable that the distribution is uniform, or if any value returned is outside the legal range, then the test fails. Of course, any implementation could pass simply by adjusting the RND algorithm or starting point until a passing sequence is generated. In order to measure the quality of implementation, you can run the programs with a RANDOMIZE statement at the beginning and then observe how often the test passes or fails. Note that, if you use RANDOMIZE, these programs should fail a certain proportion of the time, since they are probabilistic tests.

There are several desirable properties of a sequence of pseudorandom numbers which are not strictly implied by uniform distribution. If, for instance, the numbers in the sequence alternated between being < .5 and > .5, they might still be uniform, but would be non-random in an important way. These tests attempt to measure how well the implementation has approached the ideal of a perfectly random sequence by looking for patterns indicative of nonrandomness in the sequence actually produced. Like the tests for standard capabilities, these programs are probabilistic, and any one of them may fail without necessarily implying that the RND sequence is not random. If a high-quality RND function is important for your purposes, we suggest you run each of these programs several times with the RANDOMIZE statement. If a given test seems to fail far more often than likely, it may well indicate a weakness in the RND algorithm.

The tests in this group all use an argument list which is incorrect in some way, either for the particular function, or because of the general rules of syntax.
As always, if the processor does accept any of them, the documentation must be consistent with the actual results. Note that the ANSI standard contains a misprint, indicating that the TAN function takes no arguments. The tests are written to treat TAN as a function of a single argument.

5.13 User-Defined Functions

The standard provides a facility so that programmers can define functions of a single variable in the form of a numeric expression. This group of tests exercises both the invoking mechanism (function references) and the defining mechanism (DEF statement). These programs test a variety of properties guaranteed by the standard: the DEF statement must allow any numeric expression as the function definition; the parameter, if any, must not be confused with a global variable of the same name; global variables, other than one with the same name as the parameter, are available to the function definition; a DEF statement in the path of execution has no effect; invocation of a function as such never changes the value of any variable; and the set of valid names for user-defined functions is "FN" followed by any alphabetic character. The tests are self-checking. As with the numeric operations, a very loose criterion of accuracy is used to check the implementation. Its purpose is not to check accuracy as such, but only to assure that the semantic behavior accords with the standard.

Many of these tests are similar to the error tests for implementation-supplied functions, in that they try out various malformed argument lists. There are also some tests involving the DEF statement, in particular for the requirements that a program contain exactly one DEF statement for each user function referred to in the program and that the definition precede any references.

5.14 Numeric Expressions

Numeric expressions have a somewhat special place in the Minimal BASIC standard. They are the most complex entity, syntactically, for two reasons. First, the expression itself may be built up in a variety of ways.
Numeric constants, variables, and function references are combined using any of five operations. The function references themselves may be to user-defined expressions. And of course expressions can be nested, either implicitly, or explicitly with parentheses. Second, not only do the expressions have a complex internal syntax, but also they may appear in a number of quite different contexts. Not just the LET statement, but also the IF, PRINT, ON...GOTO, and FOR statements can contain expressions. Also they may be used as array subscripts or as arguments in a function reference. Note that when they are used in the ON...GOTO, as subscripts, or as arguments to TAB, expressions must be rounded to the nearest integer.

The overall strategy of the test system is first to assure that the elements of numeric expressions are handled correctly, then to try out increasingly complex expressions in the comparatively simple context of the LET statement, and finally to verify that these complex expressions work properly in the other contexts mentioned. Preceding groups have already accomplished the first task of checking out individual expression elements, such as constants, variables (both simple and array), and function references. This group completes the latter two steps.

5.14.1 Standard Capabilities in the Context of the LET Statement

This test tries out various lengthy expressions, using the full generality allowed by the standard, and assigns the resulting value to a variable. As usual, if this value is even approximately correct, the test passes, since we are interested in semantics rather than accuracy. The program displays the correct value and the actual computed value. This test also verifies that subscript expressions evaluate to the nearest integer.

5.14.2 Expressions in Other Contexts: PRINT, IF, ON-GOTO, FOR

Please note that the PRINT test, like other PRINT tests, is inherently incapable of checking itself, and therefore you must inspect and interpret the results.
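The rounding rule for expressions used as ON...GOTO selectors, subscripts, and TAB arguments can be illustrated with a short sketch. The Python below is our own illustration; the name `basic_round` is hypothetical, and the round-half-up tie rule (INT(v + .5)) is an assumption about the standard's intent:

```python
import math

def basic_round(v):
    """Nearest-integer rounding as applied to subscripts, ON...GOTO
    selectors, and TAB arguments; ties round toward +infinity, i.e.
    INT(v + .5) -- the tie-breaking direction is our assumption."""
    return math.floor(v + 0.5)

# A subscript expression A(1.7) selects element 2; ON 2.4 GOTO ...
# takes the second branch; TAB(4.6) moves to column 5.
assert basic_round(1.7) == 2
assert basic_round(2.4) == 2
assert basic_round(4.6) == 5
```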
The PRINT program first tests the use of expressions as print-items. Check that the actual and correct values are reasonably close. The second section of the program tests that the TAB call is handled correctly. Simply verify that the characters appear in the appropriate columns.
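The column behavior you are asked to verify can be modeled in a few lines. The Python below is our own sketch, not one of the test programs; the name `render_tab` and the new-line behavior when the current column is already past the TAB target reflect our reading of the standard:

```python
def render_tab(items):
    """Tiny model of a PRINT statement containing TAB calls: plain
    strings are emitted as-is; ("TAB", n) advances to (1-based)
    column n, starting a new line if that column is already passed."""
    lines, line = [], ""
    for item in items:
        if isinstance(item, tuple) and item[0] == "TAB":
            col = item[1]
            if len(line) + 1 > col:    # already past the target column
                lines.append(line)
                line = ""
            line += " " * (col - 1 - len(line))
        else:
            line += str(item)
    lines.append(line)
    return lines

# "X" printed after TAB(5) should appear in column 5.
assert render_tab([("TAB", 5), "X"]) == ["    X"]
```

Comparing the implementation's actual output against such a model is one way to decide whether the characters landed in the appropriate columns.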