
Even though our procedure makes no pretense to anything but empirical fitting, this set of data provides us with an opportunity to examine the agreement between a physical theory (the Lorentz-Lorenz relation) and a set of experimental data.

The first three θ values of the SVD of table 6 are:

θ1 = 1.92330513;  θ2 = 0.00009801;  θ3 = 0.00002689

Note the very large drop from θ1 to θ2, indicating that one multiplicative term in the SVD should represent the data quite well. More exactly, we find:

[one-term SVD fit of table 6 not reproduced]

Thus, one single multiplicative term reproduces the data of table 6 to about 3 units in the 5th place. It is easily verified that addition of a second multiplicative term fails to significantly improve this fit. The precision of a measurement of n in this study is no better than 1 to 3 units in the fifth place [18]. Applying the law of propagation of errors, it is easily seen that the same statement holds for the quantity (n² − 1)/(n² + 2). We now have the model:

(n² − 1)/(n² + 2) = u v     (8)

where u is a function of pressure only, and v, a function of wavelength only. Thus, eq (8) is equivalent to eq (7), as required by the Lorentz-Lorenz theory.
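The adequacy of a single multiplicative term is easy to check numerically. The following sketch, in Python with NumPy, uses a synthetic stand-in for table 6 (the names u_true, v_true, and table6 are illustrative, not the measured values) and verifies that a rank-one reconstruction reproduces such a table to a few units in the fifth place.

import numpy as np

# Illustrative stand-in for table 6: (n^2 - 1)/(n^2 + 2) laid out with
# rows = pressure levels and columns = wavelengths, built here as an exact
# one-term product plus noise of a few units in the 5th decimal place.
rng = np.random.default_rng(0)
u_true = np.linspace(0.28, 0.31, 7)          # hypothetical pressure effect
v_true = np.linspace(0.98, 1.02, 5)          # hypothetical wavelength effect
table6 = np.outer(u_true, v_true) + rng.normal(0.0, 3e-5, (7, 5))

# Singular value decomposition of the two-way table.
U, s, Vt = np.linalg.svd(table6, full_matrices=False)
print("singular values:", s)                 # s[0] should dwarf s[1], s[2], ...

# Rank-one (one multiplicative term) reconstruction and its residuals.
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])
print("largest residual:", np.abs(table6 - rank1).max())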

The fit of u as a function of pressure, and of v as a function of wavelength, can be accomplished by the four-parameter curve. Table 7 lists the parameters of the two curves as well as the fitted values, using eq (8). A comparison of tables 6 and 7 confirms the satisfactory quality of the fit.

[Table 7 not reproduced]

By combining the Singular Value Decomposition technique with the curve fitting procedures developed in Part I, it is possible to obtain excellent empirical fits for many sets of data in which the dependent (response) variable is displayed as a two-way table and the rows and columns represent levels of the two independent (regressor) variables, respectively.

The procedure consists in performing an SVD on the matrix of values of the response variable and then fitting the vectors of parameters, which are functions of the rows or of the columns, but not of both, to the corresponding regressor variables.
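A minimal sketch of this two-stage procedure in Python (NumPy and SciPy) follows. The three-parameter form one_dim_model is only an illustrative stand-in for the curve-fitting methods of Part I, and the helper name fit_two_way is hypothetical, not part of the original procedure.

import numpy as np
from scipy.optimize import curve_fit

def one_dim_model(x, a, b, c):
    # Simple three-parameter stand-in for the empirical curves of Part I.
    return a + b * np.power(x, c)

def fit_two_way(table, row_levels, col_levels, n_terms=1):
    """SVD of a complete two-way table, then smooth fits of the row and
    column singular vectors against their own regressor variables."""
    U, s, Vt = np.linalg.svd(np.asarray(table, dtype=float), full_matrices=False)
    fits = []
    for k in range(n_terms):
        pu, _ = curve_fit(one_dim_model, row_levels, U[:, k],
                          p0=[0.0, 1.0, 1.0], maxfev=10000)
        pv, _ = curve_fit(one_dim_model, col_levels, Vt[k, :],
                          p0=[0.0, 1.0, 1.0], maxfev=10000)
        fits.append((s[k], pu, pv))

    def predict(r, c):
        # Sum of the fitted multiplicative terms, evaluated at arbitrary levels r, c.
        return sum(theta * one_dim_model(r, *pu) * one_dim_model(c, *pv)
                   for theta, pu, pv in fits)
    return predict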

6. References

[1] Harter, H. L., Tables of Range and Studentized Range, Annals of Mathematical Statistics 31, No. 4, 1122-1147 (1960).

[2] Rao, C. R., Linear Statistical Inference and Its Applications, (John Wiley & Sons, N.Y., 1973).

[3] Hotelling, H., Analysis of a Complex of Statistical Variables into Principal Components, J. of Educational Psychology 24, 417-441, 498-520 (1933).

[4] Church, A., Analysis of Data when the Response is a Curve, Technometrics 8, 229-246 (1966).

[5] Gollob, H. F., A Statistical Model which combines Features of Factor Analytic and Analysis of Variance Techniques, Psychometrika 33, 73-116 (1968).

[6] Jolicoeur, Pierre and J. E. Mosimann, Size and Shape Variation in the Painted Turtle. A Principal Component Analysis, Growth 24, 339-354 (1960).

[7] Mandel, John, The Partitioning of Interaction in Analysis of Variance, J. of Research of the National Bureau of Standards-B. Mathematical Sciences 73B, 309-328 (1969).

[8] Mandel, John, Distribution of Eigenvalues of Covariance Matrices of Residuals in Analysis of Variance, J. of Research of the National Bureau of Standards-B. Mathematical Sciences 73B, 149-154 (1970).

[9] Mandel, John, A New Analysis of Variance Model for Non-Additive Data, Technometrics 13, 1-18 (1971).

[10] Mandel, John, Principal Components, Analysis of Variance and Data Structure, Statistica Neerlandica 26, 119-129 (1972).

[11] Simonds, J. L., Application of Characteristic Vector Analysis to Photographic and Optical Response Data, J. Opt. Soc. Am. 53, 968-974 (1963).

[12] Snee, R. D., On the Analysis of Response Curve Data, Technometrics 14, 47-62 (1972).

[13] Wernimont, Grant, Evaluating Laboratory Performance of Spectrophotometers, Anal. Chem. 39, 554-562 (1967).

[14] Garbow, B. J., et al., Matrix Eigensystem Routines-EISPACK Guide Extension, Lecture Notes in Computer Science, Vol. 51 (Springer-Verlag, New York/Heidelberg/Berlin, 1977).

[15] Smith, B. T., et al., Matrix Eigensystem Routines-EISPACK Guide, Second Edition, Lecture Notes in Computer Science, Vol. 6 (Springer-Verlag, New York/Heidelberg/Berlin, 1976).

[16] Sparks, D. N. and Todd, A. D., Algorithm AS 60: Latent Roots and Vectors of a Symmetric Matrix, Applied Statistics 22, 260-265 (1973); Corrigendum: Applied Statistics 23, 101-102 (1974).

[17] Wilkinson, J. H. and Reinsch, C., editors, Handbook for Automatic Computation, Volume II, Linear Algebra (Springer-Verlag, New York/Heidelberg/Berlin, 1971).

[18] Waxler, R. M., Weir, C. E., and Schamp, H. W., Jr., Effect of Pressure and Temperature Upon the Optical Dispersion of Benzene, Carbon Tetrachloride and Water, J. of Research of the National Bureau of Standards-A. Physics and Chemistry 68A, No. 5, 489-498 (1964).

Part III: Fitting Functions of Three or More Arguments

1. Introduction

The first two papers in this series (Parts I and II) dealt with ordinary curve and surface fitting, i.e., with the fitting of functions of one or two arguments. In the latter case, it was assumed that the data were in the form of a two-way table with no cells missing. Similarly, we will assume in this paper that each value of the function to be fitted is associated with a combination of the levels of three or more arguments, all combinations being present, and each one being associated with a single value of the function. In other words, we assume a "complete factorial" with no replications per cell. Of course, if one or more cells contain more than a single observation, one can substitute the average for these replicates. For purposes of empirical fitting, this should be quite acceptable, provided the precision of the single observations is satisfactory.

We present the method in terms of a single example, a function of three arguments. Generalization to functions of more than three arguments should be self-evident. However, the method may become cumbersome, and is not recommended as a first choice in these cases.

2. Illustration: Fitting the F table

Table 1 is a portion of the table of critical values of the F distribution for the levels of significance P of 25, 10, 5, and 1 percent, and for degrees of freedom, both in the numerator and in the denominator, of 4, 6, 60, 120, and ∞. The table, taken from ref. [1], has 100 "observations," but covers an infinite range of both sets of degrees of freedom, v1 and v2. We fully intend the empirical fit to be acceptable over this doubly-infinite range, and for all values of P between 1 and 25 percent.

[Table 1: critical values of the F distribution, not reproduced]
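A table of this kind can be regenerated directly from a modern statistical library. The sketch below, assuming SciPy's F distribution, builds the 5 × 5 × 4 array of upper critical values, with infinite degrees of freedom approximated by a large finite number.

import numpy as np
from scipy.stats import f

P_levels = [0.25, 0.10, 0.05, 0.01]          # significance levels of table 1
df_levels = [4, 6, 60, 120, 1e9]             # 1e9 approximates infinite degrees of freedom

# critical[i, j, k]: upper critical value of F for numerator df v1 = df_levels[i],
# denominator df v2 = df_levels[j], and tail probability P = P_levels[k].
critical = np.array([[[f.isf(p, v1, v2) for p in P_levels]
                      for v2 in df_levels]
                     for v1 in df_levels])
print(critical.shape)                        # (5, 5, 4): one value per factorial cell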

4. Details of the fitting process

1. First step: SVD of 25 × 4 table.

The 25 rows are the combinations of the five levels of v1 and the five levels of v2; the four columns represent the four levels of the factor P (see table 1).

[θ values of this SVD not reproduced]

Thus, the fit will be good to approximately 3 units in the third place, provided that all the eigenvectors are fitted to an equivalent degree of approximation.
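Continuing the earlier sketch, the first step amounts to unfolding the 5 × 5 × 4 array into a 25 × 4 matrix and decomposing it; whether the critical values are decomposed directly, as below, or after a preliminary transformation is immaterial to the mechanics being illustrated.

# Unfold the 5 x 5 x 4 array "critical" from the previous sketch into the
# 25 x 4 matrix of the first step: each row is one (v1, v2) combination,
# each column one level of P.
y = critical.reshape(25, 4)

U, s, Vt = np.linalg.svd(y, full_matrices=False)
print("theta values:", s)            # a sharp drop shows how many terms are needed

n_terms = 3                          # keep as many terms as the precision requires
theta = s[:n_terms]
u_vectors = U[:, :n_terms]           # columns: u vectors, functions of (v1, v2)
v_vectors = Vt[:n_terms, :]          # rows: v vectors, functions of P only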

2. Second step: Fitting the v vectors

All v vectors are functions of a single variable, P, as shown in table 2. They are readily fitted by the methods of Part I, with the results shown in table 3.

[Tables 2 and 3 not reproduced]
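Continuing the sketch, each v vector is a set of four values indexed by P and is fitted as a one-argument curve; the form used below is the same illustrative stand-in introduced earlier, not the actual curve of Part I.

import numpy as np
from scipy.optimize import curve_fit

# Fit each v vector (one value per level of P) as a smooth function of P,
# reusing one_dim_model, P_levels, n_terms, and v_vectors from the sketches above.
P_arr = np.array(P_levels)
v_fits = []
for k in range(n_terms):
    params, _ = curve_fit(one_dim_model, P_arr, v_vectors[k],
                          p0=[0.0, 1.0, 1.0], maxfev=10000)
    v_fits.append(params)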

3. Third Step: SVD of the u vectors

Each u vector is a function of v1 and v2, as shown by tables 4, 5, and 6.

To avoid confusion, we will denote the eigenvectors resulting from the SVD of each u vector by the symbols A and B for u1, C and D for u2, and E and G for u3. We find that, to obtain sufficient precision, the SVD for u1 requires three terms, while for u2 and u3 two terms suffice; thus:²

u1 = τ1 A1 B1 + τ2 A2 B2 + τ3 A3 B3     (6)

u2 = τ1 C1 D1 + τ2 C2 D2     (7)

u3 = τ1 E1 G1 + τ2 E2 G2     (8)

[Tables 4, 5, and 6 not reproduced]
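Continuing the sketch, the third step reshapes each u vector into a 5 × 5 table over (v1, v2) and decomposes it in its turn, with tau holding the singular values of this inner decomposition:

# Inner SVD of each u vector from the earlier sketch, keeping three terms
# for u1 and two terms each for u2 and u3.
inner = []
for k, terms in enumerate([3, 2, 2]):
    u_table = u_vectors[:, k].reshape(5, 5)   # rows: levels of v1, columns: levels of v2
    A, tau, Bt = np.linalg.svd(u_table, full_matrices=False)
    inner.append((tau[:terms], A[:, :terms], Bt[:terms, :]))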

4. Fourth Step: Fitting the vectors A, B, C, D, E, and G

The vectors A, C, and E are functions of v1 only, while B, D, and G are functions of v2 only, as shown in table 7. Again, we use the methods of Part I to fit these vectors to their corresponding arguments, with the results shown in table 8.

5. Fifth step: Fit of F as a function of P, v1, and v2.

By substituting for u1, u2, and u3 in eq (1) their expressions as given by eqs (6), (7), and (8), one readily obtains an expression for y as a function of quantities that are either constants (the θ and the τ) or functions of a single argument (P, v1, and v2). Since the latter have already been fitted in terms of their respective arguments, the problem is solved, except for the routine multiplications and additions involved in eqs (1), (6), (7), and (8). A program can readily be written to obtain the value of y, that is, of F, for any v1, v2, and P, using eqs (1), (6), (7), (8) and the MFP or QFP fits shown in tables 3 and 8.
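Such a program reduces to straightforward bookkeeping. The sketch below assumes the one-argument fits are available as callables (the argument names fit_v, fit_left, and fit_right are hypothetical) and evaluates the nested sums of products of eqs (1) and (6) through (8).

def predict_F(P, v1, v2, theta, fit_v, inner_tau, fit_left, fit_right):
    """Evaluate the empirical fit of F at arbitrary P, v1, v2.
    theta            : singular values of the outer 25 x 4 decomposition (eq (1))
    fit_v[k]         : fitted v_k as a function of P
    inner_tau[k][j]  : j-th singular value of the inner SVD of u_k
    fit_left[k][j]   : j-th left vector of u_k, fitted as a function of v1
    fit_right[k][j]  : j-th right vector of u_k, fitted as a function of v2
    """
    total = 0.0
    for k in range(len(theta)):
        # eqs (6)-(8): rebuild u_k(v1, v2) from its own multiplicative terms.
        u_k = sum(inner_tau[k][j] * fit_left[k][j](v1) * fit_right[k][j](v2)
                  for j in range(len(inner_tau[k])))
        # eq (1): outer expansion over the P direction.
        total += theta[k] * u_k * fit_v[k](P)
    return total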

² The square roots of the eigenvalues are represented by the letter τ, to avoid confusion with the θ of eq (1).
