
GENERAL METHODS OF PHYSICAL INVESTIGATION.

THE object of all Physical Investigation is to determine the effects of certain natural forces, such as gravity, cohesion, heat, light and electricity. For this purpose we subject various bodies to the action of these forces, and note under what circumstances the desired effect is produced; this is called an experiment. Investigations may be of several kinds. First, we may simply wish to know whether a certain effect can be produced, and if so, what are the necessary conditions. To take a familiar example, we find that water when heated boils, and that this result is attained whether the heat is caused by burning coal, wood or gas, or by concentrating the sun's rays; also whether the water is contained in a vessel of metal or glass, and finally that the same effect may be produced with almost all other liquids. Such work is called Qualitative, since no measurements are needed, but only to determine the quality or kind of conditions necessary for its fulfilment. Secondly, we may wish to know the magnitude of the force required, or the temperature necessary to produce ebullition. This we should find to be about 100° C. or 212° F., but varying slightly with the nature of the vessel and the pressure of the air. Thirdly, we often find two quantities so related that any change in one produces a corresponding change in the other, and we may wish to find the law by which we can compute the second, having given any value of the first. Thus by changing the pressure to which the water is subjected, we may alter the temperature of boiling, and to determine the law by which these two quantities are connected, hundreds of experiments have been made by physicists in all parts of the world. The last two classes of experiments are called Quantitative, since accurate measurements must be made of the quantity or magnitude of the forces involved. Most of the following experiments are of this nature, since they require more skill in their performance, and we can test with more certainty how accurately they have been done. Having obtained a number of measurements, we next proceed to discuss them by the aid of the mathematical principles described below, and finally to draw our conclusions from them. It is by this method that the whole science of Physics has been built up step by step.

Errors. In comparing a number of measurements of the same quantity, we always find that they differ slightly from one another, however carefully they may be made, owing to the imperfection of all human instruments, and of our own senses. These deviations or errors must not be confounded with mistakes, or observations where a number is recorded incorrectly, or the experiment improperly performed; such results must be entirely rejected, and not taken into consideration in drawing our conclusions.

If we knew the true value, and subtracted it from each of our measurements, the differences would be the errors, and these may be divided into two kinds. We have first, constant errors, such as a wrong length of our scale, incorrect rate of our clock, or natural tendency of the observer to always estimate certain quantities too great, and others too small. When we change our variables these errors often alter also, but generally according to some definite law. When they alternately increase and diminish the result at regular intervals they are called periodic errors. If we know their magnitude they do no harm, since we can allow for them, and thus obtain a value as accurate as if they did not exist. The second class of errors are those which are due to looseness of the joints of our instruments, impossibility of reading very small distances by the eye, &c., which sometimes render the result too large, sometimes too small. They are called accidental errors, and are unavoidable; they must be carefully distinguished from the mistakes referred to above.

Analytical and Graphical Methods. There are two ways of discussing the results of our experiments mathematically. By the first, or Analytical Method, we represent each quantity by a letter, and then by means of algebraic methods and the calculus draw our conclusions. By the Graphical Method quantities are represented by lines or distances, and are then treated geometrically.

The former method is the more accurate, and would generally be the better, were it not for the accidental errors, and were all physical laws represented by simple equations. The Graphical Method has, however, the advantage of quickness, and of enabling us to see at a glance the accuracy of our results.

ANALYTICAL METHOD.


Mean. Suppose we have a number of observations, A1, A2, A3, A4, &c., differing from one another only by the accidental errors, and we wish to find what value A is most likely to be correct. If A was the true value, A1 − A, A2 − A, &c., would be the errors of each observation, and it is proved by the Theory of Probabilities that the most probable value of A is that which makes the sum of the squares of the errors a minimum. Also that this property is possessed by the arithmetical mean. Hence, when we have n such observations, we take A = (A1 + A2 + A3 + &c.) ÷ n, or divide their sum by n. Thus the mean of 32, 33, 31, 30, 34, is 160 ÷ 5 = 32. It is often more convenient to subtract some even number from all the observations, and add it to the mean of the remainder; thus, to find the mean of 1582, 1581, 1583, 1581, 1582, subtract 1580 from each, and we have the remainders 2, 1, 3, 1, 2. Their mean is 9 ÷ 5 = 1.8, which added to 1580 gives 1581.8. Where many numbers are to be added, Webb's Adder may be used with advantage.
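The rule just given may be verified by a short computation in a modern programming language; the following Python sketch (added here merely as an illustration, with names of our own choosing) reproduces the two examples above.

    # Arithmetical mean, with the shortcut of first subtracting a convenient number.
    def mean(observations):
        # Divide the sum of the observations by their number.
        return sum(observations) / len(observations)

    print(mean([32, 33, 31, 30, 34]))                 # 160 / 5 = 32.0

    readings = [1582, 1581, 1583, 1581, 1582]
    # Subtract 1580, average the remainders 2, 1, 3, 1, 2, then add 1580 back.
    print(1580 + mean([r - 1580 for r in readings]))  # 1581.8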

Probable Error. Having by the method just given found the most probable value of A, we next wish to know how much reliance we may place on it. If it is just an even chance that the true value is greater or less than A by E, then E is called its probable error. To find this quantity, subtract the mean from each of the observed values, and place A1 − A = e1, A2 − A = e2, &c. Now the theory of probabilities shows that E = .67 √(e1² + e2² + &c.) ÷ n, from which we can compute E in any special case. As an example, suppose we have measured the height of the barometer twenty-five times, and find the mean 29.526 with a probable error of .001 inches. Then it is an even chance that the true reading is more than 29.525, and less than 29.527. Now let us suppose that some other day we make a single reading, and wish to know its probable error. The theory of probabilities shows that the accuracy is proportional to the square root of the number of observations, or that the mean of four is only twice as accurate as a single reading, the mean of a hundred, ten times as accurate as one. Hence in our example we have 1 : √25 = .001 : .005, the probable error of a single reading. Substituting in the formula, we have the probable error of a single reading, E′ = E × √n = .67 √(e1² + e2² + &c.) ÷ √n. It is generally best to compute E′ as well as E, and thus learn how much dependence can be placed on a single reading of our instrument.
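The two formulas may likewise be put into a short Python sketch (again only an illustration; the names probable_errors, E and E_single are our own):

    from math import sqrt

    def probable_errors(observations):
        # Returns (E, E'): the probable error of the mean and of a single reading,
        # using E = .67 * sqrt(e1^2 + e2^2 + ...) / n and E' = E * sqrt(n).
        n = len(observations)
        mean = sum(observations) / n
        sum_sq = sum((a - mean) ** 2 for a in observations)  # e1^2 + e2^2 + ...
        E = 0.67 * sqrt(sum_sq) / n
        E_single = E * sqrt(n)
        return E, E_single

With twenty-five observations E′ = 5E, which agrees with the .001 and .005 of the barometer example.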

Weights. We have assumed in the above paragraph that all our observations are subject to the same errors, and hence are equally reliable. Frequently various methods are used to obtain the same result, and some being more accurate than others are said to have greater weight. Again, if one was obtained as the mean of two, and the second of three similar observations, their weights would be proportional to these numbers, and the simplest way to allow for the weights of observations is to assume that each is duplicated a number of times proportional to its weight. From this statement it evidently follows that instead of the mean of a series of measurements, we should multiply each by its weight, and divide by the sum of the weights. Calling A1, A2, &c., the measurements, and w1, w2, &c., their weights, the best value to use will be A = (A1w1 + A2w2 + &c.) ÷ (w1 + w2 + &c.). We may always compute the weight of a series of n observations, if we know the errors e1, e2, &c., using the formula w = n ÷ 2(e1² + e2² + e3² + &c.). Substituting this value in the equation for probable error, we deduce E = .477 ÷ √(nw) if all the observations have the same weight, or E = .477 ÷ √(w1 + w2 + &c.), if their weights are w1, w2, &c.
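As an illustration, the weighted mean and the weight formulas above may be written in Python thus (all names our own):

    from math import sqrt

    def weighted_mean(values, weights):
        # A = (A1*w1 + A2*w2 + ...) / (w1 + w2 + ...)
        return sum(a * w for a, w in zip(values, weights)) / sum(weights)

    def weight_of_series(errors):
        # w = n / (2 * (e1^2 + e2^2 + ...)) for a series of n observations.
        n = len(errors)
        return n / (2 * sum(e ** 2 for e in errors))

    def probable_error_from_weights(weights):
        # E = .477 / sqrt(w1 + w2 + ...)
        return 0.477 / sqrt(sum(weights))

Thus two determinations of weights 2 and 3, say 10.0 and 10.5 (figures chosen merely for illustration), give weighted_mean([10.0, 10.5], [2, 3]) = 10.3.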

Probable Error of Two or More Variables. Suppose we have a number of observations of several quantities, x, y, z, and know that they are so connected that we shall always have 0 = 1 + ax + by + cz. If the first term of the equation does not equal 1, we may make it so, by dividing each term by it. Call the various values x assumes x′, x″, x‴, those of y, y′, y″, y‴, and those of z, z′, z″, z‴, and so on for any other variables which may enter. If we have more observations than variables, it will not in general be possible to find any values of a, b and c which will satisfy them all, but we shall always find the left hand side of our equation instead of being zero will become some small quantity, e′, e″, e‴, so that we shall have:

e′ = 1 + ax′ + by′ + cz′,
e″ = 1 + ax″ + by″ + cz″,
e‴ = 1 + ax‴ + by‴ + cz‴,

and so on, one equation corresponding to each observation. These are called equations of condition. Now we wish to know what are the most probable values of a, b and c, that is, those which will make the errors e′, e″, e‴, as small as possible. As before, we must have the sum of the squares of the errors a minimum. We therefore square each equation of condition, and take their sum; differentiate this with regard to a, b and c, successively, and place each differential coefficient equal to zero. These last are called normal equations, and correspond to each of the quantities a, b and c, respectively. The practical rule for obtaining the normal equations is as follows:— Multiply each equation of condition by its value of x (or coefficient of a), take their sum and equate it to zero. Thus x′(1 + ax′ + by′ + cz′) + x″(1 + ax″ + by″ + cz″) + &c. = 0, is the first normal equation. Do the same with regard to y, and each other variable in turn. We thus obtain as many equations as there are quantities a, b and c to be determined. Solving them with regard to these last quantities, and substituting in the original formula 0 = 1 + ax + by + cz, we have the desired equation. As an example, suppose we have the three points, Fig. 1, whose coördinates are x′ = 1, y′ = 1, x″ = 2, y″ = 2, x‴ = 3, y‴ = 4, and we wish to pass a straight line as nearly as possible through them all. We have for our equations of condition: 0 = 1 + a + b, 0 = 1 + 2a + 2b, 0 = 1 + 3a + 4b. Applying our rule, we multiply the first equation by 1, the second by 2, and the third by 3, the three values of x, and take their sum, which gives 1 + a + b + 2 + 4a + 4b + 3 + 9a + 12b = 6 + 14a + 17b = 0. For our second normal equation we multiply by 1, 2 and 4,

Fig. 1.
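The example may be completed and checked by a short Python sketch (an illustration added here; it forms both normal equations by the rule above, the second of which, and the resulting values of a and b, are not reached in the excerpt):

    # Fit 0 = 1 + a*x + b*y to the three points of Fig. 1 by the rule above.
    points = [(1, 1), (2, 2), (3, 4)]    # (x', y'), (x'', y''), (x''', y''')

    Sx  = sum(x for x, y in points)      # 6
    Sy  = sum(y for x, y in points)      # 7
    Sxx = sum(x * x for x, y in points)  # 14
    Sxy = sum(x * y for x, y in points)  # 17
    Syy = sum(y * y for x, y in points)  # 21

    # First normal equation:  Sx + a*Sxx + b*Sxy = 0  (6 + 14a + 17b = 0).
    # Second normal equation: Sy + a*Sxy + b*Syy = 0  (7 + 17a + 21b = 0).
    det = Sxx * Syy - Sxy * Sxy
    a = (Sy * Sxy - Sx * Syy) / det
    b = (Sx * Sxy - Sy * Sxx) / det
    print(a, b)   # -1.4 and 0.8, i.e. the line 0 = 1 - 1.4x + 0.8y

The resulting line, y = 1.75x − 1.25, passes as nearly as possible through the three points in the sense that the sum of the squares of the errors of the equations of condition is a minimum.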
