
only for overload (its resistance factor is unity). Assigning a larger factor to live load than to dead load reflects the fact that the variability in live load is known to be larger than that in dead load, and thus is a tacit attempt to make the safety more uniform over the range of design situations.

However, the load and resistance factors have been selected more or less on the basis of subjective judgment in the past. While they may seem reasonable intuitively, there is no assurance that the design criteria are entirely consistent with the performance objectives of the groups that develop them. In the context of the limit states design process discussed in Section 1.1, Step 2 cannot be completed in a rational manner.

1.2.3 Probability-Based Limit States Design

In Section 1.1, limit states design was defined as a three-stage procedure, the second stage of which involves determination of acceptable levels of safety against the occurrence of each limit state. In probability-based limit states design, probabilistic methods are used to guide the selection of load factors and resistance factors which account for the variabilities in the individual loads and resistances and give the desired overall level of safety. This is described further in Chapter 2. It should be emphasized that the designer deals with load factors and resistance factors similar to those in Eqs. 2.1 and 2.2 and is never required to consider probabilities per se. The particular format adopted in this report is referred to as load and resistance factor design (LRFD).
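As a minimal illustration of the checking format the designer works with, the sketch below evaluates a factored design inequality of the form phi*Rn >= gammaD*Dn + gammaL*Ln; the factor values shown are assumptions for illustration only, not values derived in this report:

```python
# Illustrative sketch of an LRFD-style design check.
# The factors (phi = 0.9, gamma_D = 1.2, gamma_L = 1.6) are assumed
# here for illustration; deriving such values is the subject of this report.

def lrfd_check(R_n, D_n, L_n, phi=0.9, gamma_D=1.2, gamma_L=1.6):
    """Return True if factored resistance >= factored load effect."""
    return phi * R_n >= gamma_D * D_n + gamma_L * L_n

# Example: nominal resistance 100, dead load effect 40, live load effect 25.
print(lrfd_check(R_n=100.0, D_n=40.0, L_n=25.0))  # 0.9*100 = 90 >= 88 -> True
```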

The principal advantages of probabilistic limit states design are:

(i) More consistent reliability is attained for different design situations because the different variabilities of the various strengths and loads are considered explicitly and independently.

(ii) The reliability level can be chosen to reflect the consequences of failure.

(iii) It gives the designer a better understanding of the fundamental structural requirements and of the behavior of the structure in meeting those requirements.

(iv) It simplifies the design process by encouraging the same design philosophy and procedures to be adopted for all materials of construction.

(v) It is a tool for exercising judgment in non-routine situations.

(vi) It provides a tool for updating standards in a rational manner.

The remainder of this report is devoted to the derivation of load factors that are suitable for a wide range of loadings and structural materials.

2. PROBABILISTIC BASES OF STRUCTURAL RELIABILITY

2.1 Historical Development

Engineering decisions must be made in the presence of uncertainties arising from inherent randomness in many design parameters, imperfect modeling, and lack of experience. Indeed, it is precisely on account of these uncertainties and the potential risks arising therefrom that safety margins provided by the specification of allowable stresses, resistance factors, load factors, and the like, are required in design. While strength and load parameters are nondeterministic, they nevertheless exhibit statistical regularity. This suggests that probability theory should furnish the framework for setting specific limits of acceptable performance for design.

The idea that dispersion (or statistical variation) in a parameter such as yield stress or load should be considered in specifying design values is not new, and many standards have recognized this for some time. For example, the design wind speeds and ground snow loads in ANSI Standard A58.1-1972 [2] are determined from the probability distributions for the annual extreme fastest mile wind speed and the annual extreme ground snow load. For ordinary structures, the design value for these parameters is the value that has a probability of 0.02 of being exceeded in any year (the 50-year mean recurrence interval value). Similarly, the acceptance criteria for concrete strength in ACI Standard 318-77 [19] are designed to insure that the probability of obtaining concrete with a strength less than $f_c'$ is less than 10 percent. Other examples could also be cited. An appreciation of the philosophy underlying such provisions is essential: in the presence of uncertainty, absolute reliability is an unattainable goal. However, probability theory and reliability-based design provide a formal framework for developing criteria for design which insure that the probability of unfavorable performance is acceptably small.
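To make the mean recurrence interval concrete, a short sketch (assuming, as such analyses typically do, that annual extremes are statistically independent from year to year):

```python
# Relation between annual exceedance probability and mean recurrence
# interval (MRI), assuming annual extremes are independent.

p_annual = 0.02                # probability of exceedance in any year
mri = 1.0 / p_annual           # mean recurrence interval = 50 years

# Probability of at least one exceedance during an n-year service life:
n = 50
p_life = 1.0 - (1.0 - p_annual) ** n
print(mri)     # 50.0
print(p_life)  # ~0.64: the "50-year" value is more likely than not to be
               # exceeded at least once during a 50-year service life
```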


While this basic philosophy has been accepted for some time, there have been no standards adopted in the United States which synthesize all the available information for purposes of developing reliability-based criteria for design. The use of statistical methodologies has stopped at the point where the nominal strength or load was specified. Additional load and resistance factors, or allowable stresses, were then selected subjectively to account for unforeseen unfavorable deviations from the nominal values. However, probability theory and structural reliability methods make it possible to select safety factors to be consistent with a desired level of performance (acceptably low probability of unsatisfactory performance). This affords the possibility of more uniform performance in structures and, in some areas where designs appear to be excessively conservative, a reduction in costs.

The remainder of Chapter 2 is devoted to describing the procedures used for analyzing reliabilities associated with existing designs and developing the probability-based load criterion for the A58 Standard.

2.2 Analysis of Reliability of Structures

The conceptual framework for structural reliability and probability-based design is provided by the classical reliability theory described by Freudenthal, Ang, Cornell, and others [1,8]. The loads and resistance terms are assumed to be random variables, and the statistical information necessary to describe their probability laws is assumed to be known.

A mathematical model is first derived which relates the resistance and load variables. Suppose that this relation is given by

$$g(X_1, X_2, \ldots, X_n) = 0 \qquad (2.1)$$

in which $X_i$ = resistance or load variable, and that failure occurs when $g < 0$ for any ultimate or serviceability limit state of interest. Failure, defined in a generic sense relative to any limit state, does not necessarily connote collapse or other catastrophic events.

Then safety is assured by assigning a small probability $P_f$ to the event that the limit state is reached,

$$P_f = \int_{g<0} \cdots \int f_X(x_1, x_2, \ldots, x_n) \, dx_1 \, dx_2 \cdots dx_n \qquad (2.2)$$

in which $f_X$ is the joint probability density function for $X_1, X_2, \ldots, X_n$, and the integration is performed over the region where $g < 0$.
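Where the joint density $f_X$ is fully specified, Eq. 2.2 can in principle be estimated by direct sampling. Below is a minimal Monte Carlo sketch for a hypothetical three-variable limit state g = R - D - L with independent normal variables; every numerical value is an assumption for illustration, not data from this report:

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical limit state g = R - D - L with independent normal
# variables; all means and standard deviations are assumed values.
n = 1_000_000
R = rng.normal(100.0, 10.0, n)   # resistance
D = rng.normal(40.0, 4.0, n)     # dead load effect
L = rng.normal(20.0, 8.0, n)     # live load effect

g = R - D - L
p_f = np.mean(g < 0.0)           # sample estimate of Eq. 2.2, P(g < 0)
print(p_f)                       # ~0.0014 for these assumed values
```

Such sampling presumes exactly the complete knowledge of $f_X$ that, as discussed below, is seldom available in practice.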

In the initial applications of this concept to structural safety problems, the limit state was considered to contain just two variables: a resistance R and a load effect Q dimensionally consistent with R. The failure event in this case is $R - Q < 0$, and the probability of failure is computed as

$$P_f = \int_{-\infty}^{\infty} F_R(x) \, f_Q(x) \, dx \qquad (2.3)$$

in which $F_R$ = cumulative probability distribution function (c.d.f.) of R and $f_Q$ = probability density function for Q. If R and Q both have normal distributions, for example, then

$$P_f = \Phi\!\left( - \, \frac{\bar{R} - \bar{Q}}{\sqrt{\sigma_R^2 + \sigma_Q^2}} \right) \qquad (2.4)$$

in which $\Phi$ = standard normal c.d.f., $\bar{R}$ and $\bar{Q}$ = mean values, and $\sigma_R$ and $\sigma_Q$ = standard deviations of R and Q. The ratio of standard deviation to mean, e.g. $V_R = \sigma_R/\bar{R}$, is termed the coefficient of variation (c.o.v.). The c.o.v. is a convenient dimensionless measure of variability or uncertainty and will be referred to frequently in the remainder of the report. Other distributions may be specified for R and Q. When this is done, Eq. 2.3 frequently must be evaluated numerically.
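As a sketch of such a numerical evaluation (with parameter values assumed purely for illustration), Eq. 2.3 can be integrated by quadrature and, for the normal case, checked against the closed form of Eq. 2.4:

```python
import numpy as np
from scipy import stats, integrate

# Assumed illustrative parameters for normal R and Q.
mR, sR = 100.0, 10.0   # mean and standard deviation of resistance R
mQ, sQ = 60.0, 15.0    # mean and standard deviation of load effect Q

# Eq. 2.3: P_f = integral of F_R(x) * f_Q(x) dx over all x.
integrand = lambda x: stats.norm.cdf(x, mR, sR) * stats.norm.pdf(x, mQ, sQ)
p_f_numerical, _ = integrate.quad(integrand, -np.inf, np.inf)

# Eq. 2.4, closed form when both R and Q are normal:
p_f_closed = stats.norm.cdf(-(mR - mQ) / np.hypot(sR, sQ))

print(p_f_numerical, p_f_closed)   # both ~0.0132; they agree to
                                   # within the quadrature error
```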

This provides a basis for quantitatively measuring structural reliability, such a measure being given by $P_f$. It is tacitly assumed that all uncertainties in design are contained in the joint probability law $f_X$ and that $f_X$ is known. However, in structural reliability analyses these probability laws are seldom known precisely due to a general scarcity of data. In fact, it may be difficult in many instances to determine the probability densities for the individual variables, let alone the joint density $f_X$. In some cases, only the first and second order moments, i.e., mean and variance, may be known with any confidence. Moreover, the limit state equation may be highly nonlinear in the basic variables.

Even in those instances where statistical information may be sufficient to define the marginal distributions of the individual variables, it usually is impractical to perform numerically the operations necessary to evaluate Eq. 2.2.

2.3 First-Order, Second-Moment Methods

The difficulties outlined above have motivated the development of first-order, second-moment (FOSM) reliability analysis methods, so called because of the way they characterize uncertainty in the variables and the linearizations performed during the reliability analysis [7,15]. In principle, the random variables are characterized by their first and second moments. While any continuous mathematical form of the limit state equation is possible, it must be linearized at some point for purposes of performing the reliability analysis. Writing the limit state equation as

$$Z = g(X_1, X_2, \ldots, X_n) \qquad (2.5)$$

with failure occurring when $Z < 0$, linearization of the failure criterion defined by Eq. 2.1 leads to

$$Z \simeq g(x_1^*, x_2^*, \ldots, x_n^*) + \sum_{i=1}^{n} \left( \frac{\partial g}{\partial X_i} \right)_{x^*} (X_i - x_i^*) \qquad (2.6)$$

in which $(x_1^*, x_2^*, \ldots, x_n^*)$ is the linearizing point. The reliability analysis is then performed with respect to this linearized version of Eq. 2.1. As might be expected, one of the key considerations is the selection of an appropriate linearizing point.

2.3.1 Mean Value Methods

In earlier structural reliability studies, the point $(x_1^*, x_2^*, \ldots, x_n^*)$ was set equal to the mean values $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_n)$. Assuming the X-variables to be statistically uncorrelated, the mean and standard deviation in Z are approximated by

$$\bar{Z} \simeq g(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_n) \qquad (2.7)$$

$$\sigma_Z \simeq \left[ \sum_{i=1}^{n} \left( \frac{\partial g}{\partial X_i} \right)_{\bar{X}}^{2} \sigma_{X_i}^{2} \right]^{1/2} \qquad (2.8)$$

The extent to which Eqs. 2.7 and 2.8 are accurate depends on the effect of neglecting higher order terms in Eq. 2.6 and on the magnitudes of the coefficients of variation of the $X_i$. If g() is linear and the variables are uncorrelated, Eqs. 2.7 and 2.8 are exact.
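A minimal sketch of the mean value approximations of Eqs. 2.7 and 2.8, using central-difference estimates of the partial derivatives; the limit state function and the moments supplied to it are hypothetical:

```python
import numpy as np

def mean_value_moments(g, means, sigmas, h=1e-6):
    """Approximate the mean (Eq. 2.7) and standard deviation (Eq. 2.8)
    of Z = g(X1,...,Xn), linearizing at the mean values and assuming
    the X-variables are uncorrelated."""
    means = np.asarray(means, dtype=float)
    z_bar = g(means)                            # Eq. 2.7
    var_z = 0.0
    for i, s in enumerate(sigmas):
        x_hi, x_lo = means.copy(), means.copy()
        x_hi[i] += h
        x_lo[i] -= h
        dg_i = (g(x_hi) - g(x_lo)) / (2.0 * h)  # central-difference dg/dXi
        var_z += (dg_i * s) ** 2                # summand of Eq. 2.8
    return z_bar, np.sqrt(var_z)

# Hypothetical nonlinear limit state g = X1*X2 - X3 (a strength term that
# is the product of two variables, minus a load effect). Eq. 2.7 happens to
# be exact for this g; Eq. 2.8 neglects a small higher-order product term.
g = lambda x: x[0] * x[1] - x[2]
z_bar, sigma_z = mean_value_moments(g, [40.0, 50.0, 1000.0], [5.0, 2.5, 200.0])
print(z_bar, sigma_z)   # 1000.0, ~335.4
```

The ratio $\bar{Z}/\sigma_Z$ computed from these two quantities is the reliability index defined next.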

The reliability index $\beta$ (in some studies, $\beta$ is termed the safety index) is defined by

$$\beta = \frac{\bar{Z}}{\sigma_Z} \qquad (2.9)$$

which is the reciprocal of the estimate of the c.o.v. in Z. This is illustrated in Fig. 2.1, which shows the densities of Z for two alternate representations of the simple two-variable problem Z = g(R, Q) = 0 discussed in the previous section (Eq. 2.2, et seq.). $\beta$ is the distance from $\bar{Z}$ to the origin in standard deviation units. As such, $\beta$ is a measure of the probability that g() will be less than zero. Fig. 2.1a shows the probability density function (generally unknown) for $Z = R - Q$. The shaded area to the left of zero is equal to the probability of failure. Observe that if $\sigma_{R-Q}$ remains constant, a positive shift in $\bar{R} - \bar{Q}$ will move the density to the right, reducing the failure probability.

Thus an increase in $\beta$ leads to an increase in reliability (lower $P_f$). Alternatively, if Z is normally distributed, $P_f = \Phi(-\beta)$, so that requiring a minimum $\beta$ ensures that the reliability is at least $1 - \Phi(-\beta)$. For the formulation of Fig. 2.1a, Eq. 2.9 becomes

$$\beta = \frac{\bar{R} - \bar{Q}}{\sqrt{\sigma_R^2 + \sigma_Q^2}} \qquad (2.10)$$

Figure 2.1b shows an alternate formulation derived from $Z = \ln(R/Q)$, in which failure again corresponds to $Z < 0$ and

$$\beta = \frac{\overline{\ln(R/Q)}}{\sigma_{\ln(R/Q)}} \qquad (2.11)$$

Using the alternative formulation of Fig. 2.1b, and using the small-variance approximations $\overline{\ln(R/Q)} \simeq \ln(\bar{R}/\bar{Q})$ and $\sigma_{\ln(R/Q)} \simeq \sqrt{V_R^2 + V_Q^2}$, one obtains

$$\beta \simeq \frac{\ln(\bar{R}/\bar{Q})}{\sqrt{V_R^2 + V_Q^2}} \qquad (2.12)$$

Eq. 2.11 was the basis for an early recommendation for a probability-based structural code [22], while Eq. 2.12 was the basis for the development of probability-based load and resistance factor design criteria for steel structures [9].
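To see numerically that the computed reliability index depends on the formulation adopted, a short sketch (all statistics assumed for illustration) evaluates $\beta$ by the normal format of Eq. 2.10 and by the lognormal format of Eq. 2.12 for the same data:

```python
import numpy as np

# Assumed illustrative statistics for R and Q.
R_bar, V_R = 100.0, 0.10   # mean and c.o.v. of resistance
Q_bar, V_Q = 60.0, 0.25    # mean and c.o.v. of load effect

# Normal format, Eq. 2.10: beta = (R - Q) / sqrt(sigma_R^2 + sigma_Q^2).
beta_normal = (R_bar - Q_bar) / np.hypot(V_R * R_bar, V_Q * Q_bar)

# Lognormal small-variance format, Eq. 2.12.
beta_lognormal = np.log(R_bar / Q_bar) / np.hypot(V_R, V_Q)

print(beta_normal, beta_lognormal)   # ~2.22 vs ~1.90: the computed beta
                                     # depends on the formulation chosen
```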
