
I think it is wrong to review a model in the abstract. It should be reviewed in the context of what it was designed to do, or one of the things it was designed to do. I think looking at the real-world problem helps to focus one's thinking on what is important and what is not important.

If the assessor wants to suggest improvement opportunities, I think they should be done within some perspective. They might say what could be done, indicate the benefits of doing it, assess the feasibility of doing it, and then estimate the cost of doing it. People have suggested, for example, that our model ought to incorporate ash. I know that would be very expensive, however, and I can see very small benefits. The only appropriate way to suggest this kind of improvement is after having analyzed the benefits and the costs.

Also, I don't think it is reasonable to review a model alone. I think that is a silly thing to do. A model ought to be reviewed along with the analyst or group of analysts. Let me use the example of my Texas Instruments calculator. It is a very fine piece of technology. Give it to me, and with a little bit of time, I can compute a present value. My four-year old son is very bright, but give it to him, and he could not. It would be inappropriate to evaluate the calculator based on my son's performance.

I think that kind of analogy holds with models. Models are tools. In the hands of some craftsmen or analysts, they can be useful. In the hands of others, they may be useless or even destructive, if they become misleading.

Common Mistakes

Although I don't have time to dwell on them, I want to mention two common mistakes in modeling. One is the definition of the term "price." I think that people do not define price well; they often mix time periods and other notions; thus, the assessor should be very careful about the definition of price.

The same is true for "inflation." The problem is not as serious as it used to be. But, very often inflation, and its effects on the financial variables, are not treated consistently within a model.

I will end with something on a light note. Our model was used, as I mentioned, on the new source performance standards study for DOE and EPA, whose purposes and interests often conflict in that arena. Soon the environmentalists began to use it, and the industry began to trust it. I was very pleased that our model and our use of it were accepted by all these groups. That was a goal of mine, to create a tool and a reputation, so that those divergent groups would all be willing to use the same calculator, as in the analogy I used earlier. And that happened.

However, I think things got carried away, because we went through three phases of the analysis, and in the last phase we had 22 scenarios. I thought it was best summed up by a reporter, who called me after the last time these numbers were presented. He was asking some cogent questions, and that had seldom happened in the past, so the process was working. But then at the very end, he asked a very good question. He said, "Are you going to make any more runs?" I said, "No, I don't think so."

He said, "Thank God!"

DISCUSSION

Mr. Ford (Los Alamos): I have a question for Hoff. And it is in regard to the remark that, in attempting to reproduce historical behavior, you need to know the expectations--in this case the expectations of the electric utility officials--and since those are not known, perhaps the exercise of reproducing historical behavior could be skipped. And that line of reasoning could apply to almost all models and, therefore, one might say, well, we will skip this particular test, and all attempts to increase our confidence in a model.

I would suggest, if you can't get good information on what people anticipated for the price of oil, and so forth, you demonstrate a set of expectations that, when fed into the model, create the historical investment decision, and then show those expectations so people can look at them and say they seem reasonable.

So, for a model that said electric utility officials expected the price of oil to go up by ten-fold in the next five years, one might suspect that that was an input that was jimmied to get the right investment decision. So, that would provide me, if I were looking at the model, one more test to look at to see how much confidence I could have in the device.

Dr. Stauffer: I think that is a good idea. But my comment was that you can't use actual historical data to feed the model. You must use expectations. You don't know expectations, so, you have to estimate them. Therefore, estimating the past is not necessarily any easier than estimating the future. But you have an interesting idea. I think that would be fun. It is a good point.
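The test Mr. Ford proposes amounts to backing expectations out of the model: search for an assumed expectation path that, when fed to the investment-decision logic, reproduces the historical choice, and then display that path so reviewers can judge whether it is plausible. The sketch below is a minimal illustration of that idea under invented assumptions; the decision rule, prices, and planning horizon are hypothetical and are not drawn from the model discussed here.

```python
# A toy illustration of the "implied expectations" test: find an expected
# oil-price growth rate that makes a hypothetical utility decision rule
# reproduce the historical choice, then report it for plausibility review.
# All numbers, and the rule itself, are illustrative assumptions.

def chooses_coal_over_oil(expected_oil_growth: float) -> bool:
    """Hypothetical rule: build coal if oil is expected to get expensive
    enough over the planning horizon."""
    oil_price_now = 15.0          # $/bbl, assumed starting price
    horizon_years = 5
    expected_oil_price = oil_price_now * (1 + expected_oil_growth) ** horizon_years
    coal_breakeven_price = 30.0   # assumed oil price at which coal wins
    return expected_oil_price >= coal_breakeven_price

def implied_growth_rate(historical_choice_was_coal: bool):
    """Return the smallest candidate growth rate consistent with the
    historical decision, or None if no candidate reproduces it."""
    for g in (i / 100 for i in range(0, 101)):   # 0% .. 100% per year
        if chooses_coal_over_oil(g) == historical_choice_was_coal:
            return g
    return None

if __name__ == "__main__":
    g = implied_growth_rate(historical_choice_was_coal=True)
    print(f"Implied expected oil-price growth: {g:.0%} per year")
    # A reviewer can now ask whether that expectation seems reasonable,
    # or whether it looks like an input jimmied to get the right decision.
```

An expectation that reproduces history only if oil is assumed to rise ten-fold in five years, as in Mr. Ford's example, would stand out immediately in such a display.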

Dr. Nissen (Chase Manhattan Bank): Hoff, I would like to ask a leading question. And I am sorry that Lincoln Moses has just left, because he was the intended audience, but perhaps, the record can show the question.

One of the things that you have done is to provide us with a very impressive list of the kinds of data that have to go into even a piece of an energy system's model at the kind of level that generically we are talking about. Not simply data about reserves, and so forth, the kinds of data that the constituency of the Bureau of Mines is used to responding to. But data about costs, measurements of price, economic quality impacts of beneficiation and preparation, data about transportation costs. And then when you get into utilities, you really get into the hard data--data with, what you might call, high analysis content. It is really not data that is recorded on a form, but data which is the output of an analysis process itself: operating costs, environmental regulation impacts, generating, transmission, and distribution, scrubbing performance standards and impacts, and so forth and so on. The question I have is how much help was the data side of EIA in producing the data which went into your model?

I ask this, remembering the fact that we were all very proud in 1974 that Eric Zausner's group, at the time he was an assistant administrator, was called data and analysis and that was to bring about a wonderful symbiosis. I also remember how it looked, four years later, when I left.

The second part of this leading question is, how responsive do you anticipate the EIA data side will be in the effort to respond to deficiencies in the data, as it is collected in the near future? That is, is there any substantive interaction between you and the so-called data groups within EIA?

Dr. Stauffer: The last question first. How responsive do I think they will be? I just don't know. Within the last six months or a year, with one exception, there has been essentially no interaction between us and them, but there may be interaction between the analytic part of EIA and the data part, and that I don't know about.

On the question of how much value all that data they collect has, the answer is some. It has evolved over time. For example, for reserves, we used to use their reserve data exclusively. We are getting to the point now where we are going to the raw geology reports and modifying and adding to that data base. On the power plant side--the analytic side, things like capacity--where we are is that we used to use their data, then we concocted what we call a master list, and then we compare that master list to every other data source we ever see. When it is different, we call the power plant directly.

So, we think we now, and we call it our own, have the most updated variety of that. Lots of the inputs to the model, however, are not historical data or measurable things. They are engineering estimates. Like how much does a new power plant cost? And that kind of a number usually comes out of an analysis shop, or a technology shop, or they...

Dr. Nissen: You mean traditionally it has come out of an analysis shop.

Dr. Stauffer: Traditionally it has done that.

Dr. Nissen: What we can record is that the primary information side of the Energy Information Administration is providing almost no information to the analysis function.

Dr. Stauffer: Well, you said that.

Dr. Nissen: Excuse me. Institutional imperatives are to provide information to the cops, but not to the analysts.

VALIDATION: A MODERN DAY SNIPE HUNT?

CONCEPTUAL DIFFICULTIES OF VALIDATING MODELS

Peter W. House and Richard H. Ball

U.S. Department of Energy

Search for a Valid Model

Several years ago, one of the authors wrote a paper entitled, "Diogenes Revisited, the Search for a Valid Model."1/ Later, with John McLeod, this theme was taken up and made into a chapter in a book on large-scale modeling.2/ Rather than repeat the arguments given there, let us capsulize and extend some of them here. In addition, we want to discuss a slightly different perspective which tries to suggest approaches to validation. The arguments can be focused on the following eight areas:

• Social science models often deal with phenomena at an empirical level where immutable natural laws cannot be ascertained; therefore, historical validation, even if possible, gives limited confidence that the resulting model has validity for predicting the long-term future.

• The formal statistical techniques used for validation are based on assumptions about the nature of the sample, such as the assumption that the future has some known relation to the past; hence, they may have difficulties similar to those of historical validation.

• There is little agreement as to what it means to be able to predict the future, with or without formal models.

• We are still very much in our infancy when it comes to measuring the state-of-the-present, using such techniques as indicators; consequently, we are hard put to say whether we have reasonable gauges with which to measure the future.

• Complex models are harder to validate than simple ones; but for most modern day problems, more complex approaches are necessary for policy analyses.

• Validation must be considered in relation to the type of model and to the purpose for which the model is used. Each combination has different implications for the feasibility, appropriateness and specific technique of validation.

• Models used to aid decisions and policy analysis should be judged on the basis of their utility in aiding decisions relative to alternative procedures, rather than on the same basis as models used in science.

• There are risks in insistence on validation, since inappropriate application of validation could unfairly discredit models that have real utility.

1/ House, Peter W., Diogenes Revisited, the Search for a Valid Model, March 1974, Washington Environmental Research Center, Office of Research and Development, U.S. Environmental Protection Agency, Washington, D.C.

2/ House, Peter W., and McLeod, John, Large-Scale Models for Policy Evaluation, New York: John Wiley and Sons, Inc., 1977, pp. 66-75.
