Mr. McKay: First part.

Dr. Murphy: The first part of the talk was really only trying to come up with a sensible organization for the validation process. Could we compartmentalize it on the various model components? The answer is no. Can we try to eliminate people's value judgments, of degree as well as of difference? The answer was no. So far as pragmatic model validation is concerned, I think the general conclusion of the first part is that there is no simplification of the process. That doesn't mean that I'm going to convert that into a point of action, except to realize that you have to worry about all the complexity.

Dr. Wood: A few comments and a question Fred. It seems to me it is a bit strong that the MIT approach does not include invalidation correspondence of the model with theory. For example, I think a lot of our report is given over to the structural validity of that model. I think that there is also a strong emphasis, perhaps the greatest emphasis, on the validity of the model with respect to particular policy applications. So this notion that we tend to always be saying what is wrong with the model, it is important to put that in context. It is always in the context of a particular application. I think also that we pay attention to the other elements of validation. For example, in the discussion of the demand model, that is almost entirely a structural validation exercise in which we compare that model to economic theory and make inferences about what we would expect the model to look like, check it, and then verify that it, in fact, is implemented that way.

I found your discussion of how you are organizing it extremely provocative, and I think it is going to be interesting to see how you are going to thread your way between assessment and countermodeling. That is, you are talking about constructing a countermodel now which involves a different formulation of utility investment behavior, profit maximizing under a regulated rate of return, and that is going to involve a new model. At the end of that process it will be interesting to see what you can say about validity and verification of the original model and how that relates to your countermodeling activities. That is what I am concerned about and is what Dave Kresge was talking about. Won't your objectives become a little mixed? On the one hand you are a scientist seeking knowledge on how utilities behave, and on the other hand you are an analyst of this particular model attempting to validate and verify it, and it seems to me that is a tricky road to walk down.

Mr. Murphy: Is that the question?

Dr. Wood: There is a question in that. The third thing I was going to say was that I wondered how you reacted to--that is, there are other people, Saul is one of them, who feel that you can structure scoring systems for comparing models, and I was wondering what you thought about those propositions?

Mr. Murphy: The first thing is, I guess I should clarify my statement: the MIT approach was to start in the model and then work out. The approach we are taking is, let's not even look at the model to begin with, and let's ask what we ought to have--what are we looking for. Essentially, I am pursuing in part Hoff's approach, which says the assessment process ought to be parallel to the model building process because you want to know what you need first. So my approach is to start and ask what is the menu of things that you think you ought to have, and then start asking, does the model capture that? The MIT assessment started with--and again it is because of the circumstances--I know the PIES Utility model, you didn't know the Baughman-Joskow model--see, you went through and you did an assessment of what was there, which then led you to look outward: should this have been done differently? So that is the emphasis I meant to make.

Your second question was--I don't know that there can be a distinction in what should be the consequence of the assessment process. Should it just be information for third-party judgments, or should it--a tremendous amount of funds are expended, because model assessment costs about as much as model development--be channeled into offering positive suggestions for model improvements? In other words, the glass-is-half-full rather than the glass-is-half-empty philosophy, and so I don't see any conflict there. I see that there is a possibility for the assessor to become stale if he lives within the world of his model too long, but I think that the consequence of the assessment process has to be improvement, or a statement that improvement is unnecessary. The only way to know if the improvement is unnecessary is to try the improvement and to measure the difference and see if the difference is worth the effort.

The question was, can you put a rating scheme on models? Well, if you go other than zero and one, or minus one and plus one, and you go from zero to twenty, including all the numbers in between or the integers in between, you have gone to a cardinal scheme. Given what we said in the first part, you can still be consistent. Essentially, what happens when you go to a cardinal scheme is that the property you give up is independence. George Lady has a very nice corollary to that axiom showing really the extent to which that is so--where the ordinality is the key requirement. As soon as you add more than two numbers you are cardinal.
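As a purely illustrative aside, and not part of the discussion itself, the ordinal/cardinal distinction invoked here can be written out. The following is a minimal sketch with assumed notation: models a and b scored on criteria i = 1, ..., n, with s_i a score, w_i a weight, and phi any strictly increasing transformation.

\[
  s_i(a) \ge s_i(b) \iff \phi\bigl(s_i(a)\bigr) \ge \phi\bigl(s_i(b)\bigr)
  \qquad \text{(ordinal: only the ordering matters, so the ranking survives any increasing } \phi\text{)}
\]
\[
  S(a) = \sum_{i=1}^{n} w_i \, s_i(a), \qquad s_i(a) \in \{0, 1, \dots, 20\}
  \qquad \text{(cardinal: score levels and differences carry meaning and are traded off across criteria)}
\]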

REFERENCES AND BIBLIOGRAPHY*

[1] Arrow, K.J., Social Choice and Individual Values, New York, Wiley, 1963.

[2] Fishburn, P.C., "A Survey of Multiattribute/Multicriterion Evaluation Theories," in Stanley Zionts (ed.), Multiple Criteria Problem Solving, Springer-Verlag, New York, New York, 1978.

[3] Fishburn, P.C., "Lexicographic Orders, Utilities and Decision Rules: A Survey," Management Science, Volume 20, No. 11, 1974, pp. 1442-1471.

This paper was prepared to contribute to the solution of the definitional problem of model validity. It was discussed at the Symposium For Model Assessment/Validation at the National Bureau of Standards (January 10-11, 1979) funded by the Energy Information Administration (EIA).

The authors wish to thank George Lady for his many helpful comments and Pat Green for her typing of this paper.

Additional copies of this report are available from:

Energy Information Administration Clearinghouse
1726 M Street, N.W., Room 210
Washington, D.C. 20461
202-634-5641

See also the voluminous bibliography provided by Fishburn [2,3].

THE IMPACT OF ASSESSMENT ON THE MODELING PROCESS

David Nissen*

When Saul Gass invited me to present a paper to this conference, I welcomed the opportunity for two reasons. First, it offered a chance to organize my personal perspective on a very exciting and fruitful period of my own professional life. (In 1974-77, I participated in, and later directed, the Project Independence Evaluation System (PIES) modeling and policy analysis activity at the Federal Energy Administration.) Second, I could present for public scrutiny some hard-bought and, I hope, useful lessons drawn from that experience.

BACKGROUND

Energy modeling for policy analysis is a burgeoning industry by any standard. The PIES effort served as a constituent of this success, and as an example of the problems which success creates. PIES also served as a seed-irritant in the energy policy advocacy process, which set in motion forces leading to a focused and institutionalized concern with energy model validation and assessment. Our presence at this workshop is a consequence of that concern. To understand this concern and how to meet it, it is valuable to examine the context in which it evolved.

PIES was initially developed to coordinate the quantitative assessment of the Administration's response to the embargo and oil price run-up of 1973-74 and the changed energy perspective which these events induced.

At first, the modelers had to convince the immediate clients, their management, of the accuracy and relevance of the model, and of its responsiveness within the policy decision horizon. This occurred during, not after, development of the model, which meant the first level of users was unusually familiar with the innards of the model. (The point is that the decision to develop and use the model was itself a policy issue--it was expensive and risky and required a lot of interagency organization. The fine structure of a model in place could never have commanded or sustained this level of attention by the management on its own merit.)

*The author is Vice-President, Energy Economics, Chase Manhattan Bank, N.A. He is grateful for comments and criticisms to Edward Cazalet, Harvey Greenberg, William Hogan, David Knapp, George Lady, Fred Murphy, Lee Nissen, Warner North, James Sweeney, and James Wallace. The views expressed here are the author's and do not represent the position of any institution.

Because of the client/management's familiarity, modelers were asked to be, and were willing to be, much more adventurous in modeling scope--the breadth of phenomena and policy issues that were integrated into the model--than is the case in the more usual analyst/policy-maker relationship.

In other words, from the viewpoint of both the client/management and the analyst/modeler there was a high immediate payoff to model enhancement for new or more accurate and sophisticated policy evaluation while at the same time there was minimal immediate need for the more formal exegesis (including but not limited to documentation) that a more distant relationship between modeler and client requires for success.

It is not surprising that the allocation of resources within the modeling process reflected this emphasis on development, to the detriment of investment in formal external communication of the model's nature. This emphasis is apparent throughout the four major epochs of PIES' formal existence (the name and function of the model have changed under the present DOE management and organization). These are:

1974--construction of data and logic of the first version (the competitive equilibrium version) of PIES to produce the quantitative analysis for the Project Independence Report,

1975--extension of structure, including oil price-control modeling, and consolidation and extension of data to make the model reliable and robust for state-of-the-world and policy scenario variations published in the 1976 National Energy Outlook,

1976--refined capability for policy analysis including gas regulation modeling (82 scenarios implementing a 50-page policy analysis specification) published but not disseminated in the 1977 National Energy Outlook (Draft),

1977--analysis of the National Energy Plan--adaptation of the model's structure to coordinate analysis of the conservation, fuel pricing, and fuel management policy options being considered and advocated by the present administration, the results being published in the April 1977 white book, The National Energy Plan (Energy Policy and Planning, Executive Office of the President, April 29, 1977), and subsequent White House fact sheets and backup documentation.

By September of 1977, when the Carter Administration's National Energy Plan had reached the Senate (there to languish for a year), the education which the model could provide, and which the Administration was willing to absorb, was largely complete. The national debate on energy policy had been joined on larger questions of institutional means and of interclass and intergenerational equity, which PIES couldn't begin to organize or resolve.
