
Appropriate Assessment*

S. C. Parikh

Energy Division

Oak Ridge National Laboratory

Oak Ridge, Tennessee

Introductory Remarks

I had planned a talk that included a number of things dealing with the steps I intend to take in documenting and using a model that I have been developing on the PILOT modeling project at Stanford. However, since we are behind schedule, and since many of the things dealing with this model, called the Welfare Equilibrium Model (WEM), can be found in a document that I am in the process of preparing [1], I will make my talk somewhat briefer and concentrate on the issues related to model assessment. I will do this, however, not from the perspective of a professional model assessor, nor from that of a phantom politician decision-maker who just lost an election because he blindly voted in accordance with the recommendation from a computer run of a model developed by his political foe, nor from that of a phantom politician decision-maker who just lost an election because he blindly voted following the advice contained in a shoddy model assessor's report on the output of a reasonable and valid model, but from my current perspective of a model builder who is concerned with improving models and their contribution in the public policy arena.

Some of the things that I wanted to say on assessment have already been said in this workshop, now about a day and a half old. At the same time, some of the things that I would not have said have also been said, and it therefore appears useful to add my somewhat sketchy remarks.

My talk is divided into two parts. First, I would like to make four introductory remarks that are very much on my mind today and that I would like to share with you. Next, I would like to present the key point of my talk, which is the concept of appropriate assessment. My first remark, consisting of assorted but related observations, has to do with the contribution and role of model assessment.

*This paper was prepared while the author was at the Systems Optimization Laboratory, Department of Operations Research, Stanford University. It was presented at the Workshop on Validation and Assessment Issues of Energy Models, National Bureau of Standards, Washington, D.C., on January 10-11, 1979.

Listening to the professionals in the field of assessment, the impression I get is that a great deal of the product development consists of taxonomy. It would appear that adding to and expanding the existing taxonomy, including rhetorical overtones for some of the choices, is regarded as the way to improve the understanding of models.

On this score, I heard a talk by Bill Hogan [2] a couple of months ago in which he made the statement that "analysis of analysis is a growth industry." Yesterday, I heard Dave Wood say something to the effect that this industry has experienced rapid growth and might experience an equally rapid decline. He also talked of internalization of assessment. In terms of the matrix that Greenberger put up on the chalkboard a little earlier today, one might think of internalization of assessment as reducing research activity in cell (3,3) (third-party assessments in an institutionalized framework) and increasing the activity in cells (1,1) (first-party, or modeler-initiated, assessments in an uninstitutionalized framework) and (1,3) (first-party, or modeler-initiated, assessments in an institutionalized framework).

All of these assorted observations lead to the point of my remark: "In relative terms, should the trend be less toward model development and use, and more toward model assessments and assessments of model assessments?" If we have a workshop four years from now, will that workshop focus on modeling and its contribution to the understanding of issues, or will it be on assessments of the assessments that were done a few years earlier?

The second point I would like to make arose, I am quite sure not for the first time, during the informal discussions at the coffee break yesterday. Alan Goldman, Roger Glassey, I, and a couple of others were having a coffee chat, and one of us, I believe it was Goldman (I stand corrected if it was not, and take the blame myself), commented that, in any organization, the complexity of a model eventually goes beyond the point of manageability. I would like to add two more observations: first, that very few organizations are capable of building complex models, and second, that in-depth assessments, of the MIT Assessment Laboratory type, can be performed only on a few models because they are costly. Does this mean that we are headed toward fully assessed, complex, large-scale models that are unmanageable and therefore cannot be extensively used, even though they are credible?

The third of my introductory remarks has to do with a working definition of a large-scale model. Again I go back to that coffee conversation, which included an idea. We talked of a large-scale model as one that is just large enough to allow one person to develop it, operate it, and use it, either independently or for one or more users. Some help from experts from varied disciplines during model formulation and during the initial stages is acceptable, but this notion of "pushing the limits of one person in managing it" is perhaps a very useful way to think about a large-scale model that is usable. Using this working definition, one might think of a usable large-scale modeling system as a collection of modelers and models, each modeler operating one model. In response to a particular inquiry, an appropriate subset of the modelers collaborates to produce quantitative analyses iteratively: each modeler produces outputs from a given set of inputs, the modelers exchange tables of numbers that revise the inputs for the next iteration, the next round of outputs is generated on the basis of the revised inputs, and so on, until a satisfactory intermodel correspondence is achieved.
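To make that iterative exchange concrete, here is a minimal sketch, assuming two invented toy models (a supply modeler and a demand modeler), invented coefficients, an invented convergence tolerance, and an invented round limit; none of these appear in the talk, and the sketch only illustrates the round-by-round exchange of numbers until the models correspond.

```python
# Purely illustrative sketch of modelers iterating to intermodel correspondence.
# Both model functions and all numbers below are hypothetical stand-ins.

def supply_model(demand):
    """Hypothetical supply modeler: price rises with the quantity demanded."""
    return {"price": 10.0 + 0.5 * demand["quantity"]}

def demand_model(supply):
    """Hypothetical demand modeler: quantity falls as the price rises."""
    return {"quantity": 100.0 - supply["price"]}

def iterate_until_correspondence(tol=1e-6, max_rounds=100):
    """Exchange 'tables of numbers' between the two modelers until the
    revised inputs stop changing, i.e. a satisfactory correspondence."""
    demand = {"quantity": 50.0}  # initial guess handed to the supply modeler
    for round_number in range(1, max_rounds + 1):
        supply = supply_model(demand)    # supply modeler's output for this round
        revised = demand_model(supply)   # demand modeler revises the inputs
        if abs(revised["quantity"] - demand["quantity"]) < tol:
            return round_number, supply, revised
        demand = revised                 # exchanged numbers feed the next round
    raise RuntimeError("no satisfactory intermodel correspondence reached")

if __name__ == "__main__":
    rounds, supply, demand = iterate_until_correspondence()
    print(f"correspondence after {rounds} rounds: "
          f"price={supply['price']:.2f}, quantity={demand['quantity']:.2f}")
```

In this toy setting the exchange settles down because each revision moves the numbers only part of the way; in a real multi-model system, whether and how quickly such correspondence is reached depends entirely on the models being linked.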

My fourth introductory remark has to do with modeling as a way for quantitative analysts or technicians to participate effectively in the political process. The political process has, by and large, been inaccessible to technical groups, and modeling provides a vehicle for this involvement.

With these introductory remarks, let me move on to the key point that I would like to make in the remainder of my talk.

Using Analysis in Public Decision Making

In Exhibit 1, I have attempted to draw a schematic to conceptualize what I have in mind when we say "using quantitative analysis in decision making." At the center, we have decision makers, planners, legislators, and so on. They receive inputs from many different sources, such as their constituents and lobbyists. Quantitative analysis forms one such input. These inputs mold their thinking with regard to the problem at hand and aid them in developing plans, reaching decisions, or deciding on their vote. More often than not, they have staff assistants who are responsible for analyzing and evaluating these inputs, identifying the implications of a particular decision, and developing recommendations on an optimal decision.

At the bottom of the exhibit, I have shown a professional group and its wares. This group, you might say, is a group of quantitative analysts, econometricians, engineers, operations researchers, and model assessors. The professionals in this group work with some information base. By information base I mean raw observations from reality as well as transformed data. The transformed data might be obtained through the use of models. The modelers use some of these data and produce transformations (in the form of computer printouts from models), which are also included in the information base. Scenarios and tabulations produced by the Energy Information Administration might be viewed as part of the information base.

[Exhibit 1: Schematic of the use of quantitative analysis in public decision making]