
DR. GASS: The Panel has priority, Harvey.

DR. GREENBERGER: When you look at the dilemma that the economics profession faces today trying to understand and explain a stagnant economy that suffers simultaneously from unemployment and inflation, isn't that an example of data destroying theories?

DR. GASS: Harvey Greenberg.

DR. GREENBERG: I wanted to respond to David Freedman's comment. One of the key assumptions that I assume he is alluding to, and that has been violated, has to do with correlation of the right-hand side variables. The theory says that if you assume this and do this, then this is what you conclude about confidence, and so on. That means you have a sufficient condition for some of the inferences you want to make statistically. It does not mean that, if those conditions are violated, the model is bad. Now, I have done a variety of statistical modeling and forecasting, in health care and other areas, and I do not know of any instance where all of the right-hand side variables are statistically independent; yet many of these models are, in fact, useful. And I do not think the judgment of usefulness comes from whether you satisfy the sufficient conditions for statistical theory to be valid.
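A small synthetic illustration of Dr. Greenberg's point may help (the data and coefficients here are entirely hypothetical, not from any model discussed at the conference): with two highly correlated right-hand side variables, the individual coefficient estimates are poorly determined, yet the fitted equation still forecasts well.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two highly correlated regressors -- the "violated assumption".
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Coefficient standard errors are badly inflated by the collinearity...
resid = y - X @ beta
sigma2 = resid @ resid / (n - 3)
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print("estimates:", beta.round(2), " std errors:", se.round(2))

# ...but out-of-sample predictions from the same process stay accurate.
x1n = rng.normal(size=n)
x2n = 0.95 * x1n + 0.05 * rng.normal(size=n)
yn = 1.0 + 2.0 * x1n + 3.0 * x2n + rng.normal(scale=0.5, size=n)
pred = np.column_stack([np.ones(n), x1n, x2n]) @ beta
print("out-of-sample RMSE:", float(np.sqrt(np.mean((yn - pred) ** 2))))
```

The violated sufficient condition damages the inference about individual coefficients, not necessarily the usefulness of the model's forecasts, which is the distinction Dr. Greenberg is drawing.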

DR. GASS: Thank you, Harvey. Yes, please?

MR. WOOD: Tom Wood from GAO. I have a question, or a comment actually. Something has been gnawing at me while listening to the discussion of validation and the purposes of putting forth these models. In a sense, we are trying to provide people better information upon which to make better decisions. The implicit assumption is that they did not know as much in the past, so they made bad decisions or, shall we say, not optimum decisions.

I sort of wonder, then, whether in trying to validate, modeling isn't approaching some sort of Heisenberg Uncertainty Principle. If you validate against a past full of "bad decisions," and then extrapolate while providing "better information," the past is not directly extrapolatable. In other words: if you are attempting to model a decision process by using as a basis how people made decisions in the past, and those decisions were made badly, or not as well as they could have been with our great models, then haven't we broken down the ability to use the past?

Again, as Heisenberg said, you cannot specify position and momentum at the same time; at least, if you specify one very well, you lose knowledge of the other. So I question, then, the question of validation. Are we trying to model the decisions themselves or, if we want to backfit in this generic sense, are we in the end modeling flows rather than decisions?
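Mr. Wood's difficulty, that a model validated against decisions made under the old information regime may mispredict once decisions respond to the model itself, is closely related to what economists call the Lucas critique. A toy simulation (all numbers hypothetical) shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

def outcome(decision, shock):
    # True structure: each unit of the decision lowers the outcome by 2.
    return 10.0 - 2.0 * decision + shock

# Historical era: decisions reacted to a noisy forecast of the shock.
shock = rng.normal(size=n)
forecast = shock + rng.normal(scale=0.7, size=n)
d_old = 1.0 + 0.5 * forecast
y_old = outcome(d_old, shock)

# "Validate" a model of outcome vs. decision on the historical record.
A = np.column_stack([np.ones(n), d_old])
b, *_ = np.linalg.lstsq(A, y_old, rcond=None)
print("slope fitted to history:", b[1].round(2))   # far from the true -2.0

# New era: decisions now follow the model's advice, severing the old
# decision/shock correlation -- and the historically validated fit is biased.
d_new = np.full(n, 2.0)
y_new = outcome(d_new, rng.normal(size=n))
print("mean prediction error:", (y_new - (b[0] + b[1] * d_new)).mean().round(2))
```

The slope recovered from history reflects the old decision rule's correlation with the shocks, not the true structural effect; once decisions follow the model, that correlation vanishes and the fit that validated well against the past is systematically wrong.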

DR. GASS: Thank you. Does any physicist want to make a comment on that?

MR. JOEL: Saul, can I handle a -

DR. GASS: How could I say no? If you use the microphone, Lambert, and keep it to a couple of minutes.

MR. JOEL: This is going to be less than 30 seconds. Look, the point is, you do not make a decision just once and then go away and God will destroy the Universe in a thunderclap. The idea of giving policymakers slightly better information is not merely that they haven't been able to make optimal decisions in the past. The more nearly good their decisions are, the less frequently they are going to have to change them; with better information, they will simply have to do this less often. That is all.
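Mr. Joel's claim, that better information means less frequent policy revisions, can also be made concrete with a toy simulation (the deadband rule and every parameter here are hypothetical): a policymaker who resets policy only when a noisy estimate of a drifting optimum strays far enough revises less often as the estimate improves.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
target = np.cumsum(rng.normal(scale=0.1, size=T))  # slowly drifting optimum

def revisions(noise_sd, deadband=0.5):
    """Count how often policy is reset to a noisy estimate of the
    optimum, revising only when the gap exceeds the deadband."""
    policy, changes = 0.0, 0
    for t in range(T):
        estimate = target[t] + rng.normal(scale=noise_sd)
        if abs(estimate - policy) > deadband:
            policy = estimate
            changes += 1
    return changes

for sd in (2.0, 1.0, 0.25):   # better information = smaller noise
    print(f"estimate noise {sd}: policy revised {revisions(sd)} times")
```

With large estimation noise, most revisions are false alarms triggered by the noise itself; as the noise shrinks, revisions track only the genuine drift.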

DR. GASS: Thank you, Lambert. I would like to go for just a couple more questions and then we will stop. I know they cut off the heat, but they will turn off the lights pretty soon. First, does the Panel have a comment on that? Yes, Dave?

DR. NISSEN: I wanted very briefly to respond to the Heisenberg question. The problem is even more extreme than that. With a proper model of social behavior, we know that government behavior is entirely endogenous and, therefore, we can conclude a fortiori that the entire net effect of government policy is nil.

DR. GASS: Harvey?

DR. WAGNER: I, too, purely as a member of the audience, would like to congratulate you and the others on a fine conference, both comprehensive and thought-provoking.

My interest here, if I may share it with you, is to try to get some perspective on what this whole issue is about, because it is fairly new to me, at least in the field of energy. If you will permit it, let me share a couple of thoughts by way of perspective.

The first one is that most of us who have been here at the conference have a certain comparative advantage. That comparative advantage, it seems to me from those of you whom I know, is in model building and policy analysis. Other aspects, the impact of models and policy analysis, the politics of it and so on, as important as they are, really do not play to our own comparative advantage. The net of all that, for me, is that we should give our major emphasis to building better models and better approaches. Other people, who are concerned with those other aspects and have a comparative advantage in them, will inevitably pick up those themes. To the extent that we have to sort out our time priorities and our money priorities, we ought to sort them out in the way we feel gives the best chance to improve models and to improve analysis.

That leads to the second point. I make it because, obviously, the conference was sponsored in part by DOE, and we have EPRI here, as well as several other institutions concerned with this kind of research. It seems to me, given the number of comments that were made, both positive and cautionary, that for the time being, that is, for the next few years, the institutions involved here should do everything they are capable of doing to further lots of model-building efforts, rather than trying to home in on one or two or three and make them perfect. The best thing that could come out of all of this is competition among scientists and model builders for approaches and ideas on how to handle these problems. It is really too early to home in.

DR. GASS: Thank you, Harvey.

SPEAKER: He is right!

MR. EVERETT (DOE): I find myself to be mainly a user of models but also, unfortunately, a caretaker of models that people on the Panel, or certainly other people in this room, left me with. At this point, I know the budgetary process is going to be somewhat less than heartening over the next few years, and the people within EIA who do most of the modeling analysis, myself included, have 50 or 60 of these beasts. Given the meager funds that will likely come to this project, how on earth should we choose which ones we are going to dissect, and which should come first? The READ model certainly seems to be a very big target at this particular point, but what next? I would like an answer from the Panel on this one.

DR. LADY: Do you want an answer from me, Charles? Assuming that it doesn't take too many more years to figure out what to do, which may be a very brave assumption, it seems sensible to expect that a reasonable approximation of many of the good ideas that have come up today can be completed within the cycle of model development, which will differ depending upon what you are talking about: something on the order of three or four years. That is an answer. Is that an answer to your question?

DR. GASS: Charlie, you were concerned about the models in being right now, I gather.

MR. EVERETT: One problem I have is that there are models that exist and have been used for forecasting; we put our names next to the forecasts, we publish them, and some of those models are going to be replaced. Where there was one, there may be three models in a year. Why don't you pick a model that is an embryo at this point and validate it before we use it and forecast with it, rather than something that will be entered into the record as perhaps a bad experiment?

DR. LADY: I think that is the idea, but we have to know what to do. Given that we know what to do, or at least have agreement on some things to do, the idea is to embody it in the model development process.

DR. RICHELS: In the case of our first assessment, it was a model that we could assess. If the documentation is not there or if you do not have the cooperation of the modeler, you might as well forget the assessment, at least at this stage of the game.

Secondly, it is the value of information. What is the model being used for? Is it being used for important policy decisions? That is where we find the greatest need for assessment.

DR. GASS: Any other comments from the Panel? Well, I personally would like - yes, please?


DR. GLASSEY: I am hearing about strategies here. Every model developer that is currently under contract to the EIA to build models must have his models assessed before we pay him.

DR. GASS: That is if we can set the ground rules. Alan, would you like to make a comment? Alan Goldman.

DR. GOLDMAN: Two very quick remarks. One of them is a distinctly self-serving suggestion to the Chairman. Some of us may have reactions to this meeting which we were unable to articulate so quickly, or will not articulate now because of the lateness of the hour. Perhaps you might care to declare the proceedings open to late submissions of such remarks.

DR. GASS: Yes, that is definitely true. The deadline is March 31.

DR. GOLDMAN: Okay. My second comment is again as representative of the host institution: to thank you for the quality of your discussions from the floor, delivered papers, and Sitzfleisch.

DR. GASS: Thank you very much. I would like to thank the Panel, both as a Panel and as speakers. I would like to thank the other speakers, and I really would like to thank this tenacious audience for staying with us. Thank you very, very much.

Coal and Electric Utilities Model......C. Hoff Stauffer, Jr., ICF
.......................................Peter W. House, DOE
.......................................Richard Ball, DOE
3:00...Third Party Model Assessment......Richard Richels, EPRI
3:15...Coffee
3:45...Reflections on the Model Assessment Process: A Modeler's Perspective......David Kresge, MIT; Martin L. Baughman, U. of Texas
4:15...The Texas National Energy Modeling Project and Evaluation of EIA's Energy Midrange Forecasting Model......Milton Holloway, TEAC
4:45...Assessment of the Midterm Electric Utility Submodel......Fred Murphy, DOE
5:00...Model Management Issues......Saul I. Gass, U. of MD/NBS