
Two examples of services in support of NASA will illustrate both the nature of the more sophisticated support services and the difficulties faced by government managers in monitoring them. In 1978, the Goddard Space Flight Center awarded a contract to the Space Systems Division of General Electric for operating a ground station to support the launch of the Landsat-D satellite, for which GE was also the prime contractor. By early 1982, the Landsat Ground Segment had become a project in itself, with more than 300 engineers, computer scientists, and technicians housed in a building at Goddard. The Ground Segment was a very large, complex, totally integrated network of automated data processing equipment; the software alone amounted to more than 680,000 lines of programming code (ref. 170). Even GE seems to have been a little awed by the magnitude of the task. As the Space Systems Division's house organ boasted, the final configuration of the Ground Segment consisted of "15 computers, 44 disk drives, 36 tape drives and 30 terminal interfaces, as well as 22 racks of special purpose hardware . . ." (ref. 171). The Landsat Ground Segment was a support activity in the strictest sense of the term. Yet the lack of in-house staff available to design a system such as the Ground Segment also made the job of evaluating the contractor's effort (that is, determining whether the government was getting good value for the money) difficult, if not impossible.

As our second example, consider the use of computer systems in wind tunnels. In engineering jargon, the development of computer programs is the "pacing item" in the field of computational aerodynamics. The basic equations needed to simulate airflow over a complete aircraft, free of any approximations, are extremely complex, with up to 60 partial derivative terms (ref. 172). Only supercomputers with speeds about 25 times greater than those of current computers can handle such equations. The real bottleneck, though, is in programming. In its final report on Aeronautical Research and Technology Policy (1982), the President's Office of Science and Technology Policy identified precisely why NASA centers like Ames and Langley were having trouble acquiring the software they needed: "Software development by in-house staff is extremely difficult because the government cannot attract and retain . . . personnel in this area. The result is a large contract effort to develop and maintain software. This, unfortunately, results in no in-house expertise. Industry figures indicate that 20-30% of programming costs are required to just maintain software. The standardized wind tunnel data system software being developed at Ames will take at least 35 man-years of programming effort and should have begun before the data system was acquired. Because of funding limitations, this was not possible, and hence it will be three years after hardware delivery before the new system can be used to its full capability" (ref. 173).
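The equations in question are the Navier-Stokes equations of fluid dynamics. As a rough sketch for the reader (the notation below is ours, not the report's), their full three-dimensional compressible form can be written compactly in conservation form as

\[
\frac{\partial \mathbf{U}}{\partial t}
+ \frac{\partial \mathbf{F}}{\partial x}
+ \frac{\partial \mathbf{G}}{\partial y}
+ \frac{\partial \mathbf{H}}{\partial z} = 0,
\qquad
\mathbf{U} = (\rho,\; \rho u,\; \rho v,\; \rho w,\; E)^{T},
\]

where \(\rho\) is the density, \(u\), \(v\), and \(w\) the velocity components, and \(E\) the total energy per unit volume; the flux vectors \(\mathbf{F}\), \(\mathbf{G}\), and \(\mathbf{H}\) carry the convective, viscous-stress, and heat-conduction contributions. Written out term by term, those fluxes yield the dozens of partial derivative terms noted above, which is why simulating a complete aircraft without approximation strains even a supercomputer.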

To sum up the debits and credits of contracting for support services: contracting out is often less expensive, especially in base operations; it gives the agency greater leeway in rapidly building up the work force at the start of new programs and phasing it out when the work is completed; it frees government employees from routine chores; and it is often the only way to tap expertise unavailable in the government. On the debit side, contracting out may create a vicious circle: Industry attracts the most capable technical people, the agency must perforce contract for a particular service, and the government ends up indirectly paying its contractors what it could not pay them directly.

But contracting out has still other disadvantages. There is no conclusive evidence that contracting out, except for routine base operations, is less costly than having the work done in house. Contracting is time-consuming and does nothing to relieve immediate manpower shortages. The complexity of the process gives contracting officers reason not to contract out. As the former head of OMB's Office of Federal Procurement Policy observed: "The [A-76] handbook is so complex and detailed, you need a training program to teach people how to use it" (ref. 174). (He spoke better than he knew; A-76 seminars and training sessions are a thriving cottage industry in Washington.) Finally, government managers are concerned that contracting out can lead to work stoppages because of strikes, and that a decision to contract out is usually irrevocable, even if it is less desirable than keeping the work in house (ref. 175).

In short, no conclusive case in the abstract can be made either for or against contracting out. Yet the percentage of support services contracted out is likely to increase, especially for the more sophisticated services on which technology development agencies depend. One thing is quite clear: Cost is not, and for many years has not been, an overriding factor in make-or-buy decisions. To repeat, the principal reason why NASA and the Defense Department have contracted for the most sophisticated (and expensive) services, especially those involving data processing, is that there was no other way to get the job done. NASA needed no Circular A-76 to encourage it to rely on the private sector. Long before Circular A-76 was promulgated in 1966, NASA had been routinely contracting out 85 to 90 percent of its Research and Development appropriations, and it continues to do so. And this dependence on the private sector can only grow, given the nature of the work carried out at the NASA centers and the technology needed to support it: the short life cycles (5 to 10 years) of computer systems, the costs of simply maintaining software, the move to computer-aided design, manufacture, and simulation, and the development of supercomputers able to perform up to one billion calculations per second. There may be some changes in the management of these contracts. For example, many support functions may be physically consolidated at a single "operations center," just as support contracts themselves may be consolidated into long-term master contracts. But the combination of a slowly declining government work force and work that is both demanding and very expensive means that contracting for support services will remain unavoidable.

There is, then, something paradoxical about support services in a research environment. The output of any service can be measured and, therefore, evaluated. But the legal status of these services is ambiguous. We have found government employees providing commercial services, contract employees performing governmental functions, and a certain disregard for the rule that functions shall not be converted to contract in order to circumvent personnel ceilings. But too much attention paid to legal issues may cause us to miss an important point. For the center or laboratory director, these functions are among the easiest to control, since input and output can be defined in advance. What is still lacking is some model to integrate support services with the management of the laboratory's internal resources: people, money, and equipment. In other words, something is needed to tie a particular function (say, data processing or micrographics) to the laboratory's mission. Somebody has to be able to set objectives, develop long-range plans, assemble the resources, and do everything needed to make the divisions within the laboratory work as one. It is to this function of internal resources management that we now turn.

CHAPTER IX

What the Research Executive Does

Toward a Theory of Research Management: Search for a Method

By the nature of their work, senior research administrators are not likely to spend their time reading textbooks on public administration. There is never enough time to do everything that needs to be done, there are never enough people available to do the work, and there is never enough money for all the programs needing funding. All too often the motto of the senior administrator (someone with the power to hire and fire, to select project work assignments) is: Sufficient unto the day is the evil thereof. The circumstances under which he works leave him neither time nor inclination for a theoretical analysis of the organization in which he has chosen to spend his career. As is not uncommon in large organizations, the people most familiar with their operations are often the ones least able to describe what is going on.

This lack of introspection on the part of research officials has had some undesirable consequences. We have no really adequate theory of the organization of Research and Development institutions; indeed, there are no generally agreed definitions of basic or applied research or, for that matter, any consensus on how basic research (however defined) feeds into industrial productivity. Again, it is probably unwise for senior officials to ignore the broader implications of what they do. All of us carry a picture of the world around in our heads; but only to the extent that we are sufficiently aware of our assumptions to criticize them can we match them against the world "out there."

Thus most of our knowledge of organizations comes from persons on the outside. There are, broadly speaking, two kinds of theories concerning the behavior of organizations, research installations included. The first approach attempts to develop certain extremely general axioms applicable to every kind of organization. A well-known example of this approach is March and Simon's Organizations, the core of which is a series of propositions about organizations (ref. 176). We learn, for example, that "both the amount and the locus of uncertainty absorption affect the influence structure of the organization." Again, "the greater the standardization of the situation, the greater the tolerance for subunit interdependencies." Yet again, "the greater the amount of past experience with a decision situation, the less probable that intraindividual organizational conflict will arise." Readers may or may not find these assertions intuitively obvious. The problem begins when we try to conceive how we could test, let alone validate, assertions of such generality. It might even be the case that a theory which accounted (say) for the operations of a retail chain would, for that reason, be inadequate to explain the workings of a laboratory. The most we can say is that, while these theories might be confirmed by experience, they are logically anterior to experience.

This is not to deny that much of what March and Simon assert is of considerable importance. In particular, we accept with some reservations the central thesis of Organizations that "most human decision-making, whether individual or organizational, is concerned with the discovery and selection of satisfactory alternatives; only in exceptional cases is it concerned with the discovery and selection of optimal alternatives" (ref. 177). Given the constraints under which research executives normally labor, they will indeed have reason to prefer the alternative which is satisfactory to one which is optimal. To use the word coined by March and Simon, they will "satisfice"; in research, as elsewhere, the best is often the enemy of the good. But to understand how a research organization really behaves, we need something more descriptive and less abstract.

A second approach is inadequate in a different way. This approach claims to be empirical: The theorist puts a hypothesis before us and then tells us to look and see how admirably it squares with our experience. A famous example of this approach is Luther Gulick's acronym POSDCORB, defining the work of a chief executive: Planning, Organizing, Staffing, Directing, Coordinating, Reporting, and Budgeting (ref. 178). Many a member of the Senior Executive Service, reading Gulick's analysis, might well feel drunk with power! The problem with this formulation is not that it is wrong (chief executives do all the things Gulick claims they do) but that it is incomplete. Executives, such as the research management people with whom we are concerned, are limited in many ways: by lack of funds, by agency mandates which they cannot significantly change, by government-wide policies over which they have no control. A purely formal description of what executives are supposed to do will be doubly misleading. It will say nothing about the constraints just mentioned, and it will ignore the informal strategies executives normally employ to achieve their ends.

There is, however, a third approach, which we tentatively advance. For purposes of analysis, it is possible to classify organizations in various ways. One such attempt, by the British sociologists Burns and Stalker in the early 1960s, is particularly suggestive. Based on a study of the British
