
FIGURE 44. The direct support organization at the NASA-Ames Research Center is shown on this chart.

in the use of more or fewer persons, or personnel at higher or lower levels.

Despite this lack of standards or norms, we might approach the problem of identifying support ratios by comparing them across a number of ostensibly similar technology development centers. In table 8, the functional-level general support ratios at Ames are compared with those obtained in a survey of industrial laboratories undertaken in the early 1970s and with those for nine Navy laboratories, based on data furnished to one of the authors (Mark) by the Director of Naval Laboratories.

[Table 8. Functional-level general support ratios: Ames Research Center, an industrial laboratory survey, and nine Navy laboratories]

1. Data for nine Navy labs provided by Director of Naval Laboratories, Dept. of the Navy, Washington, D.C.

2. The industrial survey is reported in Jones, Richard A., "Research Administration: Its Relative Size in the Organization," SRA Journal, Spring 1974.

At first glance, one would note the relatively good agreement between the ratios for Ames and those of the surveys. The problems begin when we try to interpret the data. For example:

• Though great care was exercised, there was considerable difficulty in matching the definitions of functional categories at Ames with those of the two surveys.

• Even where functional definitions could be relatively easily reconciled, there was no assurance that the activities of the functional organizations were really comparable. An example was administrative computing, which was subsumed under Ames' financial management function, but was not mentioned in the information available for the two surveys.

• The definition of "similar," in comparing the Ames Center with the firms covered in the survey, was unclear. For example, the industrial organizations all had fewer than 1 000 employees, while Ames had about 2 500, including support contractors.

• Some organizations resided within another facility and may have shared certain general support functions. An example was the autonomous U.S. Army Aeronautical R&D Laboratory, a tenant at Ames Research Center, which used a number of the Ames general support services, such as the library, technical computing, and security.

• An organization with dispersed facilities may have some centralized functions. Again, the Army Aeronautical R&D Laboratory is an example, with personnel and payroll offices in San Francisco.

• Detailed comparative data were difficult to acquire, as shown by the blanks in the Navy laboratory data (table 8) and the paucity of published papers on the subject.

There are two gross ratios for which some comparative data are available. These are the general support ratio as defined earlier, and the administrative and nonprofessional support ratio. The latter can be defined as the ratio of total complement less all scientists and engineers to total complement. The available data are shown in table 9. The problems listed earlier still apply. The rather large difference between the NASA and military administrative and nonprofessional support ratios cannot be reconciled on the basis of available data. It may be that the NASA centers employ more professionals in supporting roles than do the military installations, rather than that the work of military laboratories requires larger nonprofessional supporting staffs.
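As a minimal illustration of the arithmetic behind these ratios, the short sketch below computes the administrative and nonprofessional support ratio just defined. The staffing figures and the function name are hypothetical, not values taken from table 9.

# Illustrative calculation of the administrative and nonprofessional support
# ratio: (total complement less all scientists and engineers) / total complement.
# The figures used here are hypothetical.

def admin_nonprof_support_ratio(total_complement: int, scientists_engineers: int) -> float:
    """Return the administrative and nonprofessional support ratio."""
    return (total_complement - scientists_engineers) / total_complement

if __name__ == "__main__":
    # A hypothetical center of 2 500 people, 1 200 of them scientists or engineers.
    ratio = admin_nonprof_support_ratio(2500, 1200)
    print(f"Administrative and nonprofessional support ratio: {ratio:.2f}")  # 0.52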

[Table 9. General support ratios and administrative and nonprofessional support ratios: NASA centers and Air Force, Army, and Navy laboratories]

The Air Force, Army, and Navy ratios are derived from a report of hearings before a subcommittee of the House of Representatives Committee on Appropriations, 93rd Congress, Investigative Report on "Utilization of Federal Laboratories," USGPO, Washington, DC, 1974.

Measuring Productivity: The Job Analysis Method

Our first attempt to measure the productivity of support personnel seems to have left us where we began. The use of support ratios appears to be of limited value in managing general support; the problems of comparison across organizations are formidable; detailed data are generally unavailable; and developing criteria for making decisions on the basis of these ratios has not been possible. Support ratios may have some value in reviewing variations in general support over time within (rather than across) organizations. We had better set support ratios aside, then, and consider how best to measure the productivity of selected general support services. In principle, it should be easier to measure the adequacy of support services, which are normally discrete and repetitive, than to measure the quality of research ideas or personnel. In the former case we have a simple criterion by which to proceed: output. More precisely, it should be possible to take any support activity and, by means of a detailed analysis, determine what constitutes acceptable performance. Such an analysis requires three steps: (1) enumerate the steps required to perform the work; (2) list the things needed to perform the work, the input; and (3) list the things produced by the work, the output (ref. 160).
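The three steps lend themselves to a simple record. The sketch below shows one way such an analysis might be captured; the activity, its steps, inputs, and outputs are invented for illustration and are not taken from reference 160.

# A minimal, hypothetical record of a job analysis as described above:
# the steps required to perform the work, the things needed to perform it
# (input), and the things it produces (output). The example activity and
# all field contents are illustrative only.
from dataclasses import dataclass, field

@dataclass
class JobAnalysis:
    activity: str
    steps: list[str] = field(default_factory=list)    # (1) steps required to perform the work
    inputs: list[str] = field(default_factory=list)   # (2) things needed to perform the work
    outputs: list[str] = field(default_factory=list)  # (3) things produced by the work

mail_run = JobAnalysis(
    activity="Internal mail delivery",
    steps=["Collect mail from the central mailroom", "Sort by building", "Deliver to drop points"],
    inputs=["Incoming mail", "Route schedule", "Vehicle"],
    outputs=["Mail delivered to drop points", "Delivery log"],
)
print(mail_run)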

But as soon as we begin to consider any particular service, it becomes apparent that evaluating the outputs of even the simplest service is no easy matter. Consider, for example, a taxi or shuttle bus for taking employees from one part of a large research installation to another. How would we measure the quality of the service? We could develop any number of criteria: for example, that a passenger should be picked up within four minutes of the agreed time, that a vehicle should not be out of commission more than X hours in a given period, or that a certain number of drivers should be available on all working days. Even as simple a case as this demonstrates three things: to be evaluated, a service must be broken into discrete components; each activity must be assigned a quantifiable standard, along with the acceptable deviation from that standard, in other words an acceptable quality level; and tradeoffs between standards must be made.
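A minimal sketch of how such criteria might be written down follows. Each component of the shuttle service is paired with a standard and an acceptable quality level; the particular standards and tolerances shown are hypothetical, not figures drawn from any actual plan.

# Hypothetical standards for the shuttle service discussed above. Each discrete
# component of the service carries a quantifiable standard and an acceptable
# quality level (the tolerated deviation from the standard).
from dataclasses import dataclass

@dataclass
class ServiceStandard:
    component: str
    standard: str
    acceptable_quality_level: str

shuttle_standards = [
    ServiceStandard("Pickup timeliness",
                    "Passenger picked up within 4 minutes of the agreed time",
                    "Met on at least 95 percent of pickups per month"),
    ServiceStandard("Vehicle availability",
                    "Vehicle out of commission no more than X hours in a given period",
                    "No more than one violation per quarter"),
    ServiceStandard("Driver coverage",
                    "Agreed number of drivers available on all working days",
                    "Met on 100 percent of working days"),
]

for s in shuttle_standards:
    print(f"{s.component}: {s.standard} (acceptable quality level: {s.acceptable_quality_level})")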


To avoid misunderstanding, some cautionary words are in order. While, for example, personnel and procurement certainly qualify as general support, they are so intimately related to the public interest that they cannot easily be separated from an agency's mission in the way that operating a mess hall or a supply depot can be. These latter are commercial activities in the sense defined by the Office of Management and Budget as "work that is separable from other functions or activities and is suitable for performance by contract." It is these activities which lend themselves, whether as direct or general support, to job analysis; and it is in this context that we are applying the method.

It is also necessary to warn against a simple-minded application of job analysis or any other method. Imagine this method applied to analyzing what a symphony orchestra does. An analyst might observe that for long periods the second clarinetist had nothing to do, that the tympanist only played repeated notes, and that the winds only repeated what the strings had introduced. Such an analysis would be correct as far as it went, but would simply miss the point of the performance. No method can substitute for judgment or a knowledge of what is being evaluated. Or as one writer put it, some works are like mirrors; if a donkey looks in, no apostle will gaze out.

A brief case study will serve to make these points clearer. One of the authors (Levine) was commissioned by the National Oceanic and Atmospheric Administration (NOAA), an agency within the Department of Commerce, to draft a plan to evaluate prospective contractors who would manage NOAA's supply operations. NOAA is itself an agency made up of other agencies, of which the largest and best known is the National Weather Service. All of these agencies are supported by NOAA warehouses which stock instruments, electronic equipment, common-use technical and administrative forms, and NOAA publications, handbooks, and operating manuals. The largest of these warehouses, the NOAA Logistics Supply Center, is located in Kansas City, Missouri. For the moment, we can disregard the agency's intention to contract out the management of its warehouses; the performance criteria would be identical if the system were managed, as in fact it is, by government employees. The question remains: How can NOAA evaluate what is, in effect, a range of support services?

From what has been said, a general approach to evaluating NOAA's supply depot can be easily described. For each service, develop a standard; assign an acceptable quality level for the performance of the service; and design a surveillance method to determine if acceptable quality levels have been met (ref. 161). In practice, the task of drafting a quality assurance plan is a little more complicated. The Logistics Supply Center stocks some 8 600 line items, in addition to sophisticated one-of-a-kind equipment furnished to the National Weather Service; some items are inactive, while there are shortages of others; and in other cases, information on items in stock may not be readily accessible to users. Moreover, a quality assurance plan, to be effective, must be capable of being entered into a data-processing system; otherwise, the supply system will temporarily collapse whenever the one or two persons who carry it in their heads leave. The plan, as approved, allowed for the complexity of the system. Supply operations were broken down into some 65 to 70 discrete activities; to each was assigned a performance standard and an acceptable quality level; finally, one of three surveillance methods: random sampling, 100-percent inspection, and customer
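A plan of this shape maps naturally onto a simple machine-readable record, which is one way of meeting the requirement that it be entered into a data-processing system. The sketch below is hypothetical throughout: the activity names, standards, and quality levels are invented, and the label given to the customer-oriented surveillance method is an assumption rather than NOAA's own terminology.

# A hypothetical record structure for a quality assurance plan of the kind
# described: each supply activity carries a performance standard, an acceptable
# quality level, and one of three surveillance methods. All entries are
# illustrative, not items from the NOAA plan.
from dataclasses import dataclass
from enum import Enum

class Surveillance(Enum):
    RANDOM_SAMPLING = "random sampling"
    FULL_INSPECTION = "100-percent inspection"
    CUSTOMER_FEEDBACK = "customer feedback"  # assumed label for the third method

@dataclass
class SupplyActivity:
    name: str
    performance_standard: str
    acceptable_quality_level: str
    surveillance: Surveillance

plan = [
    SupplyActivity("Fill routine stock requisitions",
                   "Ship within five working days of receipt",
                   "95 percent of requisitions per month",
                   Surveillance.RANDOM_SAMPLING),
    SupplyActivity("Issue one-of-a-kind equipment",
                   "Inspect and document condition before shipment",
                   "100 percent compliance",
                   Surveillance.FULL_INSPECTION),
]

for activity in plan:
    print(f"{activity.name}: {activity.performance_standard} "
          f"(AQL: {activity.acceptable_quality_level}; surveillance: {activity.surveillance.value})")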
