
general results are indicative of the overall status of performance measures in federal agencies.

We did our work from June to December 1991 in accordance with generally accepted government auditing standards. Our results are based on interviews with selected agencies as well as survey information; we did not attempt to verify information provided by each of the 102 agencies. Appendix II contains more details regarding our scope and methodology, and appendix III is a copy of the survey sent to agencies.

Most Agencies Had Strategic Plans and Collected a Wide Variety of Measures

Most of the agencies reported that they had strategic plans and collected a wide variety of program performance measures. About two-thirds of the agencies (67) said they had a single long-term plan that contains goals, standards, or objectives for the entire agency or program. In addition, over three-quarters of the agencies (78) indicated they had long-term plans at the subcomponent level to set goals, standards, or objectives for their programs.

Nearly all agencies said they measured a range of performance, such as program inputs, work activity levels, and program outputs. Over 80 percent said they also collected internal quality and timeliness measures, and more than half measured external customer satisfaction, equity of service availability, or program outcomes. In all, over 82 percent of the agencies said they collected measures covering at least parts of their activities in 7 or more of the 11 broad categories of measures listed in the survey. Figure 1 shows the kinds of measures the 102 responding agencies reported using and the number of agencies that used each.

[Figure 1: Kinds of Performance Measures Collected, by Number of Agencies Reporting Each]

Note: The number of agencies responding was 102.

Note: Definitions of the measures in this figure appear in appendix III.

Most of the performance measurement data agencies collected were reported internally. For example, the Farmers Home Administration named 30 objectives that it measured regularly but did not report to Congress or OMB. As a result of this limited reporting, many policymakers have been unaware of much of the existing data. Moreover, managers within an agency might also have been unaware of existing measures if the data were reported only at the program level. In many cases, program-level information, such as the number of tax returns processed, differs from what is useful at higher levels, such as the extent of noncompliance with tax laws due to confusion over the written instructions.

A Department of Labor study of federal agencies administering education and training programs reported that even in cases where program outcome data were collected, they appeared to serve no more than informational purposes. This finding is supported more broadly by our survey results. Relatively few agencies that responded to our survey said that they reported key information to Congress or OMB. For example, of the agencies that collected information on external customer satisfaction, 44 percent reported this information to Congress and 32 percent to OMB. Likewise, of those that collected program outcome data, 68 percent reported this information to Congress and 54 percent to OMB.

Many Agencies Visited Used Performance Measures to Assess Organizational Accountability

Our interviews and a Department of Labor study of the use of employment and training performance measures in 39 federal programs indicated that measures typically were generated and used by program-level units within an agency and focused on measuring work activity levels and outputs at the subcomponent level. Our interviews also revealed that in some cases, such as in grant-making agencies, performance measures were used for statutory compliance.

The following examples, taken from our visits to the Federal Transit Administration (FTA), formerly the Urban Mass Transportation Administration, and the Federal Aviation Administration (FAA), show how these agencies have used performance measures to achieve accountability.

Federal Transit Administration

FTA, in the Department of Transportation, provides grants to states and localities to help develop, maintain, and operate their mass transit systems. An official said that to track its grant-making activities, FTA created a series of indexes that served as standards for assessing the status of grant-making among its regional offices. The indexes were based on measures of specific work activities, such as the number of grants developed, grants managed, transportation improvement program reviews, and triennial reviews. While these measures were related to the efficiency and compliance efforts of the agency's grant-making activities, they were not used to assess progress toward the agency's strategic plan or that of the Department.

Federal Aviation Administration

As a regulatory agency within the Department of Transportation, FAA said it used a system of performance measures to assess overall organizational accountability toward its mission of fostering a safe, secure, and efficient aviation system. Existing data were used for general management purposes rather than for control of individuals or units.


FAA reported that it focused on programs and activities that promote safety, using performance indicators based on ratios and comparisons. FAA compared these historical trend measures against annually set targets to gauge agency progress. Typical indicators included year-to-year comparisons of security inspections, air traffic delays, and pilot deviations.

FAA said it delivered monthly reports electronically through an executive information system and prepared quarterly paper reports that contained concise information and were widely circulated among senior management. Managers were to use this information to get an overall sense of how FAA was doing.

Many Agencies Used Measures to Manage Operations

In order to see how well resources were being managed to accomplish tasks, many of the agencies we visited used performance measures to help make budget decisions, assess employee performance, and provide incentives. On our visits, we found more ties to individual employee assessment than to any other management use. This was supported by survey results, which show that over one-third of the agencies required the use of performance measures in senior management performance contracts. Three examples of the use of performance measures to manage operations come from the Department of Defense (DOD), the National Archives and Records Administration, and programs under the Job Training Partnership Act (JTPA).

Department of Defense

At the time of our visit, the Office of the Comptroller of DOD had begun to determine the unit costs of selected support activities, such as recruiting and supply management, in order to identify efficiencies and to make budget decisions. With operation and maintenance outlays of about $85.7 billion in fiscal year 1992, "unit cost resourcing" was intended to help DOD reduce the costs of doing business by helping managers identify the costs of delivering service outputs and by helping them understand the long-term and indirect costs of producing specific outputs. With this knowledge, unit costing was intended to serve as a decision support system that could put DOD closer to budgeting a specific set of activities on the basis of what it actually costs to do the job.

According to a senior DOD official, unit cost resourcing was used as a tool to improve management with a focus on output: it requires employees to know what they produce, identifies customer-provider relationships, causes workers to examine the process for needed changes, and creates better cooperation between management and employees. According to DOD's Comptroller's Office, these efforts called for a change in the management culture to develop different expectations of management.

National Archives and Records Administration

The work of the Archives consists of responding to requests for historical records and preserving those records. At the technician level, this labor-intensive work involves repetitive tasks and takes place in a nontraditional workplace setting where supervision is difficult because staff have to search for records located in many places throughout the building and may be gone from their workplaces for hours. For these reasons, the Archives, with the support of its agency head, said it chose to measure individual performance by using industrial engineering methods. This required setting optimal standards for how long various tasks should take to complete.

According to officials, operational technicians in two offices, the Office of the National Archives and the Office of Federal Records Centers, started using engineered standards in 1985. The system covered about 11 percent of the Office of the National Archives' full-time equivalent employees and about 12 percent of the Office of Federal Records Centers' full-time equivalent employees.

Officials said technicians were expected to meet the established standards, which were given to them by management. For example, a technician might have been expected to complete 42 genealogical searches of a particular type in an 8-hour period. At the time of our visits, the Archives was in the process of writing detailed manuals describing the procedures for accomplishing tasks and how long each task should take.

The Archives' standards for routine tasks were the basis for quarterly incentive bonuses given to employees in the GS-4 to GS-6 range. When technicians reached or surpassed these standards, they were rewarded monetarily. Quarterly bonuses ranged from $250 to $400. Performance-based action had been taken against technicians who repeatedly failed to reach the standards.

The Archives had contracted for a series of studies to develop its engineered standards. For example, the first study in 1985 developed an engineered standard for retrieving requests for Revolutionary War Military Service records. This new standard was 63 percent higher than the traditional standard for record retrievals. In 1991, the Archives examined actual productivity figures for many of the work units covered by engineered standards and found, in most cases, significant increases in