
Hospital Mortality Rates

INTRODUCTION

Differences in patient death rates seem on their face a valid way to distinguish good quality health care providers from poor quality providers; death is an outcome that is almost always bad,1 and medical practice is devoted, at least in part, to postponing death. Differential mortality, or survival, has long been used as a measure of efficacy in health care technology assessments and as an indicator, albeit crude, of the health status of particular populations. Medical encounters can themselves be dangerous (318,555,595), adding to the possibility that a hospital stay will end in death.

1It has been argued that in some cases death would be preferable to life; definitions of life and death are not as simple as they once seemed (632).

Almost half the deaths in the United States every year occur in hospitals, although only about 3 percent of hospital admissions end in death (667). Although many deaths in hospitals occur because nothing more could be done for the patients involved, a substantial portion of the deaths are believed to be avoidable. Hospital-related mortality can result from various factors that are subject to control, including poor infection control, inadequate or inappropriate use of medication, falls as a result of poor supervision, mistakes during surgery, and inappropriate discharge.

Although the use of patient death rates to compare the quality of care delivered by specific health care providers has been expanding, it has also been controversial. The major problems with the use of hospital mortality rates as a quality indicator are that mortality can result from many factors other than poor quality care and that techniques to adjust for such factors are generally inadequate. In addition, there are theoretical and practical issues regarding the appropriate period of time for an analysis. Over what period of time is a death to be defined as related to hospital care? Another issue regarding time is the period covered in the analysis. Most releases of information on hospital mortality rates have included data for a single year, but critics argue that data over a longer period of time may be needed, given the uncertainties about the indicator. Yet another significant issue is the level of aggregation of hospital mortality rates. Should rates be aggregated across the hospital as a whole? If not, at what level of diagnostic coding should the data be totaled? Finally, it is important to validate hospital mortality rates against criteria related to the process of care; this validation is only beginning.
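To make the aggregation question concrete, the short sketch below contrasts a hospital-wide death rate with rates totaled by diagnosis group. It is only an illustration: the discharge records, diagnosis groups, and counts are invented and are not drawn from any of the analyses discussed in this chapter.

```python
# Hypothetical illustration of two aggregation levels for hospital
# death rates; the discharge records below are invented.
from collections import defaultdict

# Each record: (hospital, diagnosis group, patient died during stay)
discharges = [
    ("A", "pneumonia", True), ("A", "pneumonia", False),
    ("A", "hip replacement", False), ("A", "hip replacement", False),
    ("B", "pneumonia", False), ("B", "pneumonia", False),
    ("B", "hip replacement", True), ("B", "hip replacement", False),
]

def death_rate(records):
    """Deaths divided by discharges for a group of records."""
    return sum(1 for *_, died in records if died) / len(records)

# Hospital-wide aggregation: one rate per hospital.
by_hospital = defaultdict(list)
for record in discharges:
    by_hospital[record[0]].append(record)
for hospital, records in sorted(by_hospital.items()):
    print(f"Hospital {hospital}: overall death rate {death_rate(records):.2f}")

# Diagnosis-level aggregation: one rate per hospital and diagnosis group.
by_diagnosis = defaultdict(list)
for record in discharges:
    by_diagnosis[(record[0], record[1])].append(record)
for (hospital, diagnosis), records in sorted(by_diagnosis.items()):
    print(f"Hospital {hospital}, {diagnosis}: death rate {death_rate(records):.2f}")
```

In this invented example both hospitals lose 1 patient in 4 overall, yet the pattern differs sharply once deaths are totaled by diagnosis group; the choice of aggregation level can therefore change which hospitals appear to be outliers.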

Perhaps the most visible and controversial releases of hospital mortality data have been the 1984 and 1986 analyses of the Health Care Financing Administration (HCFA), which is part of the U.S. Department of Health and Human Services (640,647). The HCFA releases illustrate well the critical issues surrounding the use of hospital mortality rates as an indicator of the quality of care. Both analyses were conducted with data derived from hospital claims filed for the purpose of Medicare reimbursement, although the 1986 analysis added information about deaths derived from Social Security Administration files (see table 4-1 for a summary of differences between the 1984 and 1986 HCFA analyses). The 1986 analysis differed in level of analysis, in the way conditions and procedures were aggregated, in the period of time after hospital admission during which deaths were counted, in calculation methods, and in the type of information released.

A number of other analyses of hospital mortality data have been conducted along the same basic lines as the HCFA analyses, that is, using data from hospital discharge abstracts to adjust for patients' risk of dying (80,81,189,448,462,526); other analyses have adjusted for patients' risk of dying using clinical data (190,352,353,588,589,590) as well as proxies such as age. Few have attempted to validate statistical results against a process criterion (190,279,353,462).
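To illustrate what adjusting for patients' risk of dying involves, the sketch below compares each hospital's observed deaths with the deaths expected from its patients' predicted probabilities of dying. The patient records and predicted probabilities are hypothetical, and the calculation is only a generic observed-to-expected comparison, not the method of any particular study cited above.

```python
# Hypothetical sketch: risk-adjusted hospital mortality as an
# observed-to-expected comparison. Predicted probabilities of death
# would in practice come from a model built on discharge-abstract or
# clinical data (age, diagnosis, severity); here they are invented.

patients = [
    # (hospital, predicted probability of death, died)
    ("A", 0.02, False), ("A", 0.10, False), ("A", 0.40, True),
    ("B", 0.01, False), ("B", 0.03, True), ("B", 0.05, False),
]

def observed_and_expected(records):
    """Count observed deaths and sum predicted risks (expected deaths)."""
    observed = sum(1 for _, _, died in records if died)
    expected = sum(prob for _, prob, _ in records)
    return observed, expected

for hospital in sorted({h for h, _, _ in patients}):
    records = [r for r in patients if r[0] == hospital]
    observed, expected = observed_and_expected(records)
    crude_rate = observed / len(records)
    ratio = observed / expected  # >1 suggests more deaths than predicted
    print(f"Hospital {hospital}: crude rate {crude_rate:.2f}, "
          f"observed {observed}, expected {expected:.2f}, "
          f"observed/expected {ratio:.2f}")
```

In this invented example the two hospitals have identical crude death rates, but one hospital's death occurs among patients predicted to be at very low risk, so its observed-to-expected ratio is much higher. Whether such differences reflect the quality of care or merely the limits of the risk-adjustment method is the central controversy described above.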

Table 4-1.—Comparison of HCFA's 1984 and 1986 Hospital Mortality Analyses

[Table body not recovered from this copy. The table compares HCFA's 1984a and 1986b analyses along the following dimensions: data base,c hospital population, patient population, period of time during which deaths were counted, hospital "risk group,"d measures used to adjust for patients' risk of dying, level of analysis, levels of aggregation, calculation method, and information released.]

aU.S. Department of Health and Human Services, Health Care Financing Administration, "Medicare Hospital Mortality Information 1984," Washington, DC, Mar. 10, 1986.
bU.S. Department of Health and Human Services, Health Care Financing Administration, Medicare Hospital Mortality Information 1986 (Washington, DC: U.S. Government Printing Office, 1987).
cAssembled in HCFA's MEDPAR file.
dDenominator.

SOURCE: Office of Technology Assessment, 1988.

OTA reviewed in depth studies whose purpose was to develop a valid technique to adjust hospital mortality statistics for patients' risk of dying. Not included were studies whose primary purpose was to test the validity of structural measures of quality against hospital mortality as a standard. In addition, the OTA review included releases of crude mortality rates (55,115,116,478) to compare their results with the rates adjusted in various ways. All studies were reviewed using the procedure and checklist described in appendix C.2 Table 4-2 lists the studies reviewed by OTA, and indicates when they were conducted, the sources of data used, the patient and hospital types that were included, and the years in which data were collected. Table 4-3 shows the diagnoses and procedures included in the analysis, when death was measured, the adjustments for patients' risk of dying, the level of analysis, and the results of each study.

2The way studies were selected for review and descriptions of the individual studies can be found in OTA's technical working paper, "Hospital Mortality Rates as a Quality Indicator" (187).

The remainder of this chapter consists of an evaluation of the reliability, validity, and feasibility of using hospital mortality rates as an indicator. Conclusions and policy implications are outlined in the final section of the chapter.

RELIABILITY OF THE INDICATOR

Whether hospital mortality rates are a valid indicator of the quality of care depends on the reliability of the data on which analyses of mortality rates are performed and the reliability of the data against which results of analyses are validated.

Some aspects of the data base for hospital mortality analyses have been of longstanding concern (166,167). There is reason to believe that hospital data sources vary widely in completeness and accuracy; rarely have hospital mortality analyses

Table 4-2.—Characteristics of Hospital Mortality Studies Reviewed by OTA

[Table body not recovered from this copy. For each study reviewed, the table gives the data sources, the hospital and patient populations included, and the years in which data were collected.]

aStudies are listed in chronological order. Numbers in parentheses refer to numbered entries in the reference list at the end of this report.
bThe hospitals were chosen to represent a range of medical staff organization types (loosely to highly structured).
cData were collected either on consecutive patients or on every second or third patient until a specified number of patients was reached.
dNA = Not applicable.
eHospitals were nonteaching, nongovernmental, and proprietary.

SOURCE: Office of Technology Assessment, 1988.

Table 4-3.—Results of Hospital Mortality Studies Reviewed by OTA

[Most of the table body was not recovered from this copy. The one legible entry: Knaus, et al., 1986 (353): patients in intensive care units; inhospital death; risk adjustment by APACHE II (acute physiologic scores, age points, chronic health points, emergency surgery status); level of analysis: patient; result: R² = .99.]