
The Commission has also cooperated with the Department and the states in matters involving possible violations of Title IV and fraud and abuse.

In the remainder of my testimony, I will focus particularly upon the Commission's use of performance-based measures to assess and promote educational quality. I will describe the ongoing analysis of outcomes data that the Commission has performed in conjunction with the Center on Education and Training for Employment at The Ohio State University. I will explain ACCSCT's approach to outcomes assessment and express its view that there should be greater emphasis on the intelligent application of meaningful performance measures to affect all institutions' participation in the student aid programs. I will also explain why ACCSCT believes that private accrediting agencies have an important role to play in assessing outcomes data and applying the judgment that is necessary to determine whether institutions are making effective use of federal student aid funds. And, I will offer suggestions for re-orienting the role of accrediting agencies as gatekeepers and modifying the statute to better allow accrediting agencies to perform their gatekeeping role.

PERFORMANCE-BASED MEASURES

Long before the enactment of the Higher Education Amendments of 1992, the Commission adopted and applied accrediting standards requiring schools to achieve reasonable and acceptable levels of completion and placement. In order to improve its own assessment capabilities and the performance of accredited schools, the Commission engaged the Center on Education and Training for Employment at Ohio State in 1990 to undertake an independent, ongoing analysis of data from the annual reports submitted to the Commission by accredited schools. This analysis determines the performance of accredited schools with respect to graduation rates, withdrawal rates, placement rates and default rates. The study also determines whether there are school characteristics that have a significant relationship to these performance measures. The definitions of outcomes measures have been independently reviewed, and data from the annual reports are verified through random site visits. Variables have been identified through multiple regression analysis that have consistent, statistically significant relationships with graduation, withdrawal, training-related placement, and default. These findings are being used to assist the Commission in evaluating and attempting to help improve the performance of accredited schools.

The Center has prepared three reports on the performance of the Commission's accredited schools over a five-year period. The latest report was released in April 1996 and provides for the first time longitudinal data for cohorts of students who would have had sufficient time to complete their programs in the acceptable time frame of 150% of program length. The Center's report is attached to my testimony as Exhibit A, and a summary of the report is attached as Exhibit B.

The new cohort data available in this year's report show the following:

[Itemized cohort findings illegible in source.]

In general, the longitudinal nature of the cohort data yields a more careful tracking and counting of students.

The April 1996 report also provides an additional school year of data on outcomes. The report shows that for full-time students:


64% graduated within the acceptable time frame for completion of the program.

78% obtained jobs in the fields for which they were trained.

20% withdrew from their programs.

As expected, for part-time students, the graduation and training-related employment rates were 10-12 percentage points lower than the rates for full-time students, and the withdrawal rates were 1-2 percentage points higher. The average default rate in the Commission's accredited schools was 22.7%, the lowest rate over the last five years.

A key objective of the analysis of annual report data on outcomes was to determine the characteristics which have consistent, statistically significant relationships with these outcomes measures of school performance. The Center performed a multiple regression analysis to determine the net, independent effect of 39 different characteristics. As a result of this analysis, six factors were identified.

[The list of six factors is illegible in the source.]

These characteristics can thus be used as monitoring signals to identify schools that are more likely to have outcomes problems. Since default rates have occupied the attention of policy makers to such a degree in recent years, it should be noted that a school's default rates did have some correlation with its withdrawal rates and with changes of ownership and the number of Ability to Benefit students. Low default rates also correlated with longer programs. However, the analysis has found that default is not a complete or reliable proxy for overall school performance. Accordingly, undue emphasis has been placed on default rates when other characteristics appear to have a more significant effect.
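The kind of multiple regression analysis described above can be sketched as follows. This is an illustrative example only, using synthetic data and hypothetical characteristic names; it is not the Center's actual model, variables, or data. It shows how ordinary least squares can estimate the net, independent effect of each school characteristic on an outcome rate and flag which effects are statistically significant.

```python
# Illustrative multiple regression sketch (synthetic data; hypothetical
# variable names). Estimates the independent effect of each school
# characteristic on graduation rate and flags significant predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical school characteristics (predictors)
program_length = rng.uniform(6, 24, n)       # months
pct_part_time = rng.uniform(0.0, 0.5, n)     # share of part-time students
ownership_changes = rng.integers(0, 3, n)    # count of ownership changes

# Synthetic outcome: graduation rate with built-in effects for illustration
grad_rate = (0.5 + 0.01 * program_length - 0.2 * pct_part_time
             + rng.normal(0, 0.05, n))

# Design matrix with intercept column
X = np.column_stack([np.ones(n), program_length, pct_part_time,
                     ownership_changes])
y = grad_rate

# Ordinary least squares: beta = (X'X)^-1 X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y

# Residual variance, standard errors, and t-statistics
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = (resid @ resid) / dof
se = np.sqrt(np.diag(sigma2 * XtX_inv))
t_stats = beta / se

names = ["intercept", "program_length", "pct_part_time", "ownership_changes"]
for name, b, t in zip(names, beta, t_stats):
    flag = "significant" if abs(t) > 1.96 else "not significant"
    print(f"{name:18s} coef={b:+.4f} t={t:+.2f} ({flag})")
```

In a sketch like this, characteristics whose t-statistics exceed the conventional threshold would serve as the kind of monitoring signals the testimony describes, while insignificant characteristics would be set aside.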

The Commission has been using the results of the outcomes analysis to improve its assessment of outcomes and schools' performance. The Commission's requirements with respect to student achievement are found in Section VII (C) of its Standards of Accreditation. A copy of this standard is attached to my testimony as Exhibit C. Under the Commission's outcomes standard, a school must demonstrate successful student achievement including reasonable completion, placement and, where required, state licensing examination outcomes. Successful student achievement is demonstrated principally by rates of completion, placement in the field for which the education and training have been provided, and passage of state licensing examinations. The Commission analyzes these rates for each program offered and for the school as a whole. The rates are verified through student files, the school's records of employment of its graduates and other means. The Commission's visiting teams have the principal responsibility to conduct this verification.

ACCSCT avoids the use of trip-wires in assessing outcomes. Commissioners have the expertise and background to form judgments as to whether a school's completion, placement and state licensing examination pass rates are low in relation to comparable schools or programs. This judgment is now further informed by the results of the annual report outcomes analysis performed by the Center at Ohio State. In the event that a school's outcomes rates are low, it has the opportunity to demonstrate that the achievement of its students is nonetheless successful by explaining economic conditions, location, student population served, length of program, state requirements and other external factors that may reasonably influence student achievement.

However, if a school with low outcomes rates fails to demonstrate successful student achievement, the Commission will deny accreditation or remove a school from the accredited list. Since 1988, poor outcomes have been a ground for such an adverse action in 60 instances.

In the rulemaking to implement the Higher Education Amendments of 1992 and now in the Advanced Notice of Proposed Rulemaking issued by the Department of Education to develop proposals for regulatory relief for institutions demonstrating a high level of performance, it has been suggested that outcomes trip-wires be used to define acceptable school performance. For example, it has been proposed that the requirement that short-term vocational programs graduate and place 70% of their students should be extended at least to non-degree vocational programs and possibly others.

Based upon its experience, the Commission believes that the use of minimum quantitative trip-wires in the area of student achievement is simplistic and fails to take into account the important factors revealed by the annual report outcomes analysis and reflected in the Commission's accrediting standard on outcomes. While it is important to begin with the best quantifiable data available on outcomes, other factors are relevant and must be considered. Moreover, the use of minimum trip-wires likely will have unintended and pernicious consequences. Trip-wires may well create a safe harbor for schools and would likely have the tendency in practice to become maxima, thus making demands for higher performance more difficult. And, while minimum trip-wires appear to have the benefit of being definite, that appearance is illusory because numerous questions critical to the definition of completion and placement would still require the application of informed judgment. In the Commission's experience, the assessment of student outcomes cannot be reduced to a mathematical formula. The application of knowledgeable and reasoned judgment is inescapable.
