First, Do No Harm

April 2007

Alexander C. McCormick

In a timely essay, the author reminds us that launching an accountability initiative without careful thought about how it will affect behavior can do more harm than good.

 

Accountability is in the air, and in the news, these days. In response to various "common-sense" proposals to fix problems in education, a friend of mine used to say, "If you think there's a simple solution, you don't understand the problem." Recent accountability proposals show how true this is. In any accountability regime, it's not sufficient simply to select a set of performance measures. It's equally important to consider how the system will affect behavior. A well-designed accountability system motivates substantive change, not mere gaming of the system. And the last thing you want is a system that undermines useful diagnostic tools in the name of accountability.

In January, the National Center for Education Statistics proposed some additions to the mass of data that it gathers annually from colleges and universities. Normally an arcane subject, to be sure. But buried in the proposal was a provision—clearly motivated by the Secretary of Education's Commission on the Future of Higher Education—that could seriously hamper current efforts to improve college quality. Russ Whitehurst, director of the Department of Education's Institute of Education Sciences (which houses the statistics agency), subsequently offered vague assurance that the most damaging of these proposals would probably not be implemented. Let's hope he's right.

The last twenty years have seen calls for greater accountability from higher education, accompanied by the growing influence of the U.S. News and World Report college rankings. College officials complain that the rankings, which purport to measure college quality, improperly emphasize inputs and resources rather than what happens on campus. But in response to accountability demands, they argue that the work of their institutions is too complex, too varied, and too ephemeral to be reduced to simple output measures. Although there is merit to both claims, the quest to improve college quality is far from hopeless.

Several relatively new college-quality initiatives show such promise that the Secretary's Commission cited them by name. Colleges and universities participating in these projects have access to sophisticated assessments of effective educational practices (from the National Survey of Student Engagement, or NSSE, and its community college counterpart, CCSSE) and of their students' critical thinking, analytic, and writing skills (from the Collegiate Learning Assessment, or CLA). NSSE and CLA send participating institutions confidential reports showing how they perform relative to their peers; CCSSE posts results on its website. This is valuable information that presidents, deans, department chairs, and faculty members can—and do—use to improve the quality of college education.

But the Commission and the Secretary want more information that students and parents can use to compare institutions. The Secretary often complains that she has access to more comparative information when buying a car than when investing in her children's college education.

So the statistics agency proposed adding an "accountability" section to its annual compilation of college and university data. In the first phase, colleges would be asked which assessments they participate in, whether they post the results online, and the corresponding Web address. So far, so good—many institutions post this information, and this would make it easier to find. The mischief begins in the second phase, wherein institutions would report the assessments they participate in and their "score" on each one. Knowing which assessments a college uses is a good idea, but reporting scores to the government would do far more harm than good.

Why? Let's set aside the problem of reducing complex assessments to a single institution-wide score. (If you had one score for every auto maker, would that help you choose the best station wagon?) The real danger is transforming a diagnostic exercise into grading and ranking. It's one thing for college officials to have a confidential report from a sophisticated assessment identifying where improvement is needed. It's quite another when that information is made public; the emphasis shifts quickly from diagnosis to damage control (although CCSSE results are public, community colleges do not compete in national and regional markets the way four-year institutions do). And recall that these are voluntary assessments that institutions pay to participate in. If your doctor and financial planner posted your physical and fiscal health on the Web, would you see them more often? Would you see them at all?

It doesn't take much in the way of critical thinking skills to see where this leads. If the Department doesn't produce rankings, others will. In NSSE's case, students' survey responses will determine their college's standing and, by extension, the value of their degree. So students will act in their own self-interest to make their college look good, compromising the fundamental requirement for useful information: candor. More likely, though, colleges will simply opt out, as they surely will for performance-based assessments like the CLA, because participation would risk too severe a public-relations penalty. Thus would an ill-conceived push for consumer information drive colleges away from the most promising assessment and improvement initiatives in decades.

Higher education institutions must systematically assess and improve their performance. But not all diagnostic information is suited to accountability and consumer comparison, and a ham-fisted approach like this could sabotage important efforts to diagnose and improve colleges and universities.