President’s Monthly Message

This brief essay is another in a monthly series of conversation-starters about key aspects of our work as an accrediting agency. It will be updated at the beginning of each month, covering such topics as the role of sanctions, the work of a Commissioner, and our relationship with the US Department of Education. We will feature new initiatives and address important challenges that we face. Each month’s entry will then be archived for future reference. My goal is to enrich your understanding of this important aspect of higher education in our region. I hope you will find reasons to return regularly to this page. And I welcome your feedback.

Richard Winn, Ed.D.
President

September 2019

By the Numbers

In the work of accreditation, numbers really do matter.

For example, we strive for consistency in making decisions about the accredited status of member institutions. These are highly consequential decisions and we want to avoid any sense that a Commission’s action has been arbitrary, impulsive, or biased for or against any institution. Wherever appropriate, we inform our decisions by relying on carefully selected metrics – objective, comparable numbers that indicate an institution’s levels and trends in key performance areas.

An equally important use of numbers, primarily for the institutions themselves, was foreseen in the work of the quality guru W. Edwards Deming, who championed the notion that if something can’t be measured, it can’t be improved. A college’s Institutional Research (IR) department is busy cranking out data on institutional trends, demographic performance gaps, sector comparisons, resource allocations, student achievement, fiscal standing, stakeholder feedback, and dozens of other metrics. Decision-makers at every level – especially governing boards – use these data to ensure the achievement of the college’s mission and to foster improved effectiveness in every measurable area.

I fully endorse the clear and growing national call for higher education to rely more on the wise use of data. That call applies with particular force to accreditors. I would, however, offer some caveats about the use of data in the accreditation process.

The illusion of consistency. While “consistency in decision-making” is a worthy goal, the argument for it often rests on a flawed premise. I hear it argued that institutions performing at the same levels should receive the same accreditation decisions. This assumes that all institutions are identical in mission, circumstance, context, resources, history, and student demographics, and thus can be compared directly – apples-to-apples – with each other. This simply isn’t the case. Accreditors are constantly reminded of the distinctiveness of each member. The peer review process aims to appreciate the multiple defining characteristics of each institution and the complexity of its own journey toward excellence. This is why every comprehensive review is grounded in the mission and context of the individual college.

Begin, not end, with numbers. A college’s IR professionals know that “weighing the pig doesn’t fatten the pig.” Gathering the numbers doesn’t, by itself, produce improvements. Institutions still need to wrestle with the meaning of the numbers, then stage and monitor the resulting action plans. It would be risky if the numbers an institution uses to improve its performance were also used by the accreditor as the primary basis for accreditation decisions. Accreditors do require many of those numbers to be reported to the agency as part of annual monitoring and comprehensive reviews – much as a doctor makes sure a patient pays attention to blood chemistry during a check-up – but the locus of accountability for using those numbers must remain with the institution. For accreditors to treat these data as the primary basis for accreditation decisions would risk shifting that accountability for improvement to the agency. That is not how accreditors work.

Holistic, professional judgments. Accreditation decisions are not made by a computer or determined by an adding machine. They are made by humans who exhibit the remarkable capacity to make holistic judgments. Numbers inform but do not determine those judgments. Professional educators, bringing the seasoned perspectives gained by working within the sector, analyze the meaning of multiple numeric indicators, combined with narrative evidence, to draw summative conclusions. The process is more like a butter knife than a scalpel, which is sometimes troubling to those who long for the certainty of mathematical precision. While ACCJC has placed more than a few of its members on some form of sanction over the past decade, few if any of those sanctions were based solely on numbers; most rested on concerns about institutional practices.

The “bright lines” mythology. Influential voices in the national conversation about accreditation promote what are called “bright line” decisions. Essentially, if an institution’s performance drops below a pre-set, universally applied number (a “bright line”) on one or a few data points, the institution should be sanctioned or have its accreditation withdrawn. Some even urge that accreditors who decline to impose these bright lines should have their federal recognition withdrawn. Beyond raising serious concerns about the philosophy of accreditation, the bright-line argument requires ignoring the three caveats described above.

ACCJC seeks to incorporate a wise use of numbers into its accreditation practices. This is best illustrated by what we call Institution-Set Standards (ISS). The practice can be summarized in five simple steps: (A) The institution understands itself in all its uniqueness and selects appropriate performance indicators in keeping with its mission. (B) It obtains and evaluates its performance as portrayed by these metrics and establishes baseline numbers. (C) It sets aspirational performance goals – expressed both in numbers and in plans to move those numbers. (D) It monitors its progress toward those goals by regularly watching the numbers. (E) It owns the responsibility to adjust its plans based on trends and gaps in the numbers. ACCJC requires an ISS report from the institution on both the process and the numeric trends, not as the basis for making an accreditation decision, but to ensure that the institution clearly owns this process for itself.

Numbers are like a mirror by which an institution reflects on its performance. They are like a lens by which peer reviewers bring an institution into sharper focus. But numbers are not the institution itself.

Archived Messages

September 2019 – Are Accreditors Federal Agents? (Download PDF)
August 2019 – Are Accreditors Chained to the Old Model? (Download PDF)
July 2019 – What Students Need to Know about Accreditation (Download PDF)
June 2019 – Why “Dodging That Bullet” Matters (Download PDF)
May 2019 – “Learning Community” – It’s a Thing (Download PDF)
April 2019 – Descriptive or Prescriptive (Download PDF)
March 2019 – Accreditation’s Critics: Taking Them Seriously (Download PDF)
February 2019 – Do Accreditors Really Have Standards (Download PDF)
January 2019 – Take the Fear Out of the System (Download PDF)
December 2018 – Is It Bright Lines or Processes? (Download PDF)
November 2018 – Peer Review: A Key to American Educational Excellence (Download PDF)
October 2018 – Why I Love Community Colleges (Download PDF)
September 2018 – What’s the Alternative to Regional Accreditation (Download PDF)
August 2018 – To Sanction or Not to Sanction: That is the Question (Download PDF)
June 2018 – The Cost of Accreditation: Calculating the ROI (Download PDF)
May 2018 – All This Work with No Pay: Why Be a Commissioner? (Download PDF)
April 2018 – Compliance and Improvement: ACCJC’s Dual Mission (Download PDF)
March 2018 – One Region, Two Systems: Why? (Download PDF)
