Like a newspaper retraction that goes largely unnoticed, at least compared with the original sensational article, the recent announcement that the Department of Education had determined Western Governors University was not a correspondence school after all received relatively little attention.
For people new to the world of competency-based education (CBE), it may be difficult to comprehend the chilling, even paralyzing effect of the Office of the Inspector General’s 2017 finding that Western Governors University had failed to meet federal requirements for interaction between students and faculty members and was therefore offering correspondence education. After all, if WGU, the poster child for mainstream CBE, could be found out of compliance (and potentially on the hook to return $713 million in federal student aid), where would that leave the rest of us? The finding hinged on a perversely narrow reading of the requirement for “regular and substantive interaction”: the OIG seemed to be saying that it was not enough for such interaction to take place; it also had to be initiated by the faculty member, not by the student. Worse, the finding set the OIG against the Department of Education itself, which under the Obama administration had championed innovations such as CBE, as well as the regional accreditors that had approved WGU and other CBE institutions, creating unpredictability for institutions, the Department, and the accreditors alike. As a result, applications for direct assessment, originally intended to be the pathway toward CBE, shriveled up. (Full disclosure: Volta’s Managing Partners were, respectively, the founding Executive Director and Chief Academic Officer of Southern New Hampshire University’s College for America, the first institution to be approved for direct assessment.)
There is much to critique in the original audit report, but the most pernicious aspect struck at the heart of both CBE and innovation: the notion that learning takes place only where there is teaching, or even more narrowly, an instructor. The recognition that teaching (an input) and learning (an outcome) are not the same has been a commonplace of education for several decades now, but no one seemed to have told the OIG. In fact, research consistently shows that environments encouraging passivity are among the worst for learning, which is too often the case in huge lecture classes despite the presence of a professor on the stage. Why did the OIG consider that acceptable?
The accepted and familiar model of passive education has serious real-world consequences. Why are so many employers disappointed with the quality of the shiny new graduates who apply for jobs? In part it is because so many students have been deprived of agency and meaningful opportunities for self-direction. What they have learned is how to take (often multiple-choice) tests and write five-paragraph essays, not how to wrestle with complex ideas and apply knowledge to solve real problems. But “self-directed learning” is neither a luxury nor some newfangled idea. You can trace it back to Dewey (maybe even Aristotle) through Knowles and beyond. And at the end of the day, in some sense all real and deep learning is, and has to be, self-directed; no one else can learn for you.
But the possibility that learning (and, let’s be honest, education worthy of taxpayer money) might not be defined by the presence of a faculty member is an affront to the traditional faculty corps. They and others worry that if learning is not defined by faculty-student interaction, the gates of hell will open and students (and by extension the taxpayers) will be subject to abuse by diploma mills and other corporate malefactors eager to take advantage of deregulation and laissez-faire “oversight.” There is indeed legitimate concern about the specter of bad actors (primarily for-profits) taking advantage of lax rules, as occurred when the rules for online learning were loosened. Especially in the current climate, it is not paranoid to wonder whether what is called “enabling innovation” is just a cynical ploy to enable corporate, not student, interests. All this will be thrashed out in the “neg reg” process, the negotiated rulemaking now underway. But here’s what we can’t lose sight of: the status quo is unacceptable. It fails too many students. It is too expensive, too inflexible, too ineffective, and it leaves too many students with neither credentials nor competencies, just debt. This is not a small or isolated problem. The vast majority of college-going Americans attend non- or marginally selective schools with lousy graduation rates. In fact, almost half of those students do not graduate. They arrive unprepared for college and leave unprepared for work.
There is another choice: for higher education, including the Department and accreditors, to do the hard work of defining, assessing, and communicating learning. What, at a minimum, should someone who earns a specific credential at College X know and be able to do? How has that learning been demonstrated and measured? Credits are not very helpful in this regard; they communicate nothing about expectations for and evidence of learning. And if you’ve somehow missed Amy Laitinen’s wonderful article on the origins of the credit hour, also called the “Carnegie unit,” take time to read it now.
This is a problem we can solve. Competency-based education, at its best, provides a way: by insisting on transparency, evidence, and relevance, and by opening up the possibility that the presence of a faculty member is less important than the functions that faculty have traditionally served, e.g., subject matter expertise, curriculum development, assessment, and coaching. Of course, faculty can be and often are an important part of the equation, but they are not the reason for higher education: student learning is.
By Kate Kazin