This multi-part Executive Briefing focuses on issues of relevance to senior decision-makers. Part I looks at the typical objections to end-of-course assessments and suggests that they can be addressed in ways whose benefits outweigh the risks. Part II will outline key executive considerations in revising an end-of-course assessment system to achieve the stated benefits.
As with other Executive Briefings where multiple issues of some complexity are involved, I will focus on assertions of potential value to you. For additional detail, send me an email or call.
The Long View
Since 1985, we have developed, validated, deployed, analyzed, and reported findings on more than 12 million end-of-course assessments administered in more than 600,000 courses. For many of these assessments, we have captured, taxonomized, and analyzed students' and faculty members' responses to open-ended questions about instruction, curriculum, other learners, the learning environment, and support services. These data sources have been integrated to answer the common and the not-so-common questions about the merit of end-of-course assessments. For example, these assessments have been analyzed for whatever secrets they can reveal about effective classrooms and instruction, grades, rate of learning, achievement, mastery, performance metrics, and even career progression.
This 30-year span of work has brought me face-to-face with more practical problems, more challenges to validity and usefulness, and more inferential dead-ends than I could have imagined in my early days, days I now recognize as naive.
Why Have End-of-Course Assessments?
The easy answer is that regional, national, and professional accrediting bodies generally require end-of-course assessments. However, these oversight bodies lack the expertise and the political will to set the kind of standards that would require assessments to produce and use valid information. For this reason, and because faculties have mixed feelings about student assessments, end-of-course assessments often suffer from low institutional priority. In the typical setting, instruments are poorly designed (usually by a faculty committee with no collective skill in measurement science or decision-support), comments are not gathered or are inappropriately treated, analyses are sophomoric, reports are neither timely nor customized for individual stakeholders, and there is no requirement that the findings be used intelligently.
If I were limited to offering one reason for implementing a sound end-of-course assessment system, it would be that it provides the first and best line of decision-support for effective process management. Contrary to conventional wisdom, processes are the only points at which you can affect outcomes. Managing academic quality by focusing on outcomes is akin to pushing on a rope.
For most of the 20th century, higher education defined quality based on inputs. A university was said to be of high quality because its faculty had the right degrees and its laboratories and libraries had the right stuff. Driven largely by external forces, higher education gradually and in no small measure begrudgingly accepted the idea that outcomes should be given more attention in its definitions of quality. Unfortunately, this gradual shift to outcomes, including competencies, takes us not to the present but only halfway to the modern metrics of quality. Were we in any other service industry, suitability to purpose would be our hallmark construct of quality, and we would be managing to that suitability via the close management of processes, not outcomes. In a process-managed environment, outcomes are measured, but they serve as confirmation rather than surprise. Like a turn-by-turn GPS, end-of-course assessments provide critical information that can be used to ensure that students, instructors, curriculum, and support services remain harmoniously on course, working in unison to achieve the desired outcome.
Talk - All of It Bad
End-of-course assessments have received an unjustified bad rap. Among the claims we hear most are that these assessments:
Are nothing more than a popularity contest
Cater to and reward negativism
Overrepresent isolated, atypical perspectives, not the mainstream
Correlate highly with grades awarded (good evaluations are easily “purchased” by awarding good grades)
Provide little or no valid information to guide instruction, curriculum, or university services
Are not taken seriously by students or instructors
Show little or no correlation with learning
Are invalid because students don’t have the capacity to make informed judgments about their educational experience
The Truth
Are any of these claims true and, if so, to what extent? The answer rests on the specific assessment process in question: not only its scientific and technical merit but also its institutional context, including setting conditions and patterns of use. In practice, many end-of-course assessments are designed and implemented so poorly that they guarantee the truth of some of these criticisms. Poor instrument development can produce skewed findings. Poor administration procedures can reduce student confidence in the process. Poor reporting procedures can stifle the development of skills essential to interpretation and use. A lack of senior administrative support (read: use) can undermine the seriousness of efforts required to implement and manage a good process.
Popularity contest? False. These assessments correlate highly with independent measures of teaching skill, including classroom management and feedback on student performance.
Mostly negative? False. More than 75% of student comments are about the learning environment and 50% are about faculty; of the faculty comments, 80% are positive.
Atypical? False. In fact, findings are more likely to be atypical when they suffer from primacy and recency biases, such as when they rest on one-off readings of single course assessments or on recollections of a few isolated comments that do not reflect mainstream judgment (excluded-middle bias).
Correlate with grades? False. The largest run we did was on a stratified random sample of 85,000 grades and assessment indices. The strongest correlation was 0.26 (accounting for less than 7% of the variance), and even that was better explained in other ways. This particular issue speaks to one of the great unscientific myths in higher education. (We do see a higher correlation with grades in instances of bad teaching.)
Not a good guide? False. Well-designed assessments provide detailed guidance on curriculum maintenance, areas in which instruction can be improved, and university services.
Not taken seriously? False. This myth probably arises from instructors' observations that students spend less time on assessments as they progress through their course of study. When we inserted reliability scales into these assessments, we learned that students are simply getting better at completing them, and that this skill shows up as reduced time-on-task.
No correlation with learning? Doubly false. The correlation is high but the explanations are complex. Let me know if you are interested in this dimension.
Students are not qualified to judge? False, bordering on ridiculous. We hear this one a lot from a small segment of those who teach. Let the incoherence, if not outright arrogance, of that claim speak for itself.
The Bottom Line
Our experience has demonstrated that even a modest application of the necessary expertise, combined with conscientious administration, creates significant benefits for students, faculty, content experts, university services, and the institution's planning function.