Assessment drives the curriculum. When I first heard it framed like this, I was thoroughly confused. Did this mean my midterm, final exam, and my quizzes marked the direction my course would take—that the most important driver in my course was the tests? That couldn’t be right, could it? Well, the short answer is: Yes. It took me a few weeks to unpack this idea, and when I did, I was pleasantly surprised at what I discovered.
While in my mind my exams accounted for only a small percentage of students’ overall grades, they were an essential part of their learning process. After all, my final exam was the culmination of all my students’ studies, readings, activities, discussions, and homework. All of their cumulative work during the term led up to this big assessment, and their exam performance would, in turn, reflect that work.
What I was slow to discover was that assessment can actually drive the process of learning and motivation in a positive direction. The way it does this best is through regular and constructive feedback that both marks progress and identifies possible improvements. Students clearly need information about whether they are learning intended subject matter and skills. Instructors need to know whether their pedagogical approaches are helping each student and helping their entire classes progress.
What instructors learn from students’ answers on an assessment can be used to identify and narrow the gap between current and desired levels of student knowledge and performance. This assessment data can yield diagnostic information about a student’s misunderstandings. When grounded in a well-defined learning model, it can guide an instructor’s decisions about how to adjust instruction while simultaneously guiding a student’s decisions about how to revise their work and adjust their learning processes. It is in this sense that assessment really drives the curriculum.
It’s also important to note that not all types of assessments and learning feedback are equally effective. For example, for assessment feedback to be effective, it should:
- Focus on the tasks and learning targets in a detailed and narrative manner, not simply be evaluative and graded;
- Be delivered in a way that is supportive and aligned with the student’s progress;
- Be delivered at a time when a student can benefit from it; and
- Be delivered to a receptive learner who has the self-efficacy needed to respond (Andrade, 2016).
Recent studies are contributing to a more nuanced understanding of the features of effective assessment feedback. In a webinar with Dr. Erin Crisp, associate vice president of innovation at Indiana Wesleyan University, we explored assessment feedback and data-informed decision-making in more depth.
In her research, Crisp found that when learners receive assessment feedback that is content related (not “good job” or “review comma rules”) and when it is clear that the feedback was provided for them as an individual and not copy/pasted from a comment bank or autogenerated by ed tech, they learn significantly more. They are also more satisfied, as measured by their end-of-course survey responses.
Crisp notes that “the primary barriers to overcome related to feedback are that it is time-consuming to provide individualized and content-related feedback, and instructors are disincentivized every time they encounter students who do not read or use the feedback in future attempts.”
Crisp also likens the design of a course to the design of a building; how a building is designed can clearly influence or change a person’s behavior. Similarly, how assessments and feedback are designed can also greatly impact a student’s way of thinking and behaving: “The more accurately and effectively we can gain insight into the way students are experiencing feedback in a course, the more we can work to improve their experiences.”
As it stands in higher education today, many courses are reviewed only once every four to five years. That’s potentially a lot of underserved learners. Appropriate learning analytics served up in an accessible dashboard could help learning professionals and administrators intervene on behalf of students.
Reporting on this level of quality in online learning is often a long-delayed process. Accreditor visits are few and far between, and academic departments often lack business intelligence tools and analysts that are common in other specialized industries today. Crisp notes, “Many stakeholders want data around quality, but this is a real area for growth from what I’ve seen. What if we agreed that, at least to some extent, feedback is so essential to learning that gathering feedback experience data could provide that first-look ‘temperature check’ function that could flag stakeholders regarding the potential for a concern?”
Looping back to our initial premise—that assessment drives the curriculum—it is clear that regular and well-designed assessments that provide content-specific and individualized feedback are essential for effective student learning experiences. Assessment and course design affect the extent to which students receive, use, and can apply instructor feedback. And by capturing and leveraging data around the multiple dimensions of feedback, instructors can intervene to improve student learning experiences more efficiently and effectively.
Finally, I might amend our initial premise to suggest that while assessment drives the curriculum, quality feedback is the engine that powers learning forward.
To learn more about the importance of assessment and feedback in course design, please check out our webinar recording with Dr. Crisp. You can also read more about her research in this Educause article.
Andrade, H. (2016, unpublished). Classroom assessment and learning: A selective review of theory and research. Paper commissioned by the Committee on the Science of Practice and Learning, National Academy of Sciences, Engineering, and Medicine, Washington, D.C.