
AI is reshaping how people and organizations work, and we feel that shift every day. Everyone working in education knows that, as a result, how we assess our students also has to change. Most of us agree on what’s needed: assessments that bring in AI in ways that mirror the real world while reducing the opportunities for students to get AI to do the work for them. So: more AI-enabled, less AI-vulnerable. Check. And lots of educators are already on that journey.

The real challenge is doing this consistently at scale: not on one module, or even ten, but across a hundred. That’s something I’ve been grappling with in my role as associate dean with responsibility for assessment in the Open University’s (OU) Faculty of Arts and Social Sciences. The OU is the largest university in Europe by student population, with approximately 140,000 directly registered students. The Faculty of Arts and Social Sciences is the largest faculty in the university, with 50,000 students across 16 subject areas supported by around 1,700 staff. Metaphorically speaking, it is an oil tanker; changing direction takes time and commitment from all involved.

Since the core of this issue is assessment design, our first step has been to undertake an assessment mapping exercise. This has involved looking at every assessment task on every module in every qualification in the faculty, asking:

  • What are the vulnerabilities and potentials of the task?
  • What changes are needed to address them?
  • How much time and work will it take to do that?
  • What priority should each change take, given that we can’t change everything at once?

This sounds straightforward, but when you are dealing with hundreds of assessment points, it becomes a major undertaking. We achieved it by devolving responsibility to the most practicable local point, our qualification leads. These colleagues liaised with module leaders below them and coordinated with school-level directors of teaching above to produce comprehensive reports on their programs. 

This mapping has, as you might expect, revealed a mixed picture. Many assessments are already AI-ready; others are less so, particularly given the traditional reliance of arts and social science subjects on the essay. Those less-ready assessments have been prioritized for updating in the 2026-27 academic year, with module leaders responsible for that work. The mapping has also prompted us to think harder about which “AI use category” each task is assigned to. These categories are:

  1. You must not use AI
  2. You may use AI
  3. You must use AI

Our aim is to move the bulk of our assessments to Category 2, which best reflects the heterogeneous ways and varied extents to which AI has infiltrated life beyond study. Category 3 is kept for tasks where the use of a particular tool is core to the learning being tested. Category 1, meanwhile, is used for instances that require students to demonstrate that they can complete the task themselves. These must consequently be designed to be as AI-resistant as possible.

Crucially, however, when we rethink assessment for an AI-changed world, the focus cannot rest on isolated tasks. No single assignment can ever be fully protected from AI misuse, and no single task can capture the full range of AI-related skills a subject now demands. What matters is the journey across the whole qualification: a mix of formats, approaches and levels of authenticity that together build capability, confidence and ethical practice. That’s where our qualification leads come back into the picture. By coordinating the design of a diverse suite of assessments, they and their teams are seeking both to prepare students for the realities of AI-enabled work and to reduce the opportunities for inappropriate AI use. This approach reinforces learning cumulatively rather than hinging on any one specific point.

Looming large across this work is the specter of academic misconduct. Indeed, it is the growing amount of AI misuse, as well as the need to bring AI into the learning experience in positive ways, that energizes many colleagues. When you’re dealing with 50,000 students, referrals for suspected misconduct really mount up. We know that with AI, the detection of misconduct is an arms race that cannot be won. We also know that for students, and even in some cases for staff, the line between legitimate and illegitimate use is not at all clear, and that a considerable amount of misconduct is therefore inadvertent. The faculty is consequently introducing new teaching that looks at how AI works, what it does and does not do well, the ethics around its use, how to critically assess its outputs and how to use and cite it appropriately in assessed work.

To make any of this happen, staff across the faculty need to know more about AI’s ever-changing capabilities and what they mean for the education landscape. Given our size, that is a lot of people to engage and bring with us. For that reason, we have instituted AI and assessment leads in each of the faculty’s three academic schools. These colleagues are workloaded to maintain scholarly currency on the impact of AI on the subject areas in their school and to provide expert advice and development to staff involved with assessment. That starts with assessment setting but also includes teaching AI skills, supporting students to succeed in our revised assessments and actually marking those redesigned tasks.

The work described here represents the first steps on a longer road. We recognize that, as long as AI keeps developing, our practice as educators will need to keep changing in response. Ultimately, the issue of AI and assessment needs to become just another thing we think about as routine, like inclusivity, diversity and employability. Our aspirations as a faculty, therefore, are as follows:

  • Assessment and teaching that bring AI into qualifications in ways that are appropriate to the subject and the wider world
  • Assessment that is less vulnerable to AI, leading to less misconduct, whether purposeful or inadvertent
  • Students who understand the ethical and practical implications of using AI, especially in relation to academic integrity, as well as what it is and is not good at
  • Teachers who grasp the impact of AI on assessment and learning and can adapt their teaching and marking accordingly
  • Staff for whom designing and delivering assessment with AI in mind becomes part of their everyday pedagogical practice

Achieving these aims will not be easy, even at the local level. It is, I would argue, a lot harder at scale. Nevertheless, we are confident we will get there. We cannot slow the pace of AI development, nor would we want to. But we can shape how prepared our students are to take advantage of the opportunities it offers.
