Over 1,200 educators and leaders joined our recent webinar, Practical AI for Higher Education: Designing, Assessing and Innovating, which featured Dr. Luke Hobson. The energy was electric from the start. Questions streamed in, reactions shifted in real time, and the chat revealed a community trying to understand not only what AI can do, but how fast these capabilities are reshaping everyday teaching practices.
What stood out most was how many attendees arrived wanting clarity rather than hype. They were looking for grounded examples, honest takes on risks and opportunities, and practical steps that could work in large online environments where scale is always a factor.
Hobson captured the mood early on. He described the moment as one where educators are balancing excitement with caution, sharing, “I am a realist when it comes to AI. I see innovation and creativity, but I also see drawbacks and concerns.”
That spirit carried through the entire session. People wanted to experiment, but with intention. They wanted to understand what is possible, but also wanted to know where to place guardrails. The moments below reflect the themes that resonated most.
A Sense of Optimism and Realism in the Room
The combination of curiosity and uncertainty shaped the first part of the webinar. Attendees expressed eagerness to learn, along with a desire for workflows that feel sustainable. This was not a group chasing shiny tools. Instead, they were trying to understand how to navigate a landscape where capabilities change weekly and where many institutions are still refining their AI policies.
Hobson spoke candidly about how he approaches this environment. He said that educators should not treat AI as a threat or a shortcut, but as a set of tools that require careful design choices. The chat echoed this sentiment, with participants acknowledging both excitement about what AI can streamline and anxiety about where it might overreach.
Some of the themes that kept surfacing included:
- how AI can free time for higher-order teaching work
- how quickly new tools introduce challenges that older assessments cannot handle
- how important transparency and communication are for students who are also trying to find their place in an AI-supported world
This opening set the tone for a conversation rooted in both possibility and responsibility.
The Urgency Around Rethinking Assessment
Assessment design became one of the most discussed topics in the session. Hobson explained that this has become the number one request he receives from faculty who are facing questions about authenticity, originality and scale. He shared that people aren’t always looking for completely new assessment formats. Rather, they’re wondering what they can do to make assessments more resilient in online environments where students have access to powerful AI tools.
The conversation focused on the idea that the best protection is not detection. Hobson noted that AI detection tools create false positives and have already led to situations where students faced consequences for work they did complete. Instead, the goal is to build assessments that require visible thinking and unique student input.
Approaches that resonated included:
- short checkpoints that break work into smaller, visible steps
- evidence of decision making through drafts and rationale notes
- teach-back recordings that demonstrate understanding in a personal way
- opportunities for students to evaluate or improve AI-generated work
The discussion connected philosophical concerns with practical actions. Attendees wanted to redesign assessments in ways that are fair, transparent and workable for large online courses.
The Moment Agentic AI Browsers Took Over the Chat
One of the most memorable moments came when Hobson showed how agentic AI browsers like Atlas and Comet work. He described them as tools that act inside a browser and can complete tasks when given permission. During a demonstration inside a learning management system, he asked Atlas to generate a weekly journal entry. The tool created the full response and then asked if he wanted it to be submitted.
“It wrote all of that text and then asked if I wanted it posted,” Hobson said.
The chat reacted immediately. Participants began sharing concerns that students might already be using these tools. Hobson referenced a student who publicly thanked a vendor online for a tool that could finish homework for them. This reinforced why assessment redesign and clear AI guidelines matter more than attempts to catch every instance of AI use after it happens.
This moment had one of the strongest engagement spikes of the session because it showed how quickly AI capabilities have shifted from generating text to completing actions across different systems.
A Creative Jump in Scenario-Based Learning
Scenario-based learning emerged as one of the most energizing themes. Hobson explained how he uses AI-generated content as a starting point rather than an endpoint. He described exercises where students take AI-produced drafts and, as he phrased it, “tear them apart piece by piece and make something better.”
Educators in the chat responded strongly to this approach. Many shared that they had wanted to build scenario activities but were held back by time, design needs or the difficulty of creating realistic characters and settings. Tools like Gemini Storybook, Runway and DALL-E give people new entry points into this work. These tools support instructional design by reducing the time needed to build rich, contextual prompts.
Not all attendees were immediately comfortable with AI-generated visuals. Some described them as uncanny or distracting. Others commented that quality has improved a great deal compared to earlier attempts. The chat reflected a community exploring what is possible while also evaluating what feels appropriate for their learners.
A Shared Recognition That AI Is Outpacing Course Development Cycles
Late in the webinar, Hobson shared a moment from a conversation with a professor at MIT who told him they “cannot keep up with the rate and pace at which AI is currently going.” Many attendees could relate to this. AI tools evolve quickly, while course update cycles are often slow, dependent on multiple teams or tied to institutional processes.
This challenge is part of why there was so much interest in D2L Lumi. Hobson explained that D2L Lumi works as a course development sidekick and can help instructors build modules, generate aligned assessments and produce scenario ideas from existing content. The key message was that D2L Lumi helps teams move faster while still relying on human expertise for accuracy, judgment and quality control.
The webinar revealed a community that is both practical and future focused. People want to use AI in ways that support learning rather than undermine it. They want assessments that reward thinking, scenarios that feel real and workflows that reduce friction. Above all, they want solutions that help them keep pace with an environment that changes quickly.
Missed the webinar? Catch the on-demand replay.
You can also explore how D2L Lumi supports course design, assessment ideas and more.