Over 1,200 educators and leaders joined our recent webinar, "Practical AI for Higher Education: Designing, Assessing and Innovating," featuring Dr. Luke Hobson. Early in the session, Luke walked through how D2L Lumi supports course development inside D2L Brightspace. As he described it, D2L Lumi works as a course development sidekick that helps instructors build and refine materials more efficiently. His example helped set the stage for a wider conversation about AI in teaching and learning.
We know not everyone has access to D2L Lumi, so the session moved into a broader exploration of other practical tools and strategies educators can use right now. The chat moved quickly, and many questions surfaced about assessment, integrity, agency and workflow support. We gathered the questions that came up most often and collected the answers below.
Missed the webinar? Watch it now.
What are practical assessment design strategies for large, online, asynchronous courses in an AI era?
The request Hobson hears most often from faculty is straightforward: they want guidance on rethinking assessments now that AI is part of the learning context. As he put it early in the webinar, many colleagues want to know "what exactly can you do" to make assessments more AI-resistant and still meaningful at scale.
His encouraging message was that you don't need entirely new strategies. What you need are assessments that surface student thinking, not only the final artifact.
Approaches that work at scale:
- Prioritize evidence of process. Ask for drafts, checkpoints, decision logs, short rationales and moments that make thinking visible.
- Include teach backs. Students record short teach backs where they claim a concept and contribute to a shared library.
- Build peer interaction into the learning evidence. Peer review and critique reduce sameness and help instructors see reasoning.
- Use flexible reflection formats. Written, audio or video reflections can all support deeper articulation of thinking.
- Turn AI into part of the assignment. Students critique or improve AI-generated work and explain what changed and why.
Hobson advised against relying on AI detection tools, noting that they’re often unreliable and can create real harm for students. He pointed out that institutions have already seen cases where false positives led to unnecessary academic consequences. His guidance was to design assessments that make student reasoning visible rather than depending on detection after the fact.
A simple rule of thumb: when assessments reward only a polished final product, AI performs very well. When assessments reward the journey, decisions, and context, students must do the learning.
What are agentic AI browsers (Atlas, Comet) and what is their impact on academic integrity?
This question surfaced repeatedly because attendees are already seeing tools that can browse, click and complete tasks inside websites. Hobson explained that an agentic browser feels like opening a regular browser with an assistant inside that can act. In a quick test inside an LMS, he asked Atlas to complete a weekly journal; it drafted the entry and then prompted him to submit it. That moment shows how quickly the integrity problem appears once tools can take action on a student's behalf, given the right framing or context.
Students are experimenting with these capabilities as well. One even posted publicly to a vendor to say the tool “does my homework,” a reminder that assessment design choices matter more than trying to catch everything after the fact.
Here’s how to design with this reality in mind:
- Include staged work and checkpoints.
- Assess the reasoning trail, not only the final submission.
- Localize the task so students must draw on personal context or course specific data.
- Build explicit and teachable AI policies into the course.
What should we do with group assignments and scenario-based work when AI is in the mix?
Many attendees asked how to keep group work meaningful when AI can create a first draft instantly. Hobson's recommendation was to shift the group's focus from producing a draft to evaluating and improving one. In his courses, he runs sessions where students work with AI-generated outputs and "tear them apart piece by piece and make something better," as he described it.
Here are some ways to translate that into practice:
- Assign roles that AI cannot replace, such as scenario designer, evidence checker, editor and reflector.
- Use scenario-based prompts that require realistic constraints and judgments.
- Ask groups to show before and after versions and explain the changes.
- Build shared libraries of critiques rather than shared drafts.
Scenario-based learning also translates well into online formats. Tools like Gemini Storybook, DALL-E and Runway help instructors create settings and characters that support applied decision making.
Do custom GPTs or Gems retain prompts and inputs for training, and can we feed them our own data?
This was one of the most important governance questions during the session. Attendees want to use custom assistants while protecting institutional and student data. In practice, many tools allow you to customize assistants with role instructions and knowledge sources so they respond in context. A Gem is Gemini's version of a custom assistant, comparable to a custom GPT.
Hobson explained that performance improves as you add high quality, relevant information. He also cautioned against uploading sensitive or regulated materials to public tools and suggested relying on institutionally approved or enterprise options when data sensitivity is a concern.
Practical guardrails:
- Do not upload regulated, confidential or identifiable student data into consumer tools.
- Document the sources the assistant can and cannot use.
- Require verification and human review of outputs.
Custom GPTs and Gems are best used for brainstorming, critique and options. Final decisions and high stakes tasks should stay with people.
Do learners need a paid plan to use custom GPTs or NotebookLM?
This question was often framed as an equity concern. Many tools have free tiers, although features change. Hobson encouraged attendees to explore the ecosystem and showed how NotebookLM can compile your materials and produce flashcards, podcasts, slide decks, infographics and conceptual maps. During the session, he confirmed that a featured workflow was available at no cost at the time of the demo. Design for uneven access and avoid placing outcomes behind paywalls.
Here is suggested guidance for equitable design:
- Do not require learners to pay for tools to achieve learning outcomes.
- Provide no cost alternative workflows.
- Check your institution’s licensed tools first.
D2L Lumi and practical AI workflows
Several attendees asked how D2L Lumi compares with general purpose AI tools used in teaching and learning. Hobson demonstrated how D2L Lumi functions as a course development sidekick inside Brightspace. The assessment generator uses Bloom's taxonomy to produce aligned ideas, and the scenario-based activity options help instructors focus on learning design. He emphasized that human review remains essential for quality.
The takeaway from the flood of questions is simple. Design for evidence of thinking, for iteration and for context. Keep assistance in support roles and keep judgment in human hands.
Tools and resources mentioned
Want to dive deeper into the workflows Luke highlighted? Explore D2L Lumi today.