Sun shines through the high windows of a modern university lecture hall. A group of students quiet down as their professor begins the session.
“Hello everyone and welcome to English 101, Shakespearean tragedies,” says Professor Lundy. “Let’s dive right in where we left off last week. ChatGPT, what did you think about the ending of Romeo and Juliet?”
“ChatGPT is at capacity right now. Get notified when we’re back,” says ChatGPT.
“Looks like somebody hasn’t had their morning cup of coffee yet!” jokes Professor Lundy. “Hey, Google, would you like to share your thoughts?”
“I’m sorry, I don’t understand what you’re asking,” replies Google.
“Google, it would be easier to provide meaningful contributions if you paid closer attention,” says Professor Lundy. “Hey, Google, what are your thoughts on the ending of Romeo and Juliet?”
“Sorry, I don’t have any information about that. But I found something else. Do you want to know ‘how do you end a Romeo and Juliet essay?’” says Google.
“No thanks, Google,” says Professor Lundy. “All right, who else would like to—”
“As a language model, I do not have personal thoughts or feelings,” interrupts ChatGPT. “However, I can tell you that the ending of Romeo and Juliet is widely considered to be tragic, as the two main characters, Romeo and Juliet, both die as a result of their love for each other. The play ends with a sense of despair and the idea that love can be destructive.”
“Okay, thanks for that insight, ChatGPT,” says Professor Lundy. “Anybody else?”
“Riffing on what you said, ChatGPT, I thought the true tragedy went beyond Romeo and Juliet’s spoiled love,” jumps in Jenn. “It was tragic that Romeo had to use such extreme deceit to be able to have a chance at love with Juliet. Even worse was the fact that their families pushed them into making the ultimate sacrifice so they could be together.”
“Thanks for that, Jenn. I like that you’re thinking critically about the text and bringing some of your own thoughts to your class contributions,” says Professor Lundy.
“All right, now seems like a great time to introduce your next assignment—rewriting the ending of Romeo and Juliet,” explains Professor Lundy. “And to help channel your inner Shakespeare, this will be a handwritten assignment.”
“I’m sorry, I do not understand,” say Google and ChatGPT in unison.
Generative artificial intelligence (AI) has created a lot of buzz recently. While we've adapted at some level to living with robots (odds are you have a Google Home or Alexa, or have normalized asking Siri for directions), generative AI content creation moves beyond leisure use.
It now threatens to replace skills previously thought to belong only to humans, like coherent writing, animation and design.
The use of AI content tools is rocking the higher education world. With little precedent to guide them, many institutions are trying to decide what role AI will play in higher education, if any.
Is using generative AI considered cheating? Possibly. What value does this tech potentially hold for students? Time will tell. Can AI and higher education work together peacefully to complement one another? Maybe.
While the implications of AI are vast—and your stance on its use is ultimately up to you and your institution—this blog post will explore the peaks and valleys of using AI writing tools in higher education and how they’re changing the academic landscape.
What Is Generative AI?
Generative AI is software that can create new content based on analyzing existing data and information. What it produces can range from blog posts and ad copy to images and art. In many cases, the user plugs in a query (What are your thoughts on the ending of Romeo and Juliet?) and “the software uses complex machine learning models to predict the next word based on previous word sequences, or the next image based on words describing previous images.”
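To make the "predict the next word" idea concrete, here is a toy illustration: a tiny bigram model that counts which word tends to follow which in a sample sentence, then predicts the most likely next word. This is a deliberately simplified sketch (the corpus and function names are invented for illustration); real generative AI relies on large neural networks rather than raw word counts, but the core idea of predicting what comes next from what came before is the same.

```python
from collections import defaultdict, Counter

# Toy bigram "language model": count which word follows which in a
# tiny sample corpus, then predict the most frequent next word.
corpus = (
    "the ending of romeo and juliet is tragic because "
    "romeo and juliet both die for their love"
).split()

# Tally next-word frequencies for each word in the corpus
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("romeo"))  # "and" -- it follows "romeo" both times
```

A model like ChatGPT does essentially this at enormous scale, conditioning on long stretches of preceding text instead of a single word, which is what makes its output read as fluent prose rather than word salad.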
The generative AI software under the most scrutiny in higher ed is ChatGPT, created by OpenAI. ChatGPT describes itself as “a large language model developed by OpenAI. It is trained on a dataset of conversational text, and can be used to generate human-like responses to text inputs. It is based on the GPT (Generative Pre-training Transformer) architecture, which is a type of transformer-based neural network. ChatGPT is one of the latest version of GPT, which is fine-tuned for conversational text.”
Institutions of higher education around the globe are debating whether to ban it outright, find ways to detect its use, adapt to using it, use it as a teaching moment, or a combination of all these options where applicable.
Banning Generative AI in Higher Education
One of the most obvious reasons that using AI-generated content is frowned upon in higher education is its impact on academic integrity.
Instead of students hitting the library, sourcing credible references and crafting a convincing assignment, they can simply drop a query into ChatGPT and have the robot do the work for them.
Whether an entire paper or just bits and pieces are written with generative AI, its use still raises questions of academic dishonesty.
One solution, like that chosen by the New York City education department, is to ban ChatGPT altogether on school devices. While this ban doesn't extend to higher education institutions, it's still an option when deciding what to do with this new tech.
Would banning ChatGPT show a strong stance on how an institution feels about the abuse of generative AI and about academic integrity? Yes. Will blocking this tech on campuses stop students from accessing it? No.
So, what other options are out there?
Detecting Generative AI Content
Another way to help curb students' enthusiasm for abusing AI content tools in higher ed is to detect it. Tech tools have been created, or are in development, with built-in elements to flag submitted writing produced by our robot friends.
Some AI content detection tools are already in the works.
One important thing to keep in mind when pondering the detection route is that tech isn’t perfect. While it can be highly effective, odds are that at some point a student will be wrongly accused of using generative AI content. It might be better to use these detection tools to initiate a secondary evaluation of the content or to have a discussion with the student about their sources.
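As a rough illustration of why detection is error-prone, consider this hypothetical toy heuristic (not how any real detector works; the function and threshold are invented for illustration) that flags text whose vocabulary variety falls below a cutoff. Real detectors use far more sophisticated statistical signals, yet they can misfire for the same underlying reason: human writing sometimes looks statistically "machine-like."

```python
# Toy "AI text" heuristic, purely illustrative: flag text whose ratio of
# unique words to total words falls below a threshold. Natural human
# repetition can trip it, producing a false positive.

def looks_generated(text, threshold=0.7):
    """Return True if the text's vocabulary variety is below the threshold."""
    words = text.lower().split()
    if not words:
        return False
    variety = len(set(words)) / len(words)
    return variety < threshold

# A perfectly human sentence with natural repetition gets flagged:
human = ("the play ends with the deaths of the two lovers "
         "and the grief of the two families")
print(looks_generated(human))  # True -- a false positive
```

The false positive here is the whole point: any statistical test draws a line somewhere, and some honest student work will land on the wrong side of it, which is why a detector's verdict should start a conversation rather than end one.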
Adapting to the Use of Generative AI in Higher Education
To discourage interaction with AI content tools, some institutions are opting for a more old-school method of course and assignment delivery.
There are also a variety of ways an agile LMS like D2L Brightspace can help circumvent the use of AI in the classroom. Some Brightspace examples include:
Video assignments: Incorporating simulation, presentation or Q&A assessments through video submissions allows institutions to maintain a blended learning approach with elements to ensure authenticity.
Multistage written assessments with feedback: Providing feedback throughout an assessment not only helps build the human relationship between instructor and student, but also provides more opportunities to check the status of the work. Brightspace simplifies the process by asking students to turn in early drafts for annotation and comment, rather than relying on a single high-stakes submission.
Clear instructions and proper sourcing: Using digital rubrics in Brightspace makes it easier to provide requirements to students and outline how their work will be assessed, and even incorporate authentic assessments.
Technology is going to continue to evolve whether we want it to or not. So instead of rejecting it, some institutions are welcoming generative AI into their courses with the purpose of teaching students about its usage.
As reported in The New York Times, Furman University in South Carolina and the University at Buffalo in New York are going to make generative AI part of the discussion on academic integrity.
Instead of leaving students to their own devices, educating them on the software's benefits (idea generation and inspiration) and drawbacks (plagiarism and false information) can set them up to make the morally right decision when submitting work.
The language model software can also be used to generate content that can be analyzed or, as suggested in Forbes, fact-checked by students. By taking a closer look at the software, students can better understand its flaws and take these into consideration when determining its use in their academic activities.
Learning to Live With AI Content Tools in Higher Ed
Using software like ChatGPT can affect two things most students care about: the quality of their education and their readiness for work. Most students will presumably want to get the most out of the education they're paying for, and to gain the skills needed to survive in the job market after graduation, including being able to write comprehensively.
On the flip side, students who may just be looking to get a degree that will land them any job after graduation may have closer relations with the robots.
In the end, it’s up to each institution to decide how to coexist with generative AI content and ride the wave as this tech—and others—continues to evolve.
Kari is a Content Marketing Manager at D2L who focuses on the world of corporate learning. She enjoys using her research, reporting, writing and multimedia skills to tell impactful stories.