As generative artificial intelligence (AI) becomes a regular part of our everyday lives, how have higher education institutions been adapting to its presence and use?
A new report, conducted in partnership with D2L and the Canadian Digital Learning Research Association (CDLRA), examines how faculty, staff and institutions are developing policies, regulations and guidelines around the use of generative AI.
“Generative AI in Canadian Post-Secondary Education: AI Policies, Possibilities, Realities, and Futures” includes 438 responses from administrators and faculty across 126 publicly funded Canadian institutions.
In this post, we’ll cover the report’s high-level findings, along with commentary from leaders in the U.S. higher ed landscape on how those findings compare with what they’re seeing.
Survey Results and Recommendations
The 438 responses to open-ended questions about generative AI collected by the CDLRA show that:
- many institutions are at the early stages of developing guidelines around AI usage—only 13% of surveyed institutions have policies in place around the use of AI and 24% of respondents didn’t know whether their institution had established guidelines
- AI use isn’t standardized; instead, individuals are adopting it on their own
- faculty and admin’s feelings about AI are mixed, ranging from optimistic to concerned—32% of respondents confirmed this sentiment
- over half of respondents shared concerns about the future relationship between AI and higher ed, including ethical use, cost, AI’s biases and limitations, and how it will impact the value of higher education
The CDLRA has a handful of recommendations based on the survey, including:
- Publicizing the institution’s stance on the use of AI to faculty and staff. This guidance would be most beneficial if it encouraged faculty and staff to experiment with AI instead of controlling how it’s used.
- Creating institutional plans for supporting the responsible use, or nonuse, of AI.
- Continuing to talk about the limitations and biases of AI and proactively engaging with AI designers about its future use.
- Talking with faculty and staff about what ethical AI use means in practice.
- Talking to faculty and staff about what they want the future of education to look like when considering the use of AI.
Guideline Development Is in the Early Stages
The CDLRA found that many Canadian institutions are in the early stages of developing guidelines around the use of AI. Consequently, AI use isn’t standardized among faculty or admin; instead, individuals are adopting it on their own.
MJ Bishop, vice president for integrative learning design at the University of Maryland Global Campus (UMGC), has heard similar stories in the U.S. “Institutions are cautiously optimistic about the use of AI. Many are ‘watching and waiting’ to better understand the implications of these new tools before jumping into making large investments in technologies or wide-sweeping policy changes,” she said.
Bishop said UMGC has taken an institutional stance of embracing AI tools and supporting responsible use for faculty, staff and students.
“UMGC has had a very progressive academic integrity policy in place for several years. So far, it’s proving to be a fairly straightforward matter to incorporate this philosophy about AI into guidance for students about how to use the technologies to support rather than undermine their learning from our academic programs.”
Depending on your institution and the policies already in place, a minor update like UMGC’s may be all that’s required. Rather than creating an entirely new policy dedicated to AI, small tweaks to an existing document, such as an academic integrity policy, could suffice.
Faculty and Admin Have Mixed Feelings About Using AI
The study findings also show that faculty’s reception and use of AI varies—from optimism to concern. While some want nothing to do with these tools, others have been using them to create lesson plans or rubrics and are encouraging students to use them as an “expert on the side” or as an aid in writing.
Terry Di Paolo, vice-provost of e-learning at Dallas College, said the CDLRA findings resonate with the perspective of many faculty and admin in the U.S.
“It’s a similar mixed bag of views, and I think the conversation has been dominated by AI as a ‘bad’ disrupter for the higher education space,” he said.
“The narrative around AI in higher education seems to paint it as making unwanted and dangerous incursions, with a focus on academic dishonesty, the need for policy or guidelines, and in some quarters, the risk of AI replacing the faculty member’s role,” continued Di Paolo.
“It’s important to temper our conversation with a reality that is far removed from this imagined sense of AI turning the higher education space into a hotbed of lazy dishonesty and fast-food-esque education.”
Bettyjo Bouchey, vice provost, digital strategy and operations at National Louis University, believes that getting the delineation between AI and human interaction right can help offload mundane tasks from faculty, freeing up their time to provide more personalized learning.
“AI opens up the possibility of understanding our learners more deeply than we ever could before,” she said. “In doing so, we can plan personalized learning and support structures perfectly designed for each learner and their specific needs. That’s nearly impossible to do today.”
Concerns About Bias, Cost and More
While the research found that many considered a future with AI in education inevitable, there were still areas of concern, including:
- understanding how to use the tools ethically and effectively
- ensuring AI is used to support, not replace, humans
- costs associated with implementing and maintaining AI
- how AI may devalue higher education
- eliminating bias in how institutions build and use AI
Bishop is considering some of these concerns with her own use of AI. “As I’ve engaged in generative AI to support my work, I struggle with many of the same questions that are being raised by the report,” she said. “Particularly with respect to ethical practice and concerns about bias as well as the need for more transparency around where the information is coming from.”
While Bouchey also has concerns about bias, she has ideas for reducing it. “I believe bias can be mitigated by supervised machine learning. Thoughtful quality assurance plans can spot-check the answers, solutions and guidance we get from AI.”
How to Prepare Institutions for Working With AI
One of the recommendations based on the study’s findings is for institutional leaders to make their stance, guidance and policies surrounding AI clear to faculty and staff. On top of this, the policies should support an environment of learning and experimentation when engaging with AI.
Bouchey notes that National Louis University is adapting policy language and creating a faculty and executive steering committee focused on leveraging AI for better teaching and learning.
“This fall, we’ll be partnering with another institution on a course they’ve designed that will enable our academic leaders to host communities of practice on the responsible and ethical use of AI in our classrooms,” she said. “We see it not only as a way of embracing the future but also as a direct responsibility we have to be training our learners for the future of work that will invariably involve the use of AI.”
Bishop agrees with Bouchey that training will play a part in the integration of AI in higher ed, specifically for faculty and staff. “I think we all could benefit from additional professional development on generative AI,” she said. “Particularly with respect to how we might leverage these tools to create more equitable learning opportunities and avoid the potential dystopic futures that AI critics envision.”
The role AI plays in the future of higher education is also important to Di Paolo. He believes institutions need to look beyond the classroom setting when considering the integration of AI into education.
“We need to consider AI’s impact on the wider operations of the college campus and the future world of work for college graduates,” he said.
“The very people college educators are getting to think critically and positively about AI as a societal tool are actually the population most at risk from the impact of AI in their career pathways and trajectories,” continued Di Paolo. “The real question we need to all be contending with is: Are we equipping our graduates for a future where the life course is disrupted by frontier technologies, including AI?”
The Future of AI and Higher Education
While faculty and staff’s opinions on the future use of AI in higher ed vary, Bouchey believes we can lean on AI to help faculty, staff and students learn and grow beyond historical limitations.
“We can be using AI to innovate higher education in unprecedented ways that our human minds can’t because we tend to come to brainstorming with our set of preconceived notions,” she said. “AI can not only collect and analyze data in nearly infinite ways, but also challenge our thinking and help us see new paths and new ways of being that could disrupt higher education at a much more rapid pace.
“We can be harnessing AI to keep us in a futurist mindset, helping us see it more clearly and rapidly, thereby helping us solve and respond to historical challenges in brand new ways.”
Read the Full CDLRA Report
Check out all the findings from the report to get details on the use of generative AI in Canadian postsecondary education.
Kari is a Content Marketing Manager at D2L who focuses on the world of corporate learning. She enjoys using her research, reporting, writing and multimedia skills to tell impactful stories.