AI is Everywhere. 
Except in the Rulebook.

As GenAI becomes commonplace in higher ed, institutions are struggling to define responsible use.

The Findings

According to the Time for Class report, 42% of students, 40% of administrators and 30% of instructors use GenAI tools weekly or daily.

Yet only 28% of institutions have formal policies in place to guide its use.

The Breakdown

“We’re no longer in this experimentation phase. AI is becoming a part of the academic ecosystem,” said Dr. Ford. Without clear institutional strategy, she warns, we risk inequitable access, increased faculty workload and ethical ambiguity. Faculty are often left to navigate these grey areas without the training or support needed to integrate AI meaningfully into their pedagogy.

Catherine Shaw offered a complementary perspective, noting that “just because you have a policy doesn’t mean it shouldn’t evolve or constantly be curated and revised.” Institutions that rushed to ban AI early on are now reconsidering, while others are opting for more flexible approaches that allow departments and instructors to define responsible use in their own contexts.

Justin Rose, associate vice president for information management and digital learning at Southeastern University, agrees that policy is essential—so long as it’s adaptable. He emphasized that guardrails around AI usage are important not only from a product standpoint, but also in terms of sociocultural and psycho-emotive considerations. These dimensions, he shared, are equally crucial when introducing AI-enabled technologies into learning communities.

The Takeaway

Institutions need to shift from reactive bans to proactive, thoughtful strategies that support responsible and equitable AI use. 
That means:

Creating adaptable policies that reflect real classroom use

Supporting faculty with training and guidance on AI integration

Considering ethical and emotional dimensions alongside technical ones

In Practice

How Southeastern University Is Taking a Community-Centric Approach to Building AI Guardrails

To develop a “strong but flexible” policy, Southeastern has taken a cross-functional approach to building guardrails from the very start.

“Instead of developing AI policy exclusively within the context of academic affairs, we’re doing it with human resources, with information technology and with other various divisions of the institution. It has to be a community-driven endeavor.”

Justin Rose

Associate Vice President for Information Management and Digital Learning, Southeastern University
