Episode Description
In today's episode, we're revisiting what has arguably been one of the hottest topics over the last few months: artificial intelligence and its impact on the future of education. To date, we've had professors discuss the pros and cons of generative AI and hosted experts who've broken down the tech behind tools like ChatGPT. Today, we'll be discussing different applications of AI in schools and universities, with a focus on strategy, frameworks and how it can transform business education.
To discuss this topic further, we welcomed Dr. David Lefevre, Professor of Practice in the Management and Entrepreneurship Group at Imperial College London. Dr. Lefevre and Dr. Ford chatted about:
- How educational institutes can cut through noise and gain clarity around AI.
- How humans and AI can potentially work alongside each other for optimal outcomes.
- The importance of AI regulation.
- Why it's crucial for schools to have a "digital/AI-first" approach when building systems if they wish to truly transform learning.
Show Notes
01:13: An introduction to Dr. David Lefevre.
03:11: Why Dr. Lefevre left Imperial's ed tech lab and moved to AI research.
06:03: Dr. Lefevre explains his current role at the Center for Digital Transformation and how he hopes his work will help educational institutes cut through the noise and get a clear view on AI.
12:31: The impact of AI on society and how humans and AI can potentially work alongside each other for optimal outcomes.
16:31: The challenge with defining and breaking down tasks.
20:25: Dr. Lefevre stresses the importance of educational institutes building a digital/AI-first system, rather than layering generative AI tech on top of existing systems to truly transform learning.
22:24: How MOOCs radically altered education.
26:46: How we can mitigate bias in AI and how the Center for Digital Transformation is evolving their approach based on ethics and bias, if at all.
28:58: Why AI will likely need to be regulated, in the same way that news stations are.
32:26: Dr. Lefevre shares news about Tutello, his AI and human-powered tutoring and support platform.
40:31: Dr. Lefevre's advice for schools and universities with respect to AI.
Full Transcript
Dr. Cristi Ford (00:00):
Welcome to Teach and Learn, a podcast for curious educators, brought to you by D2L. I'm your host, Dr. Cristi Ford, VP of academic affairs at D2L. Every two weeks I get candid with some of the sharpest minds in the K-20 space. We break down trending educational topics, discuss teaching strategies, and have frank conversations about the issues plaguing our schools and higher education institutions today. Whether it's ed tech, personalized learning, virtual classrooms, or diversity and inclusion, we're going to cover it all. Sharpen your pencils. Class is about to begin.
So welcome back, listeners. As we wrap up season one, I wanted to revisit what has arguably been one of the hottest topics over the last few months: artificial intelligence and its impact on the future of education. On this show, we've had professors discussing the pros and cons of generative AI and experts breaking down the tech behind tools like ChatGPT.
And today Iâm excited to be discussing different applications of AI in schools and universities, with a focus on strategy, frameworks, and how it can transform business education.
My guest today is a truly entrepreneurial academic. Listeners, it's a pleasure for me to introduce to you Dr. David Lefevre.
Dr. Lefevre is a professor of practice in the management and entrepreneurship group where he explores the applications of AI to education. He formed the Imperial College Business School's Edtech Lab in 2004 to explore the use of digital technology in higher education. The lab now delivers more than 200 online modules each academic year and has been the recipient of numerous awards. In 2022, David stepped down from leading this team and is now focused on AI strategy, adoption and infrastructure development. David is actively involved in tech transfer and entrepreneurship. He is also an active investor in EdTech ventures, and an expert in residence at the Imperial Enterprise Lab. David, thank you so much for joining us here today.
Dr. David Lefevre (02:11):
It's a pleasure, Cristi, and thank you for that comprehensive introduction.
Dr. Cristi Ford (02:14):
Absolutely. I will just tell listeners, I have been really excited about having you join this podcast, as I've known of you and watched your work for over a decade. I was really inspired by meeting you back in 2010, 2011 on Penn State's campus. And so, this is really a wonderful conversation, a wonderful time for me to be able to reconnect with you.
Dr. David Lefevre (02:38):
Thanks Cristi. I'm flattered. Yeah, look forward to it.
Dr. Cristi Ford (02:42):
So, let's just jump right in here. We were really excited to have you here, and I want to talk a little bit about your current research. But before we do, I think it's worth highlighting that in the field of AI, research that came out two weeks ago is already old; the field moves at such a rapid pace. So to all of our listeners, please know our discussion is accurate at the time of this recording, but we can't predict how fast these shifts will happen.
So, I'm going to jump in and just get a little background from you, David. You formed the first ed tech lab at Imperial, only to step away from that and move into the AI research side of things. Can you give us a little understanding of what the impetus or rationale behind that choice was?
Dr. David Lefevre (03:28):
Yeah. So I mean I've been involved in digital education for a very long time. I started at Imperial College around 2001, 2002. And for those older listeners, that was really kind of the birth of internet-based education. I think that was when you could see clearly that internet-based education was going to be possible: delivering high-quality education through the internet, through the worldwide web. And really that's been a sort of 20-year journey. So that was my career until just after the pandemic. That's what I did, and my job was really encouraging others to do it.
But about 2017, '18, I started working with our maths department at Imperial College, became very involved in AI, and saw clearly that this was going to be another major technological wave. So around 2018 or so, I decided I wanted to completely commit to AI and follow the next technological wave. Then the pandemic hit, so I had to delay my plans. But what really happened during the pandemic is that within the course of a few months, everybody could do the type of innovations that I was doing before, internet-based innovations.
So, for me, running a sort of education innovation team, there weren't many opportunities to progress in purely internet-based education anymore. I could see that very clearly. But luckily for me there's another technological wave coming. So yeah, last summer I stepped down from all my internet-based education roles, and I've been focusing squarely on AI. And now of course, thanks to ChatGPT, I'm in vogue and extremely busy already. Yeah.
Dr. Cristi Ford (05:34):
Fair enough. And I can attest to that. We reached out to have this conversation a couple of months ago, and it was really exciting for me. You have been an academic disruptor for such a long time, so it's good to hear you share your journey with our listeners. But I want to have you tell us a little bit more about the Center for Digital Transformation. Can you give our listeners a little understanding of how you engage around this work with universities?
Dr. David Lefevre (06:03):
Yeah. Well, my role in this is investigating digital transformation relating to AI and education. So I'm quite specific. The wider team is engaged in a much larger agenda across different sectors, but my particular focus is on AI and education, and really trying to see through all the noise and get a clear view on how AI will be able to disrupt education.
And it's a very difficult task, because just when we think we've gained some purchase on the problem, another avenue of exploration emerges. So we are steadily working through the possibilities, and the picture is becoming clearer. But this is an enormous topic, and I think your disclaimer is very valid. If you try to make predictions on what you're doing today, you're going to be out of date very quickly.
So what we're trying to do is see through this. We imagine this will be a 20-year life cycle, maybe a bit quicker; some people are arguing it'll be quicker. But the impact of the worldwide web on our world, that was a sort of 20-year cycle to maturity. My current guess is that it will be the same for AI. It might be much quicker, because we've never seen the level of interest there is in what OpenAI are doing; we haven't seen that level of interest before. So we might see an accelerated hype cycle. But however it happens, it'll pan out over quite a number of years. So we're trying to see beyond the current noise and take a longer-term view.
Dr. Cristi Ford (07:54):
Yeah. As you talk, I just wonder, specifically in your guidance and work with institutions, colleges, universities, and schools to help them get to grips with AI, what are the kinds of things that you're hearing? How are you engaging with these schools and universities, and how are you trying to prepare them and help them think about this from a strategy perspective?
Dr. David Lefevre (08:21):
Yeah. I mean I think there's a lot of astonishment around GPT-4 in particular. For those who have been following the OpenAI project, I think all of us saw a step change with GPT-4, and in particular around the release of ChatGPT. And I think our university community is currently, generally, astonished by the capabilities of this technology. Everyone has now jumped into the AI world, I think, and it's a mind-boggling world of opportunity because you can see applications right across your operations. So, I think at the moment what we're seeing is just lots of exploration, and the community is trying to figure out the impact of this.
And in addition, there are extremely practical problems, particularly around assessment. I think AI evangelists are trying to emphasize the benefits for course creation and automation, all these kinds of things. But there's a very practical problem in how we are going to assess students. There are lots of half measures, but we're going to have to change the way we assess students, or at least design a system that assumes students are going to use these technologies.
So I guess as a community now, all the conversations are around these large language models, and particularly GPT-4; I think the other models are yet to gain purchase. There's an enormous amount of discussion, an enormous amount of perhaps hyperbolic concern, enjoyment at the opportunities, enjoyment and astonishment in playing with the tool. Lots of interest groups are desperately trying to get to grips with assessment, because students are using it right now. But I don't know about you, Cristi; for me, I'd say I talk about GPT-4 maybe, I don't know, nine or 10 times a day.
Dr. Cristi Ford (10:24):
Yes. Every conference I go to, it's a topic of conversation for sure.
Dr. David Lefevre (10:29):
Yeah. And I think that's what everyone's trying to figure out. But from our perspective, from a strategy perspective, yes, we have to get a grip on the large language models. But there's going to be a whole series of AI systems that are going to hit us over the coming years. So this is the first of a number, I imagine. And we're going to have to form a view on how we engage with this as a community.
Dr. Cristi Ford (10:57):
Yeah. As you talked about your engagements with schools and universities, the astonishment, and really building capacity, I think we have seen some institutions originally wanting to just bury their heads in the sand, and then realizing, wait a minute, this is going to transform the way everything works in terms of how we engage with students. And I think what you mentioned about the assessment piece is so critical. How do we help students think about the benefits, and how do we help faculty think about this as a superpower, as opposed to something that's going to cannibalize the higher education system?
And so, I want to turn and talk a little bit about your research. In preparation for our conversation today, I loved hearing from Heather McGowan, who does a lot of work on the future of work. She had an interesting perspective in a piece I listened to recently, where she said that the way we value talent needs to shift. That we are stuck between an augmented era and an information era. And so, as institutions, as employers, we are still focused on acquiring knowledge and skills, which leads to codifying and transferring skills, as opposed to developing learning agility and resiliency to navigate through jobs. So I wonder what your thoughts are about this, and I would love to hear how your work and thinking about the transformation of different industries and higher education will be impacted.
Dr. David Lefevre (12:31):
Yeah. So, I think there are lots of ways to try and investigate the impact of AI on society. There are lots of ways in, lots of lenses you can use to view the problem. And one of the ways in which my colleagues and I are viewing it is that it's a continuation of the impact of technology on work. We have a whole series of technologies going back a couple of hundred years, and all of them have automated work or impacted work in some way.
The difference with AI is that, for the first time in any significant way, this is impacting professional jobs. Take the legal world at the moment: if you think we talk about GPT-4 a lot, talk to your lawyer friends. For a number of reasons, the legal sector is ideally placed to take advantage of GPT. But it's not just lawyers. I mean, accountants; finance has already been disrupted.
The one that's really interesting is coders. We've seen a lot of tech firms at least mentioning AI as part of their reason for restructuring. So there's this whole suite of professional jobs. And of course, being a professor in a university or being a teacher is a professional job. So my way into the problem is to say, well, professors do two things, research and education. But I'm focusing on the educational part, and asking, well, how can AI automate or semi-automate the teaching role within a university?
And the way into that is to ask, well, what is a job? One way of viewing a job is that it's a series of tasks, some of which can be automated, some of which can't. So a big part of our project here is to think about which tasks can be automated and which ones can't. Or which tasks are best done by AI, and which tasks are still best done by a human? It's going to take quite a lot of work, because we need to start experimenting with these models. But in this way, we'll start to figure out how humans and AI can work alongside each other for optimal outcomes. So that's our focus of work, and we've built some technological platforms to try this out. We're beginning to run pilots. But my god, what a controversial topic.
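To make that task-decomposition framing concrete, here is a minimal sketch, not from the interview: the task names, automatability scores, and threshold below are all hypothetical placeholders, assuming you simply tag each task in the inventory with a rough score and an allocation rule.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    automatability: float  # hypothetical score: 0.0 = human-only, 1.0 = fully automatable

# Hypothetical decomposition of the teaching component of a professor's job.
teaching_tasks = [
    Task("answer routine factual questions", 0.9),
    Task("grade objective quizzes", 0.8),
    Task("design the assessment strategy", 0.3),
    Task("motivate and mentor students", 0.1),
]

AI_THRESHOLD = 0.5  # illustrative cut-off between AI-led and human-led tasks

for task in teaching_tasks:
    owner = "AI-led" if task.automatability >= AI_THRESHOLD else "human-led"
    print(f"{task.name}: {owner}")
```

As the conversation below makes clear, the hard part in academia is agreeing on the task list and the scores in the first place, not applying the allocation rule.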
Dr. Cristi Ford (15:12):
Yes. And as you-
Dr. David Lefevre (15:13):
Sorry. Yeah, yeah.
Dr. Cristi Ford (15:13):
Oh, go ahead.
Dr. David Lefevre (15:14):
I have to be very careful my colleagues don't get too much wind of what I'm doing.
Dr. Cristi Ford (15:19):
Well, what an interesting perspective. Because I can imagine that even in professional jobs, when you talk about jobs being broken down into tasks, there's probably controversy around which tasks are most essential to be owned by a professor or a law clerk, versus those that we can farm out. It makes me think about a doctor, who comes to work every day to treat patients. They don't want to spend their time doing coding and billing. For me, that seems like an easy distinction. But I'm wondering, as you look at the research you're doing, how are you determining consensus? How are you thinking about prioritization, or which buckets the tasks go into, in terms of something still being a professional piece that a human does versus something AI-generated?
Dr. David Lefevre (16:13):
It's really hard. I mean, at a very high level, you can define the tasks: helping students learn knowledge and skills, then setting assignments, tests of those skills, and then grading those assignments. So at the very top level, you can identify the teaching component of a professor's job. But universities are very strange organizations. We give our professors an enormous amount of autonomy on the whole. Of course there are exceptions. But generally speaking, in a traditional university, faculty get an enormous amount of autonomy to decide how they're going to achieve those high-level tasks.
So one problem we have is that when we talk to individual faculty and try to break down what tasks they do, you get a different answer from every faculty member. In the legal profession or the accounting profession, the traditional way in, you get a pretty good consensus on what the tasks are. But in academia, of course, we're exceptional as always, and it's very, very difficult. So we're now having to approach the problem in a different way. We're not asking, what do you do already? We're asking, what should be done to achieve the optimum? You end up with something slightly different, but that's how we break it down.
And related to your point, I think the absolute key in our professions is that you want to maintain good jobs. This is a big area of concern for everybody, because technology can often over-systemize jobs and create jobs which aren't stimulating and engaging for the people doing them. But in academia, we still need to attract very good people. And in order to attract very good people, we need to create good jobs. So another part of this whole area is, when we have humans and AI working alongside each other, how do we ensure that the human job is fulfilling and engaging and is considered to be a good job? That's a key lens on what we're doing. So a big topic.
Dr. Cristi Ford (18:28):
So, if I can have any influence: as you talk about a good job, what immediately comes to mind is intellectual curiosity, creativity, collaboration, the elements that, as an educator, really drew me to the field. So, making sure that we own those components. And how do we develop resiliency as humans to be able to navigate and adapt to all of the changes that are going to come forward?
Dr. David Lefevre (18:58):
Yeah, I think that's right. Yeah, I mean, it's so early now, we really don't know how the tasks are going to be allocated and what the job will be. But if you are optimistic about it, it's those very human tasks that will remain with the humans. How we communicate with each other, motivate each other, trust each other, help each other. Those very human tasks are actually what gives us satisfaction. So with a bit of luck, we'll end up with excellent jobs, which are very human and satisfying. Let's see. It's too early on the journey to say.
Dr. Cristi Ford (19:37):
We can all hope for that, for sure. We can definitely hope for that. I want to move on and talk a little bit more about your systems thinking and approach. One thing that's different in chatting with you about your approach, compared with what I'm seeing from other educational institutions and schools, is that those institutions are just taking AI and putting it on top of their existing systems and workflows, like the icing layer on a cake.
But as I talk to you, your approach is a little different. You've shared with me that the power of AI comes when you don't do that, right? That you build the digital system first. And so can you share with our listeners a little bit more about how you think about this connection in a systems way?
Dr. David Lefevre (20:25):
Yeah. So I think this is a point that relates to any type of digital transformation. We saw this in the 20-year process of universities adopting internet-based education. The easy step is just to carry on doing exactly what you've always been doing and layer it on top. So we saw that with the LMSs in the early days. In the early days, the LMS was a sort of support area for each class. You put your files in there, rather than giving handouts in class. There's a lot more to it than that, but essentially, it didn't require the universities to change what they did in any way really. It was kind of additive.
And it was the same for running simulations. You carried on teaching your class the way you did, and you had online simulations in certain classes, or the use of the flipped classroom, where you started replacing textbook readings with, say, videos. All these things were kind of additive. They didn't require a professor to fundamentally change what they did. You could argue that the flipped classroom did, and many people do. But for me it was an incremental shift rather than a fundamental shift.
But if you think about what that does, it means that the fundamentals of education don't really change. You increase the quality slightly, you increase the cost slightly, but you're still educating the same number of students. Fees don't fundamentally change very much. You still need the same number of professors. So the big-ticket items don't change. You carry on doing what you're doing, and you're just layering technology on top for quality enhancement. And that's the easiest thing to do. And that's what people are doing now with AI, layering chatbots on top and automated grading and these kinds of things.
But I'll credit the MOOCs for this. The MOOCs, when they emerged, were a pure digital education system; that's one way of thinking about what they did. People could self-register, studying was automated, grading was automated, graduation was automated. And the MOOCs demonstrated that you could radically alter education. You can make arguments around quality, but they really were transformational. They showed that you could educate hundreds of thousands of people for a dollar a time. That's transformational. That changed the model from one professor teaching 100 students at a time to one professor teaching hundreds of thousands.
And the reason that's possible is that it was designed as a digital education system from the beginning. So the more you digitize your operations, the more you move from a human-centered system with technology on top towards a technology system with humans on top, the more the radical benefits increase.
So if you take that into AI, if you design a system for AI rather than for humans, then you can build these radical systems that can educate many, many people, at very high quality, with very, very few professors. So the big benefits of AI, again, come when you have a digital-first system. One thing we're doing is looking forward and asking, well, what would it look like if you designed a system for AI, rather than just evolving what we do already? So it's a very long answer there, Cristi.
Dr. Cristi Ford (24:16):
No, but it's a great answer. And it makes me think about creating true digital transformation: thinking about it not just being an add-on, but creating an ecosystem that was designed from its original intent to do just what you talked about. So no, I really appreciate that. And I'm really excited to see where your research will take you.
I have to share with our listeners, when I was chatting with you about this conversation, I said to you, "How do research and the publication cycle need to change?" This field is growing and evolving so quickly that by the time you get something in for peer review and it comes back, the technology has already changed. And I just wonder how our colleagues in publishing, and how we think about publishing AI-specific research, will need to shift.
Dr. David Lefevre (25:11):
Yeah, I think that's right. And I think what we're seeing is a steady acceleration in these technology curves. It's not going to stop any time soon, I shouldn't think. But yeah, I think AI in the publishing industry, I mean, I think that's going to have quite an impact. So yeah, it'll change things.
Dr. Cristi Ford (25:30):
Yeah. And I think, with the Open Education Resources movement and Creative Commons, we have seen a little bit of that acceleration, where individuals have been able to publish articles and books and that cycle is a little truncated. But I mean, we need publication design sprints of three weeks to be able to keep up with the changes and engagement there. So, I really appreciate your thoughts on that.
I'd like to move us on. As I think about all of the wonderful benefits you've shared, of the research and of the opportunities for professions and professional industries, it wouldn't be right to have this conversation about technology, and specifically AI, without talking about the ethical considerations and the bias that surround it. We've seen over the years the different types of bias that come with AI. So as you move forward in your work, I wonder what your suggestions are for how we can mitigate that, and how the Center is watching and evolving its approach based on the ethics and bias that might be out there, if at all?
Dr. David Lefevre (26:46):
Yeah, I mean this is a topic that thankfully we're all aware of now. I think if you go back three or four years, when these AI systems were first used, people weren't as conscious of the problems with bias. And some awful things happened; in the US parole system, for example, there was significant bias. Thankfully, certainly in our university world, I don't think you can get very far in AI before you have to acknowledge the problem of bias and deal with it. So I think the issue has moved on to a degree.
And I think if you take OpenAI, for example, it does do a very good job of removing bias, and also removing any kind of behavior that we consider abhorrent. But the reason it does that is that it's being trained. So yes, GPT-4 learns from the web, but then it's also trained by humans. And OpenAI, I think, put a lot of work into selecting the thousands of humans that had the worldview we consider acceptable. But OpenAI is not going to be the only AI out there. There are going to be a number of large language models. We know that Google have one, and others will emerge. And we know that there's lots emerging in China.
But these are often commercial organizations. And although it's changing, ethics is often not the number one consideration. If you think of the hierarchy of needs in a corporation, generally speaking, number one is duty to shareholders. And in reality, that means profit.
Dr. Cristi Ford (28:55):
That means profit.
Dr. David Lefevre (28:58):
And so that's kind of the danger. It isn't one AI, it's a whole series of AIs. And these AIs have a worldview, they have a personality. It could be the equivalent of news stations. We have news stations we like and news stations we find completely abhorrent. And I think there'll be a range of AIs like that. I guess the only solution has to be at the policy level and the government regulation level. In the same way that news stations are regulated, maybe we need to have the same thing for AI. But there's going to be AI in some jurisdictions outside of the US, outside of other areas. So yeah, it's going to be a very difficult problem. But I guess the only saving grace is that we're aware of it now, and we've seen the awful things that can happen. So hopefully people being conscious of it will temper the effects to a degree.
Dr. Cristi Ford (30:10):
Yeah, I hope so. In the US context, and I imagine in the European context as well, there is a lot of conversation about this, and having the awareness is fantastic. But what you just said that I was keen to hear is that we all have our favorite news stations. So if I am looking for the same kind of content and media to reinforce my ideals, then I don't get the diversity of thought. I maybe don't get the opportunity to combat fake news. So it'll be really interesting.
I agree with you that at the legislative and governmental levels, we'll have to have some really tough conversations around how we avoid censoring academic freedom and freedom of speech while also making sure that we're putting protections in place to keep folks safe and to make sure that we're not doing harm.
Dr. David Lefevre (31:10):
Yeah, that's really interesting. Maybe that's the way. As individuals, yes, we generally have freedom of speech, but there are limits to that. And maybe AIs will be regulated in the same way. Maybe that's the way forward. There's that whole issue of who's responsible for the AI. Is it us, as users of these big large language models, who are responsible for the output? Is it the intermediaries, all the ecosystem that's coming around it? Or is it the provider, the actual OpenAI? Who's responsible? So I think legally we need to identify who's responsible. But listening to you speak there, Cristi, I think, yeah, you should be able to regulate these AI personalities in the same way we regulate individuals, where we have libel laws and all kinds of things.
Dr. Cristi Ford (31:59):
Yep, yep. Well, I will look forward to chatting with you about that next year. But I want to shift from talking about your research to talking about your exciting venture, and make sure I say it correctly. I wanted to talk with you more about your company. Is it Tutello?
Dr. David Lefevre (32:15):
Tutello. Yeah.
Dr. Cristi Ford (32:19):
Tutello. From what I understand, it's an AI- and human-powered real-time tutoring and support platform. Can you share with us how this came about, and give us a little bit more information about where this is going and how you're engaging colleges and institutions with it?
Dr. David Lefevre (32:36):
Yeah. So I mean, this dates back. I think what we want to do in technology is solve the big problems. So if you're going to use technology in education, what we're really interested in is moving the dial. The internet made education more accessible and available on demand. But from a pedagogy point of view, what it didn't solve was the tutoring issue. We all know, thanks to Benjamin Bloom, who wrote a paper called "The 2 Sigma Problem," that people who have personal tutoring perform significantly better than those who don't. In Bloom's case, it was two standard deviations to the right of the mean. And I think we all know this intuitively: if we have one-to-one tutoring, it's going to help us learn much more easily.
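A quick aside on the arithmetic behind that finding (an editorial illustration, not part of the interview): if the tutored group's average sits two standard deviations above the control mean, then under a normal model the average tutored student outperforms roughly 98% of conventionally taught students, since

\[
\Phi(2) = P(Z \le 2) \approx 0.977,
\]

where \(\Phi\) is the standard normal cumulative distribution function.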
So this is a known issue, and I think lots of people have been trying for a long time to solve this problem of how to use technology to deliver personalized tutoring. And it's something we've been working on for years with different AIs, previous AIs. But I think one thing that the large language models do is make that possible for the first time. So we've built a system which is a local AI working together with large language models to help students get support on demand. So that's the fundamental.
But I think what we've learned from these automated systems is that you always want an escape key as well. There's significant danger in being trapped within one of these information systems without being able to break out to human support. As institutions, I think that's very important.
So the system also builds a network of human tutors. It connects the AI support, the local AI and the large language models, with a network of human tutors. And then students can get help on demand. And we see this as part of the future.
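As a rough illustration of that escalation pattern, here is a hypothetical sketch, not Tutello's actual implementation: the function names, confidence score, and threshold are invented, assuming the AI answers first and a human tutor is reachable either on request or when the AI is unsure.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # hypothetical self-assessed confidence, 0.0 to 1.0

def ai_tutor_respond(question: str) -> Answer:
    # Stand-in for the real pipeline: a local AI working with a large
    # language model would generate this answer in practice.
    return Answer(text=f"Here's one way to think about: {question}", confidence=0.62)

def route_question(question: str, wants_human: bool = False) -> str:
    """AI-first triage with a human 'escape key'."""
    if wants_human:
        return "Escalated to a human tutor at the student's request."
    answer = ai_tutor_respond(question)
    if answer.confidence < 0.5:  # illustrative escalation threshold
        return "AI is unsure; escalated to a human tutor."
    return answer.text

print(route_question("What is a confidence interval?"))
print(route_question("Can you review my thesis draft?", wants_human=True))
```

The design question the pilots probe is exactly where that threshold should sit and how often students press the escape key.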
So, we've launched this product called Tutello. We're piloting it in June with different universities in the UK. The initial results are astonishing, as you'd expect. And it's really interesting from a research perspective too, because there's this whole idea of humans working alongside AI, the future of work; well, this is a real test case. It'll be really interesting to see how it impacts the human job, and really interesting to see whether students use the humans. We expect they will, but we don't know how often and in what ways. And so through this real-life experiment, we'll start to see which human tasks students value, and when the AI does a good job. What human tasks do the tutors want to do? And hopefully we'll begin to see an optimal way of humans and AI working together. So it's a new commercial venture, but at the same time, it's a fascinating topic and a really exciting one for me.
Dr. Cristi Ford (36:03):
So you took the words out of my mouth; this is fascinating to me. I think about the opportunities here, the decades-long opportunities, where we have had institutions go into tutoring consortiums and work with tutoring vendors to try to solve this challenge. But one of the things I heard you say, to use an IT analogy, is that working in partnership with the AI components, it's almost like you have tier-one versus tier-two support. So immediate support for a student at 1:00 AM who needs some additional scaffolding for their learning, versus kicking what remains over to the professor or to the expert in the field. I'm really, really intrigued by this. And I really appreciate the focus on the learning and the student-centered approach to this. So please keep us informed about what you find in your pilots.
Dr. David Lefevre (37:06):
Yeah, it's fascinating. And we ran the first pilots a couple of years ago, and students spent most of their time trying to navigate away from the AI and get to the humans. Like we do, when you see a-
Dr. Cristi Ford (37:19):
Right. Like a phone call.
Dr. David Lefevre (37:22):
Yeah. You spend most of your time trying to game it to get through to the human.
Dr. Cristi Ford (37:26):
Pressing zero.
Dr. David Lefevre (37:31):
But with the latest AIs, we have no idea how often students will want the humans. It will be really interesting to find out, but we suspect it will be not that often. But let's see. Yeah, let's see.
Dr. Cristi Ford (37:44):
As I listened to you talk about this, I think about some work I did back in 2015 using AI to support and scaffold feedback for students. And it was interesting to see even the tone of the comments and the sentiment. Was the person coached more in a coaching style or as a peer? We even thought about that component. So it will be interesting to hear how you build in elements of compassion, or wait time so students can think about the question that's being asked and then respond. So yeah, if you ever need someone to be on your advisory board, this is exciting to me.
Dr. David Lefevre (38:28):
Yeah, we should talk about that. Yeah.
Dr. Cristi Ford (38:30):
This is really exciting. Really.
Dr. David Lefevre (38:37):
Because I do think, at the heart of it, the trust part is an enormous part of this. The students need to build trust with the AI.
Dr. Cristi Ford (38:43):
They do.
Dr. David Lefevre (38:45):
That's built through the factors that you've just mentioned. And they also need to build trust with the humans as well, because that's not automatic either. It's very hard to suddenly talk to a human that you have no direct connection with. So these trust issues are a really significant part of it.
And another really interesting thing that we've observed is the way people interact with AI: they trust very quickly. If you look at the way people chat with these AI counselors, they trust very quickly. They go from zero to complete trust within minutes. They drop their guard and open themselves up to these AIs very, very quickly. Whereas with a human, you have to go through the gears and work up trust. Very, very different. So you have to keep these two sides separate as well. For example, you can't take the transcript from a human and an AI and pass it to someone else, because there are significant privacy concerns there. So this human-computer interaction aspect is also a fascinating part of the topic.
Dr. Cristi Ford (39:53):
David, this has been such a wonderful and productive conversation. I know that our listeners will really appreciate how you've shared the research and your entrepreneurial ventures. I want to end with a question for you on behalf of our listeners. As you think about the future of AI, and again, we can't predict, so I will put that caveat there, but as you're thinking about the work that you're doing, what advice with respect to AI do you have for schools, institutes, universities, even primary schools, that you would offer to our listeners today?
Dr. David Lefevre (40:31):
Yeah. So I think perhaps the best piece of advice I can offer is that this is going to be a long journey. I think people are feeling a lot of pressure right now to form a strategy around GPT-4 or to engage with Bard. This is going to be a long journey. This is going to be a 20-year journey. Every few months, there's going to be another initiative that you absolutely have to do right now.
But I think the key is to have a clear sense of who you are and what you do and what your mission is, and be careful not to leap on every single trend, and just focus on those tools and technologies that are going to impact the things you really care about.
So I think we're in the middle of perhaps the biggest technology hype cycle that there's ever been. And there's pressure on everyone to adopt these technologies right now, but it's going to be a long journey. And so I think really, truly look at them with a cold towel on your head and say, how does this impact the things that I really care about? Because otherwise, you'll get swamped in initiatives and confusion. And to close, Cristi, I appreciate what you say. I think this conversation we've just had is going to seem so quaint in a couple of months.
Dr. Cristi Ford (41:54):
Yes. Colleagues, you have heard it here first. David, I really appreciate the time and the conversation. I'm leaving this conversation inspired and invigorated, and just want to thank you for spending some time with us on Teach and Learn.
Dr. David Lefevre (42:12):
Yes, it's a pleasure. Please invite me back.
Dr. Cristi Ford (42:15):
Sounds good. Take care.
You've been listening to Teach and Learn, a podcast for curious educators. This episode was produced by D2L, a global learning innovation company helping organizations reshape the future of education and work. To learn more about our solutions for both K-20 and corporate institutions, please visit www.d2l.com. You can also find us on LinkedIn, Twitter, and Instagram. And remember to hit that subscribe button so you can stay up to date with all new episodes. Thanks for joining us. And until next time, school's out.
Speakers
Dr. David Lefevre
Professor
Dr. David Lefevre is a Professor of Practice in the Management and Entrepreneurship Group where he explores the applications of AI to education.
He formed the Imperial College Business School's Edtech Lab in 2004 to explore the use of digital technology in higher education. The Lab now delivers more than 200 online modules each academic year and has been the recipient of a number of awards. In 2022, David stepped down from leading this team and is now focused on AI strategy, adoption and infrastructure development.
David is actively involved in tech-transfer and entrepreneurship. He is also an active investor in EdTech ventures and an "Expert in Residence" at the Imperial Enterprise Lab.
Dr. Cristi Ford
Vice President of Academic Affairs, D2L
Dr. Cristi Ford serves as the Vice President of Academic Affairs at D2L. She brings more than 20 years of cumulative experience in higher education, secondary education, project management, program evaluation, training and student services to her role. Dr. Ford holds a PhD in Educational Leadership from the University of Missouri-Columbia and undergraduate and graduate degrees in the field of Psychology from Hampton University and the University of Baltimore, respectively.