Dr. Cristi Ford (00:00):
Welcome to Teach and Learn, a podcast for curious educators, brought to you by D2L. I’m your host, Dr. Cristi Ford, VP of academic affairs at D2L. Every two weeks I get candid with some of the sharpest minds in the K-20 space. We break down trending educational topics, discuss teaching strategies, and have frank conversations about the issues plaguing our schools and higher education institutions today. Whether it’s ed tech, personalized learning, virtual classrooms, or diversity and inclusion, we’re going to cover it all. Sharpen your pencils. Class is about to begin.
So welcome back listeners. As we wrap up season one, I wanted to revisit what has arguably been one of the hottest topics over the last few months, artificial intelligence, and its impact on the future of education. On this show, we’ve had professors discussing the pros and cons of generative AI and experts breaking down the tech behind tools like ChatGPT.
And today I’m excited to be discussing different applications of AI in schools and universities, with a focus on strategy, frameworks, and how it can transform business education.
My guest today is a truly entrepreneurial academic. Listeners, it’s a pleasure for me to introduce to you Dr. David Lefevre.
Dr. Lefevre is a professor of practice in the management and entrepreneurship group where he explores the applications of AI to education. He formed the Imperial College Business School’s Edtech Lab in 2004 to explore the use of digital technology in higher education. The lab now delivers more than 200 online modules each academic year and has been the recipient of numerous awards. In 2022, David stepped down from leading this team and is now focused on AI strategy, adoption, and infrastructure development. David is actively involved in tech transfer and entrepreneurship. He is also an active investor in EdTech Ventures, and an expert in residence at the Imperial Enterprise Lab. David, thank you so much for joining us here today.
Dr. David Lefevre (02:11):
It’s a pleasure, Cristi, and thank you for that comprehensive introduction.
Dr. Cristi Ford (02:14):
Absolutely. I will just tell listeners, I have been really excited about having you join this podcast, as I’ve known of you and known and watched your work for over a decade. And I really felt inspired by you, meeting you back in 2010, 2011 on Penn State’s campus. And so, this is really a wonderful conversation, wonderful time for me to be able to reconnect with you.
Dr. David Lefevre (02:38):
Thanks Cristi. I’m flattered. Yeah, look forward to it.
Dr. Cristi Ford (02:42):
So, let’s just jump right in here. We were really excited to have you here. I want to talk a little bit about your current research. But before we do, I think it’s worth highlighting that in the field of AI, that research that came out two weeks ago is already old, and that it moves at such a rapid pace. So to all of our listeners, please know our discussion is accurate at the time of this recording, but we can’t predict how fast these shifts will happen.
So, I’m going to jump in and just get a little background from you, David. So you formed the first ed tech lab at Imperial only to step away from that and move into AI research side of things. Can you give us a little understanding of what the impetus was or rationale behind that choice?
Dr. David Lefevre (03:28):
Yeah. So I mean I’ve been involved in digital education for a very long time. I started at Imperial College around 2001, 2002. And for those older listeners, that was really kind of the birth of internet-based education. I think that was when you could see clearly that internet-based education was going to be possible, delivering high quality education to the world through the internet, through the World Wide Web. And really that’s been a sort of 20 year journey. So that was my sort of career until just after the pandemic. That’s what I did, and my job was really encouraging others to do it.
But about 2017, ’18, I started working with our maths department at Imperial College, and I became very involved in AI. I saw clearly that this was going to be another major technological wave. So around 2018 or so, I decided I wanted to completely commit to AI and follow the next technological wave. Then the pandemic hit, so I had to delay my plans. But what really happened during the pandemic is that within the course of a few months, everybody could do the type of innovations that I was doing before, internet-based innovations.
So, for me, running a sort of education innovation team, there weren’t many opportunities to progress in purely internet-based education anymore. I could see that very clearly. But luckily for me, there’s another technological wave coming. So yeah, last summer I stepped down from all my internet-based education roles, and I’ve been focusing squarely on AI. And now of course, thanks to ChatGPT, I’m in vogue and extremely busy already. Yeah.
Dr. Cristi Ford (05:34):
Fair enough. And I can attest to that. We reached out to have this conversation a couple of months ago, and it was really exciting for me. You have been an academic disruptor for such a long time. And so it’s good to hear you share your journey with our listeners. But I want to have you tell us a little bit more about the Center for Digital Transformation. Can you give our listeners a little understanding of how you engage around this work with universities?
Dr. David Lefevre (06:03):
Yeah. Well I think what we’ve been trying to do, so my role in this is investigating digital transformation relating to AI and education. So I’m quite specific. The wider team is engaged in a much larger agenda across different sectors. But my particular focus is on AI and education, and really trying to see through all the noise and get a clear view on how AI will be able to disrupt education.
And so it’s a very difficult task because just when we think we’ve gained some purchase on the problem, another avenue of exploration emerges. And so we are steadily sort of working through the possibilities, and the picture is becoming clearer. But this is an enormous topic, and I think your disclaimer is very valid. If you try to make predictions based on what you see today, you’re going to be out of date very quickly.
So what we’re trying to do is see through this. We imagine this will be a 20 year life cycle, maybe a bit quicker. Some people are arguing it’ll be quicker. But the impact of the worldwide web on our world, that was a sort of 20 year cycle to maturity. My current guess is that it will be the same for AI. It might be much quicker, because we’ve never seen the level of interest there is in what OpenAI are doing. So we might see an accelerated hype cycle. But however it happens, it’ll pan out over quite a number of years. So we’re trying to see beyond the current noise and take a longer term view.
Dr. Cristi Ford (07:54):
Yeah. As you talk, I just wonder specifically in your guidance, and working with institutions, colleges, and universities, and schools to help them get grips with AI, what are the kinds of things that you’re hearing? How are you engaging with these schools and universities, and how are you trying to prepare them and help them think about this from a strategy perspective?
Dr. David Lefevre (08:21):
Yeah. I mean I think there’s a lot of astonishment around GPT-4 in particular. Those who have been following the OpenAI project, I think all of us saw a step change with GPT-4, and in particular around the release of ChatGPT. And I think our university community is currently generally astonished by the capabilities of this technology. Everyone has now jumped into the AI world, I think, and it’s a mind-boggling world of opportunity because you can see applications right across your operations. So, I think at the moment what we’re seeing is just lots of exploration, and the community is trying to figure out the impact of this.
And in addition, there are extremely practical problems, particularly around assessment. I think AI evangelists are trying to emphasize the benefits for course creation and automation, all these kinds of things. But there’s a very practical problem in how we are going to assess students. And there are lots of half measures, but we’re going to have to change the way we assess students, or at least design a system that assumes students are going to use these technologies.
So I guess as a community now, all the conversations are around these large language models, and particularly GPT-4. I think the other models are yet to gain purchase. There’s an enormous amount of discussion, an enormous amount of perhaps hyperbolic concern, enjoyment at the opportunities, enjoyment and astonishment in playing with the tool. Lots of interest groups are desperately trying to get to grips with assessment because students are using it right now. But I think, I don’t know about you, Cristi, but for me, I’d say I talk about GPT-4 maybe, I don’t know, nine or 10 times a day.
Dr. Cristi Ford (10:24):
Yes. Every conference I go to, it’s the topic of conversation for sure.
Dr. David Lefevre (10:29):
Yeah. And I think everyone’s trying to figure it out. But from our perspective, from a strategy perspective, yes, we have to get a grip on the large language models. But there’s going to be a whole series of AI systems that are going to hit us over the coming years. So this is the first of a number, I imagine. And we’re going to have to form a view on how we engage with this as a community.
Dr. Cristi Ford (10:57):
Yeah. As you talked about your engagements with schools and universities, and just the astonishment and just really building capacity, I think that we have seen from some institutions, originally wanting to just bury their head in the sand, and then realizing, wait a minute, this is going to transform the way that everything works in terms of how we engage with students. And I think what you mentioned about the assessment piece is so critical. And how do we help students to be able to think about the benefits, and how do we help faculty think about this as a superpower, as opposed to something that’s going to cannibalize the higher education system.
And so, I want to turn and talk a little bit about your research. In preparation for our conversation today, I loved hearing from Heather McGowan, who does a lot of work on the future of work. And she had an interesting perspective in a piece that I just listened to recently, where she said that the way that we value talent needs to shift. That we are stuck between an information era and an augmented era. And so, as institutions, as employers, we are still focused on acquiring knowledge and skills, which lead to codifying and transferring skills, as opposed to developing learning agility and resiliency to navigate through jobs. And so, I wonder what your thoughts are about this, and would love to hear how your work and thinking about the transformation of different industries and higher education will be impacted.
Dr. David Lefevre (12:31):
Yeah. So, I think there’s lots of ways to try and investigate the impact of AI on society. There’s lots of ways in, there are lots of lenses you can use to view the problem. And one of the ways in which I’m viewing it, and my colleagues are viewing it, is that it’s a continuation of the impact of technology on work. So, we have a whole series of technologies going back a couple of hundred years, and all of them have automated work or impacted work in some way.
The difference with AI is that for the first time in any significant way, this is impacting professional jobs. So if you are in the legal world at the moment, I mean, if you think we talk about GPT-4 a lot, talk to your lawyer friends. For a number of reasons, the legal sector is ideally placed to take advantage of GPT. But it’s not just lawyers. I mean, accountants too, and finance has already been disrupted.
The one that’s really interesting is coders. So, we’ve seen a lot of tech firms at least mentioning AI being part of their reason for restructuring. So, there’s this whole sort of suite of professional jobs. And of course, being a professor in a university or being a teacher is a professional job. So, my way into the problem is to say, well, professors do two things, research and education. But I’m focusing on the educational part, and saying, well, how can AI automate or semi-automate the teaching role within a university?
And the way into that is to say, well, what is a job? One way of viewing a job is that it’s a series of tasks, some of which can be automated, some of which can’t. And so, a big part of our sort of project here is to think about, well, which tasks can be automated and which ones can’t? Or which tasks are best done by AI and which tasks are still best done by a human? And it’s going to take quite a lot of work because we need to start experimenting with these models. But in this way, we’ll start to figure out how humans and AI can work alongside each other for optimal outcomes. So that’s our kind of focus of work, and we’ve built some technological platforms to try this out. We’re beginning to run pilots. But my god, what a controversial topic.
Dr. Cristi Ford (15:12):
Yes. And as you-
Dr. David Lefevre (15:13):
Sorry. Yeah, yeah.
Dr. Cristi Ford (15:13):
Oh, go ahead.
Dr. David Lefevre (15:14):
I have to be very careful my colleagues don’t get too much wind of what I’m doing.
Dr. Cristi Ford (15:19):
Well, and what an interesting perspective. Because I can imagine that even in professional jobs, when you talk about jobs being broken down into tasks, there’s probably even controversy around which tasks are most essential to be owned by a professor or a law clerk versus those that we can automate. It makes me think about a doctor, who comes to work every day to treat patients. They don’t want to spend their time doing coding and billing. For me, that seems like an easy distinction. But I’m wondering, as you’re looking at the research you’re doing, how are you determining consensus? How are you thinking about prioritization, or the critical nature of which buckets the tasks go into, in terms of it still being a professional piece that a human does versus AI generated?
Dr. David Lefevre (16:13):
It’s really hard. I mean, at a very high level, you can define the tasks. The tasks are helping students learn knowledge and skills, then setting assignments that test those skills, and then grading those assignments. So at a very top level, the teaching component of a professor’s job, you can identify. Universities are very strange organizations. We give our professors an enormous amount of autonomy on the whole. Of course there are exceptions. But generally speaking, in a traditional university, faculty get an enormous amount of autonomy to decide how they’re going to achieve those high-level tasks.
So, one problem we have is that when we talk to individual faculty and try to break down what tasks they do, you get a different answer from every faculty member. The traditional way in, in the legal profession or in the accounting profession, you get a pretty good consensus on what the tasks are. But in academia, of course, we’re exceptional as always, and it’s very, very difficult. So we’re now having to approach the problem in a different way. We’re not asking, what do you do already? We’re trying to ask, what should be done to achieve the optimum? You end up with something slightly different. But that’s how we’re breaking it down.
And related to your point, I think the absolute key in our professions is that you want to maintain good jobs. So this is a big area of concern for everybody, because it’s the case that technology can often over-systemize jobs, and create jobs which aren’t stimulating and engaging for the people doing them. But in academia, we still need to attract very good people. And in order to attract very good people, we need to create good jobs. So another part of this whole area is, when we have humans and AI working alongside each other, how do we ensure that the human job is fulfilling and engaging and is considered to be a good job? And that’s a key lens on what we’re doing. So a big topic.
Dr. Cristi Ford (18:28):
So, if I can have any influence, as you talk about a good job, what immediately comes to mind is intellectual curiosity, creativity, collaboration, the elements that as an educator really drew me to the field. And so, making sure that we own those components. And how do we develop resiliency as humans to be able to navigate and adapt to all of the changes that are going to come forward?
Dr. David Lefevre (18:58):
Yeah, I think that’s right. Yeah, I mean, it’s so early now, we really don’t know how the tasks are going to be allocated and what the job will be. But if you are optimistic about it, it’s those very human tasks that will remain with the humans. How we communicate with each other, motivate each other, trust each other, help each other. Those very human tasks are actually what gives us satisfaction. So with a bit of luck, we’ll end up with excellent jobs, which are very human and satisfying. Let’s see. It’s too early on the journey to say.
Dr. Cristi Ford (19:37):
We can all hope for that, for sure. We can definitely hope for that. I want to move on and talk a little bit more about your systems thinking and approach. I think that one thing that’s different in chatting with you about your approach, from what I’m seeing from other educational institutions and schools, is that other institutions are just taking AI and putting it on top, like the icing layer of a cake, on their existing systems and workflows.
But as I talk to you, your approach is a little different. You’ve shared with me that the power of AI comes when you don’t do that, right? That you build the digital system first. And so can you share with our listeners a little bit more about how you think about this connection in the systems way?
Dr. David Lefevre (20:25):
Yeah. So I think this is a point that relates to actually any type of digital transformation. We saw this in the 20 year process of universities adopting internet-based education. The easy step is just to carry on doing exactly what you’ve always been doing and layer it on top. We saw that with the LMSs in the early days. The LMS was a sort of support area for each class, and you put your files in there rather than giving handouts in class, generally. And there’s a lot more to it than that. But essentially, it didn’t require the universities to change what they did in any way, really. It was kind of additive.
And it was the same for running simulations. You carried on teaching your class the way you did, and you had online simulations in certain classes, or the use of the flipped classroom where you started replacing textbook readings with say videos. All these things were kind of additive. They didn’t require a professor to fundamentally change what they did. You could argue that flipped classroom did, and many people do. But for me it was an incremental shift rather than a fundamental shift.
But if you think about what that does is that it means that the fundamentals of education don’t really change. You increase the quality slightly, you increase the cost slightly, but you’re still educating the same number of students. Fees don’t fundamentally change very much. You still need the same number of professors. So the big ticket items don’t change. So you carry on doing what you’re doing, and you’re just kind of layering technology on top for quality enhancement. And that’s kind of the easiest thing to do. And that’s what people are doing now with AI, sort of layering chatbots on top and automated grading and these kind of things.
But I think I’ll credit the MOOCs for this. The MOOCs, when they emerged, what they really did was they were a pure digital education system. That’s one way of thinking about what they did. People could self-register, the studying was automated, grading was automated, graduation was automated. And once the MOOCs demonstrated that, you could radically alter education. You can make arguments around quality, but they really were transformational. They showed that you could educate hundreds of thousands of people for a dollar a time. That’s transformational. That changed the model from one professor teaching 100 students at a time to one professor teaching hundreds of thousands.
And the reason that’s possible is it was designed as sort of a digital education system from the beginning. So the more you digitize your operations, the more you move from a human-centered system with technology on top towards a technology system with humans on top, the more the radical benefits increase.
So if you take that into AI, if you design a system for AI rather than for humans, then you can build these radical systems that can educate many, many people, at very high quality, with very, very few professors. So the big benefits of AI, again, come in when you have a digital first system. So one thing we’re doing is looking forward and saying, well, what would it look like if you designed a system for AI, rather than just evolving what we do already? So it’s a very long answer there, Cristi.
Dr. Cristi Ford (24:16):
No, but it’s a great answer. And it makes me think about really creating true digital transformation. And thinking about, not it just being an add-on, but creating an ecosystem that was designed from its original intent to do just what you talked about. So no, I really appreciate that. And I think that I’m really excited to see where your research will take you.
I have to share with our listeners, when I was chatting with you about this conversation, I said to you, “How does research and the publication cycle need to change?” This field is growing and evolving so quickly that by the time you get something in for peer review and it comes back, the technology has already changed. And I just wonder how our colleagues in publishing, and how we as a community think about publishing AI specific research, will need to shift.
Dr. David Lefevre (25:11):
Yeah, I think that’s right. And I think what we’re seeing is a steady acceleration in these technology curves. It’s not going to stop any time soon, I shouldn’t think. But yeah, I think AI in the publishing industry, I mean, I think that’s going to have quite an impact. So yeah, it’ll change things.
Dr. Cristi Ford (25:30):
Yeah. And I think what I’ve noticed with the Open Educational Resources movement and Creative Commons, we have seen a little bit of that acceleration, where individuals have been able to publish articles and books, and that cycle is a little truncated. But I mean, we need to be able to have publication design sprints of three weeks to be able to keep up with the changes and engagement there. So no, I really appreciate your thoughts on that.
I’d like to move us on and talk a little bit, as I think about all of the wonderful benefits that you’ve shared with me, the research, the opportunities for professions and professional industries. It wouldn’t be right, in a conversation about technology and specifically AI, if we didn’t talk about the ethical considerations and the bias that surround it. And we’ve seen over the years the different types of bias that come with AI. And so as you’re moving forward in your work, I just wonder, what are your suggestions for how we can mitigate that, and how is the Center watching for and evolving its approach based on the ethics and bias that might be out there, if at all?
Dr. David Lefevre (26:46):
Yeah, I mean this is a topic that thankfully we’re all aware of now. I think if you go back three or four years, when these AI systems were first used, people weren’t as conscious of the problems with bias. And some awful things happened. For example, in the US parole system, there was significant bias. Thankfully, I think, certainly in our university world, I don’t think you can get very far in AI before you have to acknowledge the problem of bias and deal with it. So I think the issue has moved on to a degree.
And I think if you take OpenAI, for example, it does do a very good job of removing bias, and also removing any kind of behavior that we consider abhorrent. But the reason it does that is that it’s been trained. So yes, GPT-4 learns from the web, but then it’s also trained by humans. And OpenAI, I think, put a lot of work into selecting the thousands of humans that had the worldview we consider acceptable. But OpenAI is not going to be the only AI out there. There’s going to be a number of large language models. We know that Google have one, and others will emerge. And we know that there are lots more emerging.
But these are often commercial organizations. And although it’s changing, ethics is often not the number one consideration. If you think of the hierarchy of needs in a corporation, generally speaking, number one is duty to shareholders. And in reality, that means profit.
Dr. Cristi Ford (28:55):
That means profit.
Dr. David Lefevre (28:58):
And so that’s kind of the danger. It isn’t one AI, it’s a whole series of AIs. And these AIs have a worldview, they have a personality. And it could be the equivalent of news stations. We have news stations we like and news stations we find completely abhorrent. And I think there’ll be a range of AIs like that. I guess the only solution has to be at the policy level and the government regulation level. In the same way that news stations are regulated, maybe we need to have the same thing for AI. But there’s going to be AI in some jurisdictions outside of the US, outside of other areas. So yeah, it’s going to be a very difficult problem. But I guess the only saving grace is that we’re aware of it now, and we’ve seen the awful things that can happen. So hopefully people being conscious of it will temper the effects to a degree.
Dr. Cristi Ford (30:10):
Yeah, I hope so. In the US context, there is a lot of conversation, and I imagine in the European context as well. Having the awareness is fantastic. But what you just said that I was keen to hear is when you talked about how we all have our favorite news stations. So if I am looking for the same kind of content and media to reinforce my ideals, then I don’t get the diversity of thought. I don’t maybe get the opportunity to combat fake news. And so it’ll be really interesting.
I agree with you that legislatively and governmental levels, we’ll have to really have some really tough conversations around how do we not censor academic freedom and freedom of speech, but we also make sure that we’re putting protections in place to keep folks safe and to make sure that we’re not doing harm.
Dr. David Lefevre (31:10):
Yeah, that’s really interesting. Maybe that’s the way. As individuals, yes, we generally have freedom of speech, but there are limits to that. And maybe AIs will be regulated in the same way. Maybe that’s the way forward. There’s that whole issue of who’s responsible for the AI. Is it us as users of these big large language models who are responsible for the output? Is it the intermediaries, all the ecosystem that’s coming around it? Or is it the actual OpenAI? Who’s responsible? So, I think legally we need to identify who’s responsible. But listening to you speak there, Cristi, I think, yeah, you should be able to regulate these AI personalities in the same way we regulate, and we have libel laws and all kinds of things.
Dr. Cristi Ford (31:59):
Yep, yep. Well, I will look forward to chat with you about that next year. But I want to shift from talking about your research to really talking about the exciting venture, and make sure I say it correctly, I wanted to talk with you more about your company. Is it Tutello?
Dr. David Lefevre (32:15):
Tutello.
Dr. Cristi Ford (32:19):
Tutello. From what I understand, it’s an AI plus human real-time tutoring and support platform. Can you share with us how this came about, and give us a little bit more information about where this is going and how you’re engaging colleges and institutions with it?
Dr. David Lefevre (32:36):
Yeah. So I mean, this dates back. I think what we want to do in technology is solve the big problems. So if you’re going to use technology in education, what we’re really interested in is moving the dial. And the internet made education more accessible and available on demand. But from a pedagogy point of view, what it didn’t solve was the tutoring issue. We all know, thanks to Benjamin Bloom, who wrote a paper called “The 2 Sigma Problem,” that people who have personal tutoring perform significantly better than those who don’t. In Bloom’s case, it was two standard deviations to the right of the mean. And I think we all know this intuitively, that if we have one-to-one tutoring, then it’s going to help us learn much more easily.
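To give a sense of scale for Bloom’s finding: under a normal distribution, a student scoring two standard deviations above the mean outperforms roughly 98% of the comparison group. A quick illustrative check using only Python’s standard library (the 2.0 effect size is Bloom’s reported figure, not data from this conversation):

```python
from statistics import NormalDist

# Bloom's reported effect: tutored students scored about two
# standard deviations above the mean of conventionally taught students.
effect_sigma = 2.0

# Fraction of the comparison group that a student at +2 sigma
# would outperform, i.e. the standard normal CDF evaluated at 2.0.
percentile = NormalDist().cdf(effect_sigma)
print(f"A +2 sigma student outperforms {percentile:.1%} of peers")
# prints roughly 97.7%
```

This is why the tutoring problem is framed as so valuable to solve: the effect size is far larger than most educational interventions.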
So this is a known issue, and I think lots of people have been trying to solve this problem for a long time, how to use technology to deliver personalized tutoring. And it’s something we’ve been working on for years with different AIs, previous AIs. But one thing that the large language models do is make that possible for the first time. So we’ve built a system which is a local AI working together with large language models to help students get support on demand. So that’s the foundation.
But I think what we’ve learned from these automated systems is you always want an escape key as well. There’s significant danger in being trapped within one of these information systems without being able to break through to human support. I think as institutions, that’s very important.
So the system also builds in a network of human tutors. It connects the AI support, the local AI and OpenAI, with a network of human tutors. And then students can get help sort of on demand. And we see this as part of the future.
So, we’ve launched this product called Tutello. We’re piloting it in June with different universities in the UK. The initial results are astonishing, as you’d expect. And it’s really interesting from a research perspective too. Because there’s this whole idea of humans working alongside AI, the future of work, well, this is a real test case. And it’ll be really interesting to see how it impacts the human job, and really interesting to see whether students use the humans. We expect they will, but we don’t know how often and in what ways. And so through this real life experiment, we’ll start to see which human tasks students value, when the AI does a good job, and which tasks the human tutors want to do. And hopefully we’ll begin to see an optimal way of AI and human tutors working together. So it’s a new commercial venture, but at the same time, it’s a fascinating topic and a really exciting one for me.
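The design described here, AI support first with an always-available “escape key” to a human tutor, can be sketched as a simple routing rule. This is an illustrative sketch only, not Tutello’s actual implementation; the function names and the confidence threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # hypothetical self-reported confidence, 0..1

def ai_answer(question: str) -> Answer:
    # Placeholder for a call to a local AI / large language model.
    return Answer(text=f"AI response to: {question}", confidence=0.9)

def route(question: str, user_wants_human: bool,
          min_confidence: float = 0.7) -> str:
    """Route to the AI first, but always keep an escape key to a human."""
    if user_wants_human:
        return "human_tutor"      # student explicitly asked for a person
    answer = ai_answer(question)
    if answer.confidence < min_confidence:
        return "human_tutor"      # escalate low-confidence answers
    return "ai"

print(route("What is a standard deviation?", user_wants_human=False))
# -> "ai"
print(route("I'm stuck, can I talk to someone?", user_wants_human=True))
# -> "human_tutor"
```

The interesting research questions in the conversation map directly onto the two branches: how often students hit the explicit escape key, and how often the system should escalate on its own.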
Dr. Cristi Ford (36:03):
So you took the words out of my mouth, this is fascinating to me. I think about the opportunities here, and think about decades-long opportunities where we have had institutions go into tutoring consortiums to try to solve this challenge and work with other tutoring vendors. But one of the things that I heard you say, and I’m going to use an IT analogy, it’s almost like, working in partnership with the AI components, you have tier one versus tier two support. So immediate support for a student at 1:00 AM who needs to be able to get some additional scaffolding for their learning, versus being able to kick over what goes on to the professor or to the expert in the field. I’m really, really intrigued by this. And I really appreciate the focus on the learning and the student-centered approach to this. So please keep us informed about what you find in your pilots.
Dr. David Lefevre (37:06):
Yeah, it’s fascinating. And we ran the first pilots a couple of years ago, and students spent most of their time trying to navigate away from the AI and get to the humans. Like we do, when you see a-
Dr. Cristi Ford (37:19):
Right. Like a phone call.
Dr. David Lefevre (37:22):
Yeah. You spend most of your time trying to game it to get through to the human.
Dr. David Lefevre (37:31):
But with the latest AIs, we have no idea how often students will want the humans. It will be really interesting to find out, but we suspect it will be not that often. But let’s see. Yeah, let’s see.
Dr. Cristi Ford (37:44):
As I listened to you talk about this, I even think about some work I did back in 2015 using AI to support and scaffold feedback for students. And it was interesting to see even the tone of the comments and the sentiments. The way a person was coached, whether it was more of a coaching style versus a peer, we even thought about that component. So it will be interesting to hear how you build in elements of compassion or wait time so students can think about the question that’s being asked, then be able to respond. So yeah, if you ever need someone to just be on your advisory board, this is exciting to me.
Dr. David Lefevre (38:28):
Yeah, we should talk about that. Yeah.
Dr. Cristi Ford (38:30):
This is really exciting. Really.
Dr. David Lefevre (38:37):
Because I do think at the heart of it, the trust part is an enormous part of this. The students need to build trust with the AI.
Dr. David Lefevre (38:45):
That’s built through these factors that you’ve just mentioned. And they also need to build trust with the humans as well, because that’s not automatic either. It’s very hard to suddenly talk to a human that you have no direct connection with. So these trust issues are a really significant part of it.
And another really interesting thing that we’ve observed is the way people interact with AI: they trust very quickly. If you look at the way people chat with these AI counselors, they go from zero to complete trust within minutes. They drop their guard and they open themselves up to these AIs very, very quickly. Whereas with a human, you have to go through the gears and work up trust. Very, very different. So you have to keep these two sides separate as well. For example, you can’t take the transcript from a human and an AI and pass it to someone else, because there are significant privacy concerns there. So this human-computer interaction aspect is also a fascinating part of the topic.
Dr. Cristi Ford (39:53):
David, this has been such a wonderful and productive conversation. I know that our listeners will really appreciate how you’ve shared the research and shared your entrepreneurial ventures. I want to end with a question for you on behalf of our listeners. As you think about the future of AI, and again, we can’t predict, so I will put that caveat there, but as you’re thinking about the work that you’re doing, what advice do you have for schools, institutes, universities, even primary schools, with respect to AI that you would offer to our listeners today?
Dr. David Lefevre (40:31):
Yeah. So I think the best piece of advice I can offer is that this is going to be a long journey. I think people are feeling a lot of pressure right now to form a strategy around GPT-4 or to engage with Bard. This is going to be a long journey. This is going to be a 20-year journey. Every few months, there’s going to be another initiative that you absolutely have to do right now.
But I think the key is to have a clear sense of who you are and what you do and what your mission is, and be careful not to leap on every single trend, and just focus on those tools and technologies that are going to impact the things you really care about.
So I think we’re in the middle of perhaps the biggest technology hype cycle that there’s ever been. And there’s pressure on everyone to adopt these technologies right now, but it’s going to be a long journey. So I think really, truly look at them with a cold towel on your head and ask, how does this impact the things that I really care about? Because otherwise, you’ll get swamped in initiatives and confusion. And to close, Cristi, I appreciate what you say. I think this conversation we’ve just had is going to seem so quaint in a couple of months.
Dr. Cristi Ford (41:54):
Yes. Colleagues, you have heard it here first. David, really appreciate the time, the conversation. I’m leaving this conversation inspired and invigorated, and just want to thank you for taking the time to spend some time with us on Teach and Learn.
Dr. David Lefevre (42:12):
Yes, it’s a pleasure. Please invite me back.
Dr. Cristi Ford (42:15):
Sounds good. Take care.
You’ve been listening to Teach and Learn, a podcast for curious educators. This episode was produced by D2L, a global learning innovation company, helping organizations reshape the future of educational work. To learn more about our solutions for both K-20 and corporate institutions, please visit www.d2l.com. You can also find us on LinkedIn, Twitter, and Instagram. And remember to hit that subscribe button so you can stay up to date with all new episodes. Thanks for joining us. And until next time, school’s out.