Reimagining Education for the Future

Video Transcript
Thank you, everybody. It can be very daunting following somebody as brilliant as Martin Bean. I actually find it to be kind of exciting, because I get to spend time with Martin fairly frequently and talk about these trends. He has a way of making you think a little more broadly, a little more differently about the future. And so I cherish the time. If you get a chance, try to spend some time with Martin.

He's a wealth of knowledge. We're gonna talk a little bit about AI. I was here last year, and so I see a lot of familiar faces in the room. But it's so exciting, because we are now two years into the generative AI revolution. On November thirtieth, twenty twenty two, ChatGPT was launched by OpenAI.

And it fundamentally created a new starting point for education across the globe. Whether I'm in Chile or the United States or Europe, I say everybody started at the same starting point. Virtually everybody across the globe. So if you leave with one takeaway from my speech, and I'll give you quite a few actual takeaways that will help you carry this back to your institution, it's that you should feel like you can be one of the leaders across the globe.

You just have to pick up that mantle and move forward with it. So, if you don't know who I am, I'm the VP of Global Academic Strategy. I work for our chief strategy officer, Melissa Lobel, who is brilliant. And I have the luxury of traveling across the globe and speaking to universities. And every time I do, I get smarter based off of the feedback and the conversations I have.

And we host a podcast. I'm actually on the news relatively frequently, I think, here in Manila and across the Philippines, because there's a real interest, a real need to learn more about AI. And I've been incredibly impressed by the Philippines' interest in that across the board. When I moved into this role about two years ago, Melissa Lobel and I sat down and said, okay, there's a lot of trends in education across the globe.

How do we define those in a way that is sustainable and focused? It's very easy to try to boil the ocean and talk about everything. And so we came up with the Impactful Eight. Looking at the next two to five years, these are the eight trends that we think are going to impact education pretty consistently across the globe. And they're a bit of a Venn diagram. They're not fully siloed.

But these are those eight. So, operational efficiency and effectiveness. We've seen a lot of schools looking at the technologies they implemented during COVID and saying, are we using these technologies right? Should we be using them more? I know in the Philippines there was a mandate to return to the in-person classroom, and I think a lot of individuals took that as, we no longer need to use technology. And something I spent some time last year talking about is, no, it's the exact opposite. We need to make sure that we're leveraging that technology.

So if we have a typhoon, if we have a return of COVID, if we have simple disruption. In the US, we're dealing with a student mental health crisis where students are really struggling to go to school. But if the technology is in place, if they're using Canvas in the classroom, those students can not attend class for a week and not fall behind. And so it's a way of evening the playing field and bringing everybody along in an effective way. It's also going to help us prepare for the future, and we'll talk about that in a little bit.

Lifelong learning. Martin's comments on this are fantastic. This is a movement across the globe. That timeline that Martin shared is so accurate. We're no longer following that pathway of go to school for twelve to sixteen years, move into the workforce, retire. Right? It's a much different landscape.

We're working for three to five years in jobs. We're upskilling and reskilling more frequently. That pace of change is going through the roof in a way that we've never seen before. And so we need to make sure that we are preparing our students to develop a lifelong learning mentality.

Right? This idea that they are going to be constant learners is really key. The assessment life cycle, really mapping to those skills as Martin talked about, is key, and it's been heavily disrupted by AI, generative AI. We're going to spend our time deep diving on that a little bit. Then, data driven decision making.

One of the amazing things about the approach to data, if there's a silver lining to COVID and a global pandemic, it's that before twenty twenty, if we talked about data, people were, oh, that's Big Brother. You're looking over our shoulders. They looked at data in a very adversarial way. But post COVID, when we went fully online, we had to leverage data to measure how students were doing in a way that we didn't before. And it fundamentally changed, across the globe, the way that we look at data.

And now the more we use technology, the more robust data we have, the better picture of a student we have. And that's incredible. Mapping, tracking, and demonstrating learning. Right? This is a multi-layered approach. Part of it is the credentials.

How do we show proof of skills? How do we help students understand their pathways along their learning journeys? These are kids that are digitally native. Right? They're playing video games. My son is thirteen; we play video games sometimes. There's a very clear path: I have to do this, and this, and this.

And then I accomplish my goal, and I move on to the next challenge. It's a very clear pathway. And that's one of those things that hasn't always existed in learning. We're not always very clear about how you get from point A to point B. How do you achieve your academic goals? How do we provide more clarity for learners that have come to expect it? Education industry partnerships, we're seeing this more and more.

That skills-based economy that we're talking about. In a lot of instances, as Martin said, they're not waiting. We've seen that with businesses going to universities and saying, hey, I need rocket scientists. Utah State University, not far from where I live in Utah, was actually approached by Lockheed Martin. And they said, we don't have enough rocket scientists.

We need students that have the skills to immediately come out of school and be rocket scientists. So they developed a two-year degree that is directly focused on providing them a funnel of students into well-paying jobs. And we're seeing more and more of that across the globe. Last year we also talked about the rise in internships, the rise in apprenticeships, directly mapping from the school experience into the job experience.

We're going to see more and more of that. And then the future of learning, which is an incredibly broad bucket. We'll talk about that a little bit at the end. It's really focused on understanding the science of learning, getting into the brain, and making sure that we're teaching in ways that create long-term mastery of skills, as opposed to rote memorization of basic hard skills. So we'll dive into that a little bit more.

But we're going to talk about generative AI specifically. Ruth loves to make fun of me for quoting myself in this slide. But I put this slide up last year, and I think it's important that we understand this. I've been in EdTech for twenty years.

Over twenty years. I'm getting old at this point. But what's amazing is this has never been more true than it has been for the last four years. This is the most disruptive and most innovative time in the history of education, and we all happen to be here at this nexus. This pace of change, this fundamental evolution of education.

And it's exciting. If we embrace that, if we dismiss the idea that we've always done it this way and that's how we have to do it, the opportunity for evolving education right now, today, is incredible. And that's adopting technology. That's fundamental to where we are at this junction, and we'll dive into that a little bit more. What has become very clear, and this is data from the end of last year by Tyton Partners, is that we're seeing a very clear distinction between how rapidly students are adopting AI technology and how rapidly teachers are.

And on that traditional adoption curve, those on the left, the early adopters and the early majority, are the faculty. Right? The students are much further down the path to adoption. My thirteen-year-old knows how to use AI like the back of his hand. My nineteen-year-old daughter, who's a sophomore at the University of Utah, is given, I think, much clearer guidance by her educators on how to use it. She's actually more timid in using it because she was a little older on the curve.

But my son, he's not being given a lot of guidance. And so that's one of the things we're going to talk about: really picking up the mantle, taking ownership for improving what we call AI literacy. I'll tell you a little bit more about what AI literacy means. But to me, right now, us taking responsibility for teaching students and educators AI literacy is probably one of the most urgent topics across the globe. And that's not hyperbolic.

That's not, I think, an exaggeration. In the United States right now, as some people know, we're having a presidential election. And we're already seeing that being disrupted by AI. AI robocalls, AI deepfakes, and we'll talk a little bit more about that as well. So much of it is based on fear.

And I love to show this, because originally, I think, I showed the Terminator. Right? We're all scared of killer robots taking over the world. And then I realized that it was actually a series of movies. Right? Nineteen eighty two, you had Blade Runner. Replicants were gonna replace humans, and we had to figure out how to identify replicants.

Right? Nineteen eighty three, WarGames. Joshua, the computer, wants to play global thermonuclear war and almost blows up the world. And then Terminator. Over the course of three years in the nineteen eighties, we were conditioned to be scared of AI. So when AI got here, we actually were scared of it.

We've been told we have to be scared of it. It's going to take over the world. It's going to kill us. It's going to realize we're superfluous, that the world doesn't need humans.

Let's get rid of them. All of that has been built into our heads by pop culture. And so it's no wonder that we have this fear. But getting over that fear is the most important hurdle to moving forward. There are things to be scared of.

I'm not dismissing that. I'm not saying that we don't need to have concerns. But these are the right things to be scared of. Student data privacy. That's always an issue.

In education, that is a fundamental issue across the globe. Our students have a right to their data privacy. We need to protect that. But educator IP as well. More and more, the research that we're doing, the writing and the knowledge that our educators own, we need to protect that too. Deepfakes.

I'll show you something here in a second. But deepfakes are probably my biggest concern, my biggest challenge, and probably one of the biggest drivers of the need for AI literacy. I'll show you a bit more. Then the fake news piece, the bias piece. Bias is a real issue.

And actually, I encountered this the other day. I run our AI advisory council at Instructure, and I wanted an image for a slide. I said, give me a picture of four panda executives standing in a forest clearing, talking about the future of AI. And it gave me four male pandas in suits standing in the forest. And I said, make it male and female executives.

And it sent me a very similar picture, all male. It took me such a long time to tease out the inclusion of a female, because for some reason, in the AI model's mind, executives are male. And we see that across the board. If you say, give me a generated picture of a CEO of a Fortune five hundred company, it will be a white male.

And the reason is there's bias built into the system. One of the biggest challenges in education right now is that if we're referencing biased data sets, we'll get biased returns. We need to make sure we're finding ways to avoid that. Interestingly, UCLA has an AI bot that they have built to check other AI bots for bias. Using AI to police AI, it's an interesting model.

And we're seeing more and more of that. Academic integrity and assessment. This is the piece that I think we're still struggling with a little bit. Right off the bat, we saw bans on AI on campuses across the globe, because it was immediately seen as a cheating tool.

And fundamentally, what it's forced us to do is change the way that we're assessing mastery of knowledge. If we're not changing that assessment model, we're sticking our head in the sand, like Martin said. Our students know how to use these tools. They're not going away. We can't ban them on campus, because students have cell phones.

They have home computers. They have ways around that. We've got to change how we assess knowledge. And writing a twenty-page paper no longer does that, because they're going to use AI. Especially if you're not teaching them how to use it effectively and ethically.

Right? When to use it properly. Explaining the why behind the assignment, so maybe they avoid using it in the first place. We've got to address those assessment concerns. And then one of the issues that we saw emerge after the fact is that many of these tools are very expensive.

They pass a lot of data back and forth. They also, we're now finding, consume a lot of energy. It's driving up electricity consumption across the globe. How do we address that? And that cost also creates a digital divide. We already see now that the most advanced AI models are paid models.

You have to pay for access to them, where the free models are only trained on a couple of billion versus a trillion data points. Right? And it matters. It matters because they're less effective with the data they put out. Now, this is a video that Yana Diaz, who's on our marketing team in Latin America, sent me.

It's a webinar that I did, that she did some work on too. I'm gonna play that. Now, that's my voice. I don't speak Spanish that well. Right? And there's a better, more expensive version of that AI tool that will actually make my lip movement match.

So it looks truly like I am speaking Spanish. Now, if you saw that and you came up to me and started speaking Spanish, thinking I knew how to speak Spanish, you can imagine that would cause some confusion. Right? It's those expectations. We're seeing robocalls. We're seeing deepfake videos.

The power of AI is incredible. But if you don't understand the power of AI, you're less likely to be able to identify that as AI-generated. That's what's interesting. This is data from a global study from the Digital Education Council, showing that students basically don't feel like they have the skills necessary around AI. They know how to use the tools, but they don't feel like they're being formally trained on them.

No one's teaching them the rules. They don't understand: these tools are great, they do this stuff, why is it cheating? Why is it different from using other tools like Grammarly or spellcheck, tools that we take for granted? They don't truly understand why this is a different model.

And so they don't feel like they're being prepared. The other aspect is they understand that this is being used in jobs. Almost half don't think they have the skills necessary to apply AI effectively when they get to the career market. That's pretty alarming. Right? They don't feel like they're ready.

At the same time, they actually expect educators, colleges, universities, high schools to be teaching them. And I think this was a revelation, where I was talking to educators and they were like, well, that's not our job. Then whose job is it? Whose job is it to teach students the ethical use of AI? To teach them how to use these tools? If we're not picking up that job, who is? It's a gap. And we need to make sure that we as institutions are picking up that responsibility, taking it seriously, and formalizing our approach to it. I'm gonna give you some resources here that will help with that.

But first off, we need to take ownership. We need to understand it's our job to understand AI. We've got to be AI literate, and we've got to make sure we're passing it on to students. The other aspect, and this is from a recent study that we did with data from the Philippines: the question is, still thinking about AI-generated tools, how much do you agree or disagree with these statements about your institution? And the statement is, my institution is effectively preparing students.

What's funny is educators are like, yeah, we are. Sixty percent of educators said, yeah, I think we're doing a good job with that. Fifty percent of students said, no, they're not doing a great job with that. We see this across a lot of different fields. There's, like I mentioned, the student mental health crisis in America.

There's a proliferation of tools to support students that are struggling with mental health issues across campuses in the United States. But data shows that educators and universities are putting those tools out there and students aren't finding them. It's not a case of if you build it, they will come. We have to find ways to connect with students to ensure that they're engaging with the resources we provide. And AI is no different from those mental health resources.

But what is AI literacy? This is important, because I think if we don't agree on what AI literacy is, we can't really make sure we're addressing it across the board. So this is a great piece from some researchers in the United States and Europe that pulled this together. And they define AI literacy in four areas. One, know and understand AI. You all can do this today.

You can actually go out to ChatGPT and start playing with the tool and understand how to use it. There was a stat on the Tyton Partners slide earlier that said, basically, educators are much more likely to have a positive view of how AI can be used in the classroom if they've simply used AI. If they understand what it can do at a very basic level. That's powerful. Just simply using it and understanding the capability there goes a long way.

The next aspect is use it and apply it. Right? Use it for real tasks, real human-centered tasks, and understand what it can do and what it's not great at. These are not silver bullets. AI does not do everything incredibly well. It does some things amazingly well, and it does others terribly.

Right? I've tried to save time before by saying, hey, I wrote this article three months ago, I've got to submit it for a thing, rewrite this and add some information about the last month of development. And what came back was terrible. And I was like, I actually have to do the work and rewrite that myself.

Right? Applying it in real-world contexts, again, goes a long way to understanding what it's capable of. The other piece is evaluate and create AI. Evaluating AI tools, essentially. We talk about ChatGPT, but ChatGPT is the Kleenex or the Xerox of our age.

Right? It is an individual product name that's been applied to an entire set of products. There are so many large language models, so many AI solutions out there. You simply Google and you can find a myriad. It's about finding the tools that are right for the job, because they're trained in different ways. They're good at different things.

Some of them really excel at creating images or video. Some of them are great at doing large data analysis. Some of them are great at writing text. It's finding the tool that works best for you and the task you're trying to do. And evaluating those is a skill in and of itself.

Understanding what's available. One of the first webinars on ChatGPT I hosted was in January of twenty twenty three. And there was a surprising comment. We kind of got derailed by people saying, what about librarians? What about librarians? Right? Like, what will they do? And I was like, we don't use the card catalog anymore. We still have librarians.

Right? Librarians are guides. They're important. But they need to embrace this. They need to say, I'm the expert on the AI tools. I'm the best at evaluating AI tools in my university.

If you need to understand what AI tool might be beneficial, go to your librarian. Right? That's what they're good at. They need to own it. That's amazing. But it's an evolution of their role in a way that I think people didn't really get.

And now, two years into this, I think people are starting to understand this evolution of roles, this evolution of responsibilities. Because a lot of the simple tasks that we really hung our hat on, that's my job, I do those things, are simply going away. We're not spending our time doing them anymore, so how do we justify our job? How do we make sure we bring value? Embracing these tools is a big way to do that. And the last one is, I think, the most important: AI ethics. We make a lot of assumptions, based on our age, our own backgrounds, our own responsibilities, about what constitutes ethical use.

Right? I remember I was teaching, and I had a student bring an assignment to me where they had simply cut and pasted photos and text from the internet. And they cited their source, but I was like, you didn't write that down out of National Geographic like I did when I was a kid. That's cheating. It wasn't. Right? That's commonplace now.

But my first reaction was, it's different, it's bad, I'm not gonna give full credit for that. Right? He simply didn't know that my expectation was different than his. So it's very important that as we define AI ethics, we communicate what we believe that to be.

And actually, a couple of schools in the Philippines were among the first in the globe to release AI policies. Honestly, I think the forethought there was incredible, to say, look, this is our policy. But those policies evolve. Right? Just like we were banning AI on campus, nobody bans AI on campus anymore. Now we've moved to how we're using it effectively.

Those policies need to evolve, and we need to understand those ethical use implications. Right? Because if we're not teaching students when it's appropriate to use it and when it's not, we're setting them up for failure, both in their education career and in their jobs. Because I think a lot of employers think students have these skills, that they've been taught these skills. We're making a lot of assumptions that, oh, they already know. They know why ChatGPT is cheating but Grammarly is not. And they simply don't.

I can tell you my thirteen-year-old truly did not know until we had the conversation about it. What I think is incredibly interesting: right now, this week in Paris, UNESCO is having their Digital Learning Week. And frankly, you all are an extension of that meeting. We're having the same conversations they're having in that meeting. And they're providing guidelines.

There's a lot out there. The US Department of Education is providing guidelines. The EU is providing guidelines. There are different groups across the globe that are trying to create frameworks to support you all as you apply AI in education. And what's interesting is the twenty twenty four teacher competency framework came out yesterday, or the day before yesterday. The student competency framework came out while I was sitting at the table and couldn't update my slides.

So I promise I will update it and include it in the final slides that you'll get here. But what's incredibly interesting about their work is, just like Martin was talking about those skills frameworks, if we define the skills that are necessary, if we understand what we need students to know and what we need educators to know, we will be far more effective. And so there's a list of key principles that came out for teachers. This is the teacher version. The first is ensuring inclusive digital futures. Right? We talked about that earlier.

We don't want to create a digital divide between the haves and the have-nots. Incredibly important. A human-centered approach to AI. This is something that we talk about a lot. Solving real, human-driven problems.

Right? I think there's a lot of companies out there that are like, AI is for everything. Of course we can do that with AI. We want to reverse that and say, what are the problems we're trying to solve? Is AI the right tool to solve them? And Ruth's gonna talk about this, I think, this afternoon when she talks through the products. There are some areas where we looked at AI and decided we didn't need AI for that.

The block editor being one of those pieces. Right? It's solving the right problems with the right tools. And AI is not always the right tool. Protecting teachers' rights and privacy. We talked about that earlier.

This is incredibly important, and it's iterative. That's one thing they make very, very clear: this needs to be a moving target. We need to continue to evolve it as these technologies evolve. Because we're really just scratching the surface of what's possible. Promoting trustworthy and environmentally sustainable AI for education.

Again, we're just starting to understand the real-world impacts of that energy consumption, the challenges created by that much data being processed. Ensuring applicability for all teachers and reflecting digital evolution. So again, accessibility, but for educators: making sure that we're giving educators the right tools, so it's not, these tools are available for the private sector and these for public institutions, things like that. And then lifelong professional learning for teachers.

This is super important, because across the board we expect a lot from educators. We expect them to stay up on technology. We expect them to learn new skills. But we don't always budget for that, or budget their time for that. So, giving educators time to pursue these skills.

Paying for professional development courses. These are important. And I love that UNESCO actually outlined these in a way that's very tangible. You'll see these evolve over time. And the student framework is very similar.

And what's interesting is, this came out and Martin and I were talking about it this morning. We were like, I love that they're so aligned with what we've been talking about for the last year and a half. It's a global conversation, and it's so amazing. You are part of that global conversation. There's a whole group in Paris talking about that.

We're here in Manila talking about this. It's all one conversation, and those findings are finding their way together. Simone Ravaioli, who's my counterpart in Europe, is actually at the event in Paris and is sharing some of the findings that we've talked about today. So this is the promise. I promised that I would give you good takeaways on how to apply this at your school.

If you leave with no other takeaway from this, it should be that you are not alone in trying to apply these tools. You have guidance. Across the globe, institutions are stepping forward, taking the lead, and providing great resources. Even OpenAI, the makers of ChatGPT, have resources on teaching with AI, on how to use their tools to teach. It's incredibly useful.

It's very specific to their large language models, but a lot of that will apply to whatever large language models you're looking at. Then MIT. Theirs is very interesting, a little more technical. And MIT early on said, we are not going to provide a lot of boundaries.

We're not going to provide a lot of guidance or guardrails. We're going to err on the side of innovation, and we're going to make it a little bit of a free-for-all. And so they've had some findings on what students can build, and on where they stepped on some landmines, things like that. MIT is very interesting that way. The University of Michigan-Flint, I think, has done one of the more progressive jobs, in the US definitely, of getting out ahead of this.

They've got courses on prompt literacy in academics, taught in Canvas, interestingly. They also have a free course for K-12 educators on how to teach with AI in the classroom. It's actually available; I'll add that link to this slide. I posted it on LinkedIn a couple of weeks ago and had an amazing response, because so many people are looking for that. And for a K-12 educator to jump into Canvas, a platform they know, and be able to take a course on using AI in their classroom is incredibly helpful.

The University of New South Wales has some great examples on AI in teaching and learning. Very visual, very much, how would you apply these in your courses? And the University of Sydney has their AI in education course as well. So these resources are out there. This is a small sampling of schools that I'm familiar with that are making these available to their educators. You don't have to start from scratch.

So much of the work is being done. There's a little university in Lethbridge, Canada, that I was at earlier this year, that was celebrating their tenth year on Canvas. And they had an amazing course they just created for their educators that was all about building a rubric in Canvas using AI. Incredible tools. And so, you know, they're out there.

They're there for your use. One of the things is, you'll hear us talk about a couple of different terms: intentional, safe, and equitable. That's woven through everything we do. This is our visual around where we're gonna use AI within Canvas.

How are we gonna approach that? How do we make sure that we're providing insights, educator efficiency, and student success, and doing it all very intentionally? Right? Again, human-centered problems, solving the right problems, not just walking around with a hammer so that everything looks like a nail. That's a core piece, and we saw a lot of that. I think one of the things that we hear about a lot is, we need new regulation. We need new regulation.

We need new rules. We need to govern this more. And I think the European Union is probably, I would argue, doing a disservice to a lot of their institutions by stifling some of the AI innovation with too much regulation. But what's important is that we have these regulations in place. Right? The National Privacy Commission here in the Philippines has great guidelines on student data privacy, security, and accessibility.

Right? We see the same thing in the GDPR. We see the same thing with FERPA in the US. We have these guidelines. And early on, Zac Pendleton, who's our chief architect, came up with this term: we need to eat our vegetables. Right? We need to make sure that we are aligning our AI initiatives with the resources that are already in place.

The guidelines that are already in place. We don't need new AI guidelines. We need to make sure we're eating our vegetables. So that funny conversation around eating your vegetables led to what we call our nutritional facts card. And this is something we actually rolled out at InstructureCon a couple of months ago.

But it was this idea that when you go to the grocery store, you can look at two boxes of cereal, two different candy bars, and they have a nutritional facts card that helps you understand: is this healthy? Is it not? Is this full of sugar? Is it not? And we said, you know what? We can apply that same metaphor to our AI-powered features. And so we did just that. And we've rolled these out. Let's see if my laser works. It does not.

You can understand what large language model is being used. Right? What is the underlying black box of technology that's being leveraged for this? Where is it hosted? Where is it available? Right? That's the thing: I think globally there are a lot of companies that develop for North America or ANZ and don't think about the larger global context. So where is it hosted? Where is it available? What data is ingested? What data is consumed? Is it being trained on student data? The more transparency we can provide for that, the more trust we build. The more we help educators get over that fear, that instilled fear of the robots. Right? And so these are available for every single one of our AI-based features, but also for all of our partner tools that are being made available in our AI marketplace.

Every one of them now has a nutritional facts card. And that's one of those things. As we talked about, one of those skills is the evaluation of AI tools and understanding what's good for what. That's exactly what these are for. To make it easier for you all to look at these very quickly and understand what's good, what's acceptable, what meets your guidelines, and what's not.

Early on at Instructure, we said, look, we're going to be very measured with our approach to AI. We're not going to run out willy-nilly. We saw other vendors try to be first to market for the sake of being first to market and really run into issues. They actually opened themselves up, opened their universities up, to liability because they were throwing out false positives around academic integrity. I'm not going to name the company.

So we decided we're going to be a little more measured in that approach. And what we decided is we're going to create a three-pronged approach to provide choice. Right? The first is we're going to look at developing AI-based features that make the most sense for most of our customers, and that we can implement without driving the cost of the product up. That's key. We have competitors that are putting out a lot of features without a real understanding of how that's going to impact the cost of their products.

They're either hoping nobody will use them, or they're developing hidden costs that eventually the institution will be hit with. Right? So that's incredibly important. And you'll see some of the features that we're going to show later, both AI and non-AI. But it's a very intentional approach. And we have not rolled out all the features we've developed, because some of them are frankly far too expensive.

They don't make sense. But for the schools that want to drive more innovation, that want to go beyond that, we're going to extend our LTIs, the, for lack of a better term, plumbing, the frameworks of our LTIs. We're going to open up new APIs. So that if you're going to plug in your own large language model, or third-party tools like Microsoft Copilot, they'll work within Canvas. You can actually extend Microsoft Copilot directly into Canvas.

We demonstrated that at InstructureCon as well. And then for schools that really want to get innovative, that want to set up their own large language models, we're going to enable that too. Those same pathways. And we've seen schools like the University of Central Florida build their own search models. Right? Northwestern University build their own student assistant.

Right? And so however you want to evolve, however you want to approach AI, we want to support that across the board. Again, I'm not gonna steal Ruth's thunder. She's gonna dive into this. But we got a great response at InstructureCon around the tools that we've rolled out already. I think there's some pretty exciting AI solutions in there.

And frankly, you all have a hand in what we build, what we develop. And so we wanna make sure that we continue to get that feedback from this customer group as you have ideas. That's it for AI. But one of the things I want to actually talk about, and I mentioned a little bit that AI is not always the answer, is that one of the things we're very focused on right now is understanding the science of learning.

Understanding how we create those connections with students, how we make sure that we are not using technology for technology's sake. Right? In the US, I mentioned my son and daughter both use Canvas. One in college, one in junior high school. And when we have snow days, you know, Salt Lake City is in the mountains. We get a snow day every once in a while, and you would get the day off from school.

Go out and have snowball fights, go sledding. We don't have that anymore. We have distance learning days. And my children blame me personally for that, which is fun. But what's interesting is we've removed that disruption in education.

We've removed that barrier, and we don't have to have those pauses anymore. In the same way, we have this technology. How do we deepen the connections? How do we make sure, as we're creating online courses, as we're creating hybrid courses, that we're engaging learners' brains in a way that truly moves the needle on education? The other aspect, too, is pedagogy is key. Technology for technology's sake does not solve the problem.

The core is pedagogy, understanding how you use technology in the process. So as we're affecting students' brains in ways that make sure they're absorbing these skills and mastering these skills, how do we make sure that the technology is supporting that in ways that are productive, that extend that? Right? And, as I mentioned, we're just scratching the surface. I think Martin mentioned personalized learning. That's one of those areas where personalized learning is incredibly hard to scale across large student groups. Competency-based education, incredibly hard to scale across large student groups.

But with AI, we're gonna start achieving those. What does that mean when a student no longer has to be held behind by the grade that they're in, because that's what you learn in that grade, but can simply choose their own path and follow that on their own? Right? How do we make sure that we support the teachers? Because I want to make something very, very clear. The magic of education comes from the connection between a teacher and a student, and a student and their peers. Technology supports that.

That's all it does. Without those connections, students don't learn. And I think you have people fearing a future where robot teachers will come in and replace us all. Right? That we'll go back to the Socratic method and it'll be a one-to-one piece. Maybe in the future.

Maybe long term in the future. But for now, students need guides. Students need connections. Students need the spark of innovation, and only educators provide that. AI cannot do that.

So we need to make sure we're continuing to focus on that. This is not the only technology disrupting education either. We talk a lot about AI. Every conference you go to overcorrects on AI. We talk about it too much.

But there are some incredibly compelling technologies. Augmented reality: I think the capability for augmented reality, especially as you move into the professional learning space, the technical learning, taking apart machinery, teaching skills that require that, makes those things easier. We even see that in the nursing and medical fields, where it's so much easier to work on an artificial cadaver, take it apart, see the things, and simulate different types of illnesses than it is to work with, you know, real bodies. Virtual reality. Again, we're just scratching the surface with virtual reality.

I'm not totally sold on VR yet. Martin remembers Second Life. Right? And the promise of Second Life in VR that way. I don't think we're quite there. But adaptive learning algorithms.

Right? Not necessarily AI, but, you know, getting smarter with how we serve up learning for students. Microlearning. This is something that we take for granted. My son has an encyclopedic knowledge of every basketball player in the NBA and some of the European leagues. Apparently, he has to learn the Philippine leagues now too.

I've heard basketball is incredibly popular here. So I'm gonna put him on that. But he learns it all through YouTube. I was like, how do you know all this? YouTube. YouTube, which we've looked at as an entertainment tool, a distraction.

These are microlearning tools, and they're gaining knowledge in ways that we wouldn't. They use their own tools. We all have bigger TVs; they want to watch on a little screen. They're approaching the world, and the way they use digital technology, differently.

And we need to make sure we're adapting for that. Microlearning, I think, is an incredibly interesting field. Obviously, AI. Social learning. Right? How do we make sure that, again, when we went through COVID, we had students whose social networks were broken.

How do we make sure that we help them understand the value of connection? Come back to those connections in a way that we often blame technology for breaking up. And then learning games and simulations. This is another area. I mentioned kids play video games all the time. They have progress meters. They have ranks, and they move into different leagues based on their rank.

Right? They're used to those models. How do we leverage some of those models? We talked about gamification years ago, and I think that fundamentally we're in a different position with technology to approach that in ways we haven't before. And so the frontier is amazing. And I don't want to scare anybody.

I'm truly excited. I'm truly optimistic for the future. But this isn't the end of disruption. The pace of change is only gonna speed up. You all have the power in your hands to be an expert.

You could all be standing up here talking about AI in the same way, if you get passionate about it, if you learn about it. There's no reason anyone on the globe can't; we all started in the same space. We all can master this. And so that's how we make our students smart, that's how we make our educators smart, and that's really how we prepare our students across the globe for the future.

So I thank you so much for your time. I'm hoping to connect with more of you throughout the day, and I very much appreciate being able to speak to you today. And thank you for having me. Oh, I didn't plug the podcast. Melissa Lobel and I do a podcast where we have people like Martin, people much smarter than us, come on the show.

And we actually talk about many of these same issues, the Impactful Eight. So, if you get a chance, go visit Educast, that's the instructure cast dot com, and listen to our podcast. And you'll get a lot of insights from different groups on everything from AI to credentials to lifelong learning. So thank you, everybody.