Panel Discussion: AI for Good
Hosted by Martin Bean, CEO, The Bean Centre
Hello, everybody. Lovely to see all of you. It's an absolute pleasure to be back here at Canvas Con in Barcelona. I've had the great pleasure of doing the world tour this year. We started in Las Vegas, then headed to my hometown of Melbourne, Australia.
We were in Manila just a few weeks ago, and now we're here in the wonderful city of Barcelona. As Anne Marie mentioned earlier today, and other speakers have referenced, they all got only a few minutes into their presentations before they talked about artificial intelligence or generative AI. We have the great pleasure, my colleagues and I, of spending the entire panel on that topic, and it's really about using AI as a force for good. There's absolutely no doubt, if you're dialed in, that this is the greatest shift of technology in our lifetimes, certainly as it relates to education, and I turn sixty in just a few weeks' time, so I go all the way back to the first personal computer being applied in education.
There is much that could go wrong, but there's no time to wait. As Anne Marie mentioned this morning, the pace of change driven by generative AI doesn't lend itself to those historic evaluation cycles that we have loved in education, where there would be analysis, there would be pilots, there might be multiple pilots, then there would be debate, then business cases, then approvals, then implementation, then review, and then it begins again. Not this time. This time, to steal, as Anne Marie did, from agile methodology, there is only one way forward, and that is to have the courage, in safe ways, in good ways, to get started, to find the use cases, to experiment, to learn, to fail, to try again, to apply, because there's absolutely no doubt that for our graduates to have the knowledge, skills and attributes they need, no matter what age they leave our institutions, this is going to be a very big part of their lives moving forward. And that's what we're going to talk about today.
This panel has been structured around three wonderful individuals from three wonderful institutions doing some pretty courageous experimental work with the application of generative AI. They're gonna share their case studies. They're gonna share their stories. They're gonna share what's worked and what hasn't. And we hope that will both inspire and challenge everybody in the room to go back and reflect: what are you doing in the area of AI? Are you moving fast enough? Do you have the right guardrails in place? And are you focused on the right things? So that's the goal.
So we're gonna dive in. I'm going to let Sarah, Leon and Jacqueline introduce themselves to you as we move through the first questions. And we won't be taking questions from the audience, but we will all be available for the rest of the afternoon if you wanna come and find us over coffee and snacks, to pick our brains or to find out anything more. So sit back, relax.
You're in for a treat, and away we go. So, you ready, panel? Alright. Good. To kick things off, as I mentioned, I want you to briefly introduce yourselves, at a headline level to begin with, and give us a little bit of a sense of how you're ensuring AI is being used as a force for good for your institutions.
So, Sarah, do you mind going first? Sure. Thank you, Martin. Well, hello, everybody.
First of all, I wanted to thank Instructure for letting me be part of this panel of amazing experts. I'm sure we're all going to enjoy it. My name is Sara Garcia. I'm a graphic designer, and I have recently specialized in the use of artificial intelligence applied to design and creativity. I am now working for ISTE, which is a digital business school based in Madrid, Barcelona, and Mexico.
And addressing your question, Martin, how are we applying artificial intelligence? I need to go back in time a little bit, to fifteen years ago, when ISTE was founded. Back then, there was a huge technological revolution going on: the eruption of the internet, the digitalization of business and jobs and, at the end, life itself. So ISTE was born with a mission to help students and partners master this new technology so they would be able to adapt and survive in this new reality. That rings a bell, right? It's very similar to what we're looking at right now, because artificial intelligence is indeed a technological revolution, so of course we need to step up and help our students and our partners master these skills so they can once again adapt to the new reality.
But since this is an environment of continuous change, what we need first is to learn ourselves. So the main goal right now in our institution is to first learn ourselves about AI and how it can be rightfully applied, and then help our staff and our teachers and our students. I don't know if it's something with the microphone. Keep going. Okay.
To adapt to this reality that we're living in and to see, and that's the most important key, to see AI not as an end, but as a means. Wonderful. Now, you mentioned as part of that it sort of has to start with us. Sure. We have to understand it first.
What's the one thing you've changed in your life to better understand how to use generative AI? Thank you. I'm double-miked now. You are. Well, good redundancy. What have you done differently to understand it? Well, I am trying to integrate the use of AI into my workflow.
And by that I don't mean that you should use AI for everything, because, in fact, you shouldn't. Please don't. But when you integrate artificial intelligence as part of your own workflow and you start seeing it as an assistant rather than as a tool, your way of working changes. And maybe I can elaborate on that further on. Perfect.
But yeah, at the end of the day, the key is that I am trying to use it not as an extra, but as part of it. Very good. Thank you.
Okay, Leon, your turn. Introduction and a headline. So my name is Leon von Boekhoofst. I work at Fondis ISD in the Netherlands. We deal with AI from a student perspective, our own perspective, and a societal perspective, because we always do our learning within the context of society.
And Sarah, you are absolutely right: it's not a tool, it's actually something you need to relate to, and we need to learn those new skills. That's what I'm doing quite a lot of research on at this point, by doing continuous improvement, building little proofs of concept, and seeing what works and what doesn't. And a lot of it works, so we are actually very excited about that. How do you have the courage to stop the stuff that doesn't work? We're not typically very good at that in education. How do you stop the things that aren't going so well? Yeah.
You need to stay ahead of that curve, right? Within AI, it's even harder to stop, because it can always get better, or you can think it will get better. So you need a very strong focus on what you want to get at. Mhmm. And in my case, that's empowering students, but also teachers, to learn from those tools instead of only using them to make our work easier, to make us better as professionals.
And that's a very interesting field of study. Yeah. I think that's a really great answer too. You know, one of my favorite quotes is from Lewis Carroll: if you don't know where you're going, every road will get you there. So intentionality, being clear about what it is you want to achieve and measuring against that, actually helps you make that decision.
Do we continue or do we pull back? Thank you. Alright, Jacqueline, your turn. Alright. I'm Jacqueline Gasserbeck from the University of St. Gallen in Switzerland.
I'm, as you can see, not an old white-bearded person, and I'm not even a techie. I'm not a math mastermind at all. I'm a lawyer. So why am I on this panel? Just because I'm a very, very curious person. I love tech, I love to play around, I love to experiment, and I think that helps a lot when you have to deal with AI. So they gave me this little project on AI in teaching and learning.
But I think you were asking for a headline, so the headline for me would probably be what they are calling me: I'm the AI cheerleader at my university. That's also something you need to do. You need to get people, students, faculty engaged to talk to each other, to experiment, to find out what's good for them and what's not, when not to use it and when to use it, how to fail and stand up again and try again. So I would say that's my job.
Oh, that's wonderful, Jacqueline. I've run two very large universities, and I always went looking for my cheerleaders, the people I call the enlightened willing. They got out ahead, and then there were the fence-sitters who would watch to see what happened, and that's typically how you move them along. So good for you. And actually, I don't think it's strange at all that you're on this panel as a lawyer, because it's actually the legal profession that's been disrupted fundamentally by generative AI.
I'm one of the first to be fundamentally disrupted. So I'm not surprised at all, and you're very, very welcome to be here. Alright. Thank you for your headlines and your introductions. We're gonna let you go a little bit deeper now on the projects that you're so proud of.
Can you bring to life for us a project that you're working on for your institution that you are really proud of, and really help us understand why? Why have you chosen this project to talk to all of these wonderful people about today? What is it that gets you really excited about it? And Leon, I'm gonna go to you first this time. Well, I think empowering is very important. We often tend to give our students exactly what we think they need, and I think it's more useful to them to learn how they can build the tools themselves to do whatever they want with them. And that's a tool I built: a double feedback loop tool where the student actually needs to take ownership of the feedback they get from their teachers.
The tool helps the students figure out whether they really understood what the teacher said. But the other side is also important. Professionalization and lifelong learning also mean that we as teachers become students again, and we need to learn from our work as well. We are experts, of course, but we can always learn. So if I have a feedback conversation with a student, the student has some advantage from using AI, but so do I.
I get feedback on my feedback. Mhmm. And I have some blind spots, I can tell you that. So there's actually a double loop going on there, and you can go way further with this idea, right? It's always interaction and learning from that interaction. And you can use generative AI especially well to do that.
So I'm very proud of that part of the experiments we do all the time nowadays. No. Thank you. So help me understand it. Say I'm one of your students.
How will I interact... You're welcome. Oh, I'd love to be, actually. How would I interact with, I'll call it a tool, but whatever you call it, how would it be exposed to me? How would I use it? So we get a recording of our feedback conversation. Yes. There will be some transcription.
And the tool is designed so that the student goes to Canvas, where we have a tool from the beautiful people of Dream. It's called Feedbills. The students write down what they got from the feedback session. The tool tries to figure out whether they understood everything, and it can also give me a sign: this student forgot about this in the past three feedback sessions, for instance, so I can notice that.
And I get a report on my way of giving feedback, where I should maybe improve and what I did correctly, over a longer context. So with more feedback sessions, it gets richer and more in-depth. And I'm just thrilled that you chose feedback, because in all my time as an educator, when I've talked to our students about what they value the most and what they would like more of, feedback comes up in the top three every time. Have any of your teachers been offended by the feedback they've got back about their feedback? No. Not at all.
Not at all. It's wonderful to see your blind spots, right? They are blind spots, so you don't see them. And you always think it's one on one, right? So the student won't give you that feedback most of the time, but the tool can.
Yeah. And the tool itself knows nothing, but it really knows our educational vision, which is also in there. And the meaningful conversation is part of Open Learning. I think it's wonderful. And by the way, I can speak from experience: not everybody enjoys feedback.
I have three daughters, and when I give them feedback, it's not always welcomed. Trust me. Okay. Thank you very much, Leon.
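To make Leon's double feedback loop a little more concrete, here is a minimal sketch of how such a tool could be wired up. It assumes a generic llm(prompt) -> str text-generation function standing in for whatever model is actually used, and the class and function names (FeedbackSession, check_student_understanding, review_teacher_feedback) are illustrative, not the actual Canvas tool he describes.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FeedbackSession:
    transcript: str       # transcription of the teacher-student feedback conversation
    student_summary: str  # what the student wrote down afterwards in the LMS


def check_student_understanding(session: FeedbackSession,
                                llm: Callable[[str], str]) -> str:
    """First loop: did the student actually capture what was said?"""
    prompt = (
        "Compare this feedback conversation with the student's own summary. "
        "List any points from the conversation that the summary misses or distorts.\n\n"
        f"Conversation transcript:\n{session.transcript}\n\n"
        f"Student summary:\n{session.student_summary}"
    )
    return llm(prompt)


def review_teacher_feedback(sessions: List[FeedbackSession],
                            llm: Callable[[str], str]) -> str:
    """Second loop: feedback on the feedback, aggregated over several sessions."""
    joined = "\n\n---\n\n".join(s.transcript for s in sessions)
    prompt = (
        "You are reviewing how a teacher gives feedback across several sessions. "
        "Point out recurring strengths and possible blind spots, citing examples.\n\n"
        f"Transcripts:\n{joined}"
    )
    return llm(prompt)


if __name__ == "__main__":
    # Stub model so the sketch runs without any external service.
    echo_llm = lambda prompt: f"[model response to a {len(prompt)}-character prompt]"
    session = FeedbackSession(
        transcript="Teacher: structure your argument before you start writing...",
        student_summary="I should write more.",
    )
    print(check_student_understanding(session, echo_llm))
    print(review_teacher_feedback([session], echo_llm))
```

The first function closes the loop for the student (did the summary match the conversation?), and the second closes it for the teacher (recurring patterns and blind spots across sessions).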
Jacqueline, what's the project you're really proud of? Yeah. When we prepared for the talk, I had this one project in mind, but now I have two in mind, so I will talk about both. Inspirational. So the one project I'm actually all excited about right now is our law prompt-a-thon that is going to happen next week. It's like a hackathon, but with prompts.
We bring together students, faculty and practitioners like lawyers, and they have to solve law challenges, one from the court, one from practice. It's all meant to be a big happening where they can experiment, talk to each other, find the best model, find the best prompt, and then compete against each other. The other one, which was initially in my mind, is very practical, and it works very well with Canvas. We did a little workshop with faculty on how they could make their own Gen-AI-generated rubrics that they could easily implement in Canvas, and that was also very well received by faculty because it was helping them in their daily work. And that makes it easier for me, if I can sell something that is really useful for them.
It's not just prompt engineering. Nobody wants to talk about prompt engineering anymore. You need a purpose, you need a use case, and you need to make it fun in the best case. Those are two great examples that you've shared. On the prompting, what I'm finding more and more is the value of giving the gen AI the context before you even give the prompt.
Exactly. So one of my favorite techniques now is to ask it to slow down and take its time, because this is really important to me and I want its best effort. And according to the literature that I've reviewed, if you do that, you can up the quality of what comes back by at least fifteen percent. So I always take the time to give it as much context as I can about what it is that I'm working on. And one of my favorites now is, when I start off, I say, hi, it's Martin.
You remember who I am, don't you? And it comes back with an ever richer summary of me each time I do it. And of course, I'm doing that because I want to anchor it immediately in all that it knows about me. That's really super. And I agree with you. Prompt engineering had a shelf life of about five minutes.
There are a lot of disappointed people updating their LinkedIn profiles right now, because it really didn't last very long. So thank you for those examples. And your second one touches on something I think we are all very aware of in this audience: there's a common characteristic of teachers, K through twelve, primary, secondary, tertiary, everywhere in the world, and that is they are time-poor. I don't think I've ever met a teacher that didn't want to do the best they could for their students, but I've met many teachers who tell me they'd love to do more, but they just don't have any time. So what you talked about with the rubrics and being able to automate that, I think, is a wonderful example.
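As an illustration of the Gen-AI-generated rubrics Jacqueline mentions, and of Martin's point about giving the model context before the prompt, here is a minimal sketch. It assumes a generic chat(messages) -> str function for whatever model is in use; the course details and JSON shape are placeholders, and pushing the result into Canvas is deliberately left out.

```python
import json
from typing import Callable, Dict, List


def draft_rubric(chat: Callable[[List[Dict[str, str]]], str],
                 course_context: str, assignment_brief: str) -> list:
    """Ask the model for rubric criteria as JSON, giving it context before the task."""
    messages = [
        # Context first: who we are, what the course is, how the rubric will be used.
        {"role": "system", "content": (
            "You are helping a university lecturer draft an assessment rubric. "
            "Course context: " + course_context
        )},
        # Then the actual request, with an explicit output format.
        {"role": "user", "content": (
            "Draft four to six rubric criteria for this assignment. Return JSON: "
            "a list of objects with 'criterion', 'description' and 'points'.\n\n"
            + assignment_brief
        )},
    ]
    return json.loads(chat(messages))


if __name__ == "__main__":
    # Stub model so the sketch runs without any external service.
    fake_chat = lambda messages: json.dumps([{
        "criterion": "Legal reasoning",
        "description": "Applies the relevant rules to the facts of the case.",
        "points": 10,
    }])
    rubric = draft_rubric(
        fake_chat,
        course_context="First-year contract law, essay-based assessment.",
        assignment_brief="A 2,000-word case analysis of a supplier dispute.",
    )
    for row in rubric:
        print(f"{row['criterion']} ({row['points']} pts): {row['description']}")
```

The design point is simply that the contextual framing comes before the task itself, and that asking for a structured output makes the result easy to carry into whatever rubric workflow an institution already has.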
And if you look at most of the early use cases from Instructure with Canvas, you'll notice a lot of them are designed to help save people time and unlock that potential in other areas. So thank you. Alright, Sarah. What's your favorite? Okay. In my case, let me take your question and turn it around a little bit, because rather than talking about a project that I have done and that I am proud of, I would like to talk about something that I am willing to work on and that we are currently working on at ISTE. It's very related to what Jacqueline and Leon said and also to what you have pointed out right now.
This saving of time and getting feedback, these concrete things that we are getting with AI, are what I call integrating it into your workflow. So when the goal is to educate people to know how and when to use AI, and that's our goal, I'm not talking about knowing one hundred AI tools that can be used for this and that, but about really working with it in every aspect of your work that can be improved. And when you do that, you receive two wonderful things. The first one is time, as you were saying. Yeah.
You save time, of course, because something that may have taken you hours to complete, you can now have with a few clicks. And the second one is that you discover a new way of working. So on one hand, when you are saving time as an educator, as an instructional designer, or whatever your job position is, the real question is: what are you going to do with the time that you're saving? And related to what you were saying before, I think one right answer would be to innovate, to take the time that you're saving and effectively improve all the things that you always wanted to improve about your classroom or your institution or your job, but couldn't because of bureaucracy and paperwork and that kind of thing. You can take this time and invest it in your vocation, because this profession is vocational. This, for me, is something that maybe was not impossible before, but it was truly harder than it is now. And that's amazing.
And the other thing is, let me give you an example: if you're using AI to do brainstorming, you are having a conversation first with yourself, because, as you said, you need to write down the context and you need to structure your thoughts in order to get what you're looking for. And then you are having a conversation with a machine that has access to millions and millions of data points that you by yourself couldn't reach. So your train of thought, by iterating and being patient with the machine, takes you to new places.
And the combination of these two things, the time that you're saving and the new ideas that you get with the AI, gives you the opportunity to work on what you love most about your job. And a key concept that I would like to highlight, because I think it's very, very important, is that in my experience, and in the experience of those around me, when you work with AI in your own workflow and at your own pace, the best ideas are not the ones that were given to you by the AI, but the ones that were inspired by the AI and then developed by human experience and human creativity. That's why I like to say that AI sparks creativity, but, in fact, it's human work to take this spark and make the fire. Yeah.
I think that's a wonderful point. In fact, in some of my speaking I talk about how generative AI sort of drives us all back to mediocrity: it takes the laggards and makes them better, and takes the supposed experts and makes them not quite as expert as before. What we're actually left with as our advantage is our enduring human capabilities, those things that are uniquely ours that we then get to bring to the forefront. And I think what you've described is a wonderful way of bringing that to life. Two of my favourite questions I like to ask: one is, what haven't I asked you in this conversation that I should have? That one always comes back with things that get me thinking in a different way.
The other is: is there another way I could have approached this exercise instead of the way that I've approached it? In both cases, what I typically get is that my brain gets triggered to think in different ways. The machine hasn't necessarily told me how to be innovative and think differently, but it's sort of, as you said, flipped a switch for me to go, oh, I actually didn't think about it that way. So I think that's really super. Jacqueline, here's a topic that I'm sure is on all of our minds, because we're an education audience: there's clearly a lot of responsibility in using generative AI in contexts like education and health, and elsewhere. It's like that great Spider-Man quote: with great power comes great responsibility.
What are you doing to ensure that AI is used ethically and responsibly in your institution? Yeah. What I usually try to explain to students or faculty is that, for me, AI is a little bit like a Swiss army knife. You can use it, as a mom, to carve beautiful animals out of carrots, or you can fix your bike on the go, or you can use it to cut a seal, like a safety seal, or even to hurt someone. So it's not the Swiss army knife that is good or bad, it's what we do with it. And it's the same with AI.
We have to learn how to use it responsibly, just like we teach small children to use a knife. There is nothing different, and we have all these dark and light sides inside of us, and every society has that, and every person in power has that, and we try to find our way. That's what I try to tell our teachers. There's no help in just saying this is a bad thing. No, it's not. The bad behavior makes it a bad thing, not the thing itself.
I think that's a very important starting point, and then of course we have all kinds of difficulties with the tool. Of course we have to talk about academic integrity, so we have to talk about the responsible use of the tool. I would still call it a tool, although it's more like a companion, as you said, but nevertheless it's a question of talking about it, knowing a lot about it, and definitely not trying to shy away from it, because that happens as well. And did you end up developing a policy suite or a set of guidelines? Yeah, we had to. How long did that take? We reacted super early.
We had the first guideline in place very early. Gen AI emerged in November, and I would say by the January afterwards we had the first policy in place, where we said students have to declare their use of AI. And then it was kind of unclear what declare meant. So three months later we had a new policy that said you have to submit all the prompts that you used, and the students said, come on. That's a good one.
That's brutal. It didn't work at all. So we took a step back, and now we're more or less finding our way with how to cite AI, although it's technically not a citation because it's not an author. But there is still so much evolving, and we just try to stay agile and flexible and react to what's popping up. Yep.
And again, it's a wonderful example of how it's challenging the way we've typically tried to regulate inside institutions, because it's changing so quickly. We've got to be prepared to be agile in our policies and practices as well. Sarah, your turn next. Ethical and responsible usage: what are you doing to make sure it's used properly? Well, in our case, it's a little bit special, because AI is part of the curriculum at ISTE.
But when I personally think about the ethical use of AI in the context of a classroom, I automatically think of students using AI to directly cheat in exams or assignments, and some big questions assault me. Like, if we consider AI as cheating on this assignment, for example, what are we really trying to teach the students with these assignments? What was the purpose of these assignments? And if the answer is yes, I do consider that if you are using AI to complete this assignment, you are cheating, then maybe what we need to do is change the question and redesign the evaluation parameters so that the use of AI is part of the learning process. Because if we don't, I think we're going to hit a wall repeatedly.
I think we need to go one step further and assume that our students are using AI, because, in fact, spoiler: they do. And we need to do something about it. And I do think the responsibility is on us, as educators and as part of the educational institution, to redesign. For example, I don't know what you think, but take the most difficult case, where the purpose of the assignment was that the student would memorize a bunch of data.
And that's tricky, because no one can beat AI on data. But maybe we can redesign this so that we are also promoting research and critical thinking and innovation, so that at the end we have a more appealing way for the student to assimilate this information. So I don't know, maybe it's a little radical in some cases, but I do think this is such a powerful revolution, and this technology is going to change so many things about how we work and how we learn, that we need to work with it instead of against it. No, I think that's a fantastic answer, and I remember the same debates going on when the browser first went mainstream.
You mean they're not gonna have to walk through stacks of books to find the answers? No. But that'll destroy learning as we know it. Well, we know how that movie played out. Leon, ethics and responsibility. Well, first of all, we were also, just like you, very early with our guidelines.
Right? But after that, I noticed something different. We have a school of about forty thousand students and a lot of different institutes. And the guidelines from the different institutes always tell you: you can use generative AI, and our superpowers are not affected by it. So it's a bit like they don't want to see that it actually has an impact on their institute as well. We at Open Learning created an AI act, more of a pamphlet, which says you can use whatever tooling you want to prove that you are competent on these learning outcomes, but we will assess you through meaningful conversations.
Yeah. Because I want to know what you know, and I don't want a product that says you know something when you can just generate it. Generation is fine, but I want to talk to you to know for sure, and that works pretty well. I also work on ethical AI, so I try, with my students, to design stuff that is not that ethical, just to, how do you say it, get other researchers to think about the ethical side because they see it.
So it's a little bit of a punk situation going on there. But it's a very important part: we think very, very long and hard about data and privacy, and that data is not for the school; it's for the student and it's for the teacher. It's a long way to go, I'm sure. I'm so glad you mentioned that, though, around the agency of the student. You know, I'm just a firm believer that in anything we do that involves personal data, we have full transparency and informed consent, and we never lose sight of the fact that it's their data, not ours. And in what we do with it, we need to be respectful, ethical, and transparent in the way we go about doing that.
Interestingly, though, when you look at the history, certainly in the space I've spent more time in, higher education, when you've been upfront with students about what you want to do and why and have given them informed consent, well over ninety percent normally say absolutely, because it makes sense to them and they like being treated as an actor with agency in the process. I also love the fact that all of you have talked heavily about how we can't hold it back just because it's convenient for us to assess the way we've assessed historically. We have to be prepared to assess in different ways, because that's the way the world is going to judge them as graduates. So thank you for bringing that out. We've had a couple of sessions today on assessment, and I think generative AI creates incredible opportunities to assess in more authentic and meaningful ways, but it certainly is going to be used by some as a reason not to progress, because they will use fear as a way of holding back innovation.
So I think that's terrific. Now, we have four minutes and eight seconds left, and I have my favorite question that I like to ask at the end of every panel. If you had a magic wand and could change one thing about how generative AI is being rolled out by technology vendors like Instructure and others exhibiting here today, what would it be? Magic wand, one thing, advice to a lot of our colleagues here today on the vendor side. Sarah, you get to go first. Leon, you get to go second.
And Jacqueline, you get to have the last say, which lawyers always love to have. Sarah. Okay. I think the first answer is clear: it's data privacy, and how we are going to be informed about this. Mhmm.
How can we be sure where our data goes? I have seen some sessions earlier that were talking about that, and I found it very interesting. Also, taking into account our European legislation, with the GDPR, it's even trickier. And taking into account that these large models, on which the majority of the vendor tools are based, are being developed by huge external companies, we really need clarification on that. And I think the three of us may agree on this point.
Yeah. That's wonderful. So, the absolute understanding of where our data is taken once we release it into these models. Alright. Well, magic wand time.
Well, we talked about a lot of stuff already, but I'll bring back empowering people to use those systems and actually get to the most creative part of themselves; that's one of the things I wish for everybody. And it also needs to be private, and it needs to be that my data is my data, as we talked about before. So, pushing on the creativity just a little bit, Leon: are you saying that the way it's exposed to the student or the teacher should be done in a way that allows them to really maximize their own value from it? So it's as natural as it can be for them, based on who they are.
That's what personalized learning should be, right? Absolutely. And I think those tools are just waiting for us to use them to give more personalized options. Thank you. Sorry to push you a little bit.
Last word, Jacqueline. Yeah, my magic wand. It came to my mind this morning. The one attribute that was right about me is old, so I can remember KITT, you know, the car from Knight Rider.
So what I really want, if I'm looking at the Canvas people, is that I could just talk to my LMS and it would know immediately what I want from it and what it needs to do. So maybe in the not-so-far future we'll have this. I mean, I already love working with the ChatGPT app, when I just take pictures and talk to it. So here we go. That's my magic wand.
Yeah. I think they're all wonderful, by the way. And that natural human interface is without doubt the next frontier. I joke that my grandchildren, if I have them one day, will laugh at that. They'll look at it and say, you actually had to touch it and move your finger to get it to do something?
And they'll be staring at a keyboard like Anne Marie was staring at a VHS cassette: what the hell is this thing that I'm looking at, and why would I ever dream of using it? For us, it feels a little far-fetched now, but I guarantee you, fast-forward five or six years, and those ways of interfacing with technology will feel like something we should have let go of a very long time ago. Can I just thank the three of you for your preparation for today, for the way that you are courageously going out and experimenting, but doing it with the guardrails and awareness to know when it shouldn't be used as best we know, when we should carry on and keep going, and when we should pull back because it's not turning out the way that we want. But more than anything, what you've demonstrated to me, and I'm sure to the audience, is that everything you're doing is driven by a genuine concern to maximize the outcomes for our students and our people. And I think, as role models, you've been inspiring for what we all need to do, which is to go out there, get started, do it the right way, and be courageous, because that's the world that we now live in.
We're in Manila just a few weeks ago running it here, and now we're here in the wonderful city of of Barcelona. And as, Anne Marie mentioned earlier today and other speakers have referenced, they all get it got to go a few minutes into their presentation before they they talked about artificial intelligence or generative AI. We have the great pleasure, my colleagues and I, of actually spending the entire panel on that topic. And it's it's really about using AI as a force for good. There's absolutely no doubt that if you're dialed in to what I believe is the greatest shift of technology in our lifetimes, certainly as it relates to education, and I turn sixty in just a few weeks' time, so I go all the way back to the first personal computer being applied in education.
There is much to be said that could go wrong, but there's no time to wait. As Anne Marie mentioned this morning, the pace of change driven by generative AI doesn't lend itself to those historic evaluation cycles that we have loved in education, where there would be analysis, there would be pilots, there may be multiple pilots, then there would be debate, then there would be business cases, then there would be approvals, then we there would be implementation, then review, and then it begins again. Not this time. This time to steal, as Anne Marie did, from agile methodology, this time there is only one way forward and that is to have the courage in safe ways, in good ways, to be getting to get started, to find the use cases, to experiment, to learn, to fail, to try again, to apply because there's absolutely no doubt that for our graduates to have the knowledge, skills and attributes that they need, no matter what age they leave our institution, this is going to be a very big part of their lives moving forward. And that's what we're going to talk about today.
This panel has been structured around three wonderful individuals from three wonderful institutions doing some pretty courageous experimental work with the application of our generative AI. They're gonna share their case studies. They're gonna share their their stories. They're gonna share what's worked and what hasn't worked. And we hope that will both inspire and challenge everybody in the room to go back and reflect on what are you doing in the area of AI? Are you moving fast enough? Do you have the right guardrails in place? And are you focused on the right things? So that's the goal.
That's the goal. So we're gonna dive in. I'm going to let Sarah, Leon and Jacqueline sort of introduce themselves to you, as as we move through the the first questions. And, we won't be taking questions from the audience, but we will all be available for the rest of the afternoon if you wanna come and find us over coffee and snacks, to pick our brains or to find out any anything more. So sit back, relax.
You're in for a treat, and away we go. So you ready panel? Alright. Good. So to kick things off, as I mentioned, I want you just to sort of briefly introduce yourselves and maybe, at a headline level to to begin with. Give us a little bit of a sense about how you're ensuring AI is being used as a force of good for your for your institutions.
So, Sarah, do you mind going first? Sure. Thank you. You're welcome. Martin. Well, hello everybody.
First of all, I wanted to thank Instructure for letting me have part on this panel of amazing experts. I'm sure we're all going to enjoy it. My name is Sara Garcia. I'm a graphic designer and, I have recently specialized in the use of artificial intelligence applied to design and creativity. And now I am working, for ISTE which is a digital business school based in Madrid, Barcelona, and Mexico.
And addressing your question, Martin, so how are we applying the artificial intelligence? I need to go back in time a little bit to fifteen years ago, that's when EASTI was founded. And back then, there was a huge technological revolution going on. The eruption of internet, the digitalization of business and jobs and life itself at the end. And so ISTE was born with a mission to help students and to help partners to master this new technology so they will be able to adapt and to survive in this new reality. So that rings a bell, right? That's very similar to what we're looking at right now because artificial intelligence is indeed a technological revolution, so of course, it's needed to step up and help their students and their partners master these skills so they can once again adapt to the new reality.
But since this is, environment of continuous change, what we need first is to learn ourselves. So the main goal right now in our institution is to, first learn ourselves about AI and how it can be, rightfully applied, and then help our staff and our teachers and our students. I don't know if it's something Keep keep going. It might just Okay. Okay.
To to adapt to adapt to this reality that we're living and to see, and that's the most important key, to see AI as not as an end, but as a means. Wonderful. Now you mentioned as part of that, it sort of has to start with us. Sure. We have to understand it first.
What's the one thing you've changed in your life to better understand how to use generative AI? Thank you. I'm double microphone now. You you are. Well, good good redundancy. What what have you done differently to understand it? Well, I have, or I'm trying to integrate the use of AI in my, workflow.
And for that, I don't mean that you should use AI for everything because, in fact, you shouldn't. Please don't. But, when you integrate artificial intelligence as part of your own workflow and you start seeing it as an assistant rather than as a tool, your way of work change. And maybe I can elaborate on that Perfect. Further on.
But Yeah. At the end of the day, the the key is that I am trying to use it not as an extra Sure. But as part of it. Very good. Thank you.
Okay, Leon, your turn. Introduction and a headline. So my name is Leon, von Boekhoofst. I work at Fondis ISD, in the Netherlands. So we have to do with AI from a student perspective, our own perspective, societal perspective because we do our learning so always within context of society.
And Sarah, you are absolutely right, it's not a tool, it's actually something you need to relate to and we need to learn those new skills and that's what I'm doing quite a lot of research on at this point by actually doing continuous improvement and building little proof of concept and seeing what works and what not works. And a lot of stuff works, so we are actually very excited about that. How do you have the courage to stop the stuff that doesn't work? We're not typically very good at that Yeah. In education. How do you stop the things that aren't going so well? Yeah.
You need to also stay ahead of that curve. Right? Within AI, it's even harder to stop because it can always be better or if you can think it will be better. So you need to have a very strong focus of what you wanna wanna get at. Mhmm. And in my case, it's actually empowering students, but also teachers to learn from those tools instead of only using it to make our work easier to make us as professionals better.
And that's a very interesting field of study. Yeah. I think that's a really great answer too. You know, one of my favorite quotes is from Lewis Carroll, if you don't know where you're going, every road will get you there. So intentionality, being clear about what it is you want to achieve and measuring against that actually helps you make that decision.
Do we continue or do we pull back? Thank you. Alright, Jacqueline, your turn. Alright. I'm Jacqueline Gasserbeck from University of St. Gallen in Switzerland.
I'm, as you can see, not an old white bearded person and I'm not even a techie. I'm not a math mastermind at all. I'm a lawyer. So why am I on this panel? Just because I'm a very, very curious person and, I love tech, I love to play around, I love to experiment, and I think that helps a lot when you have to deal with AI. So they gave me this little project like AI in teaching and learning.
But I think you were asking for a headline, so the headline for me would probably be what they are calling me. I'm the AI cheerleader at my university. So that's also something you need to do. You need to get people, students, faculty engaged to talk to each other, to experiment, to find out what's good for them and what's not, when not to use it and when to use it, how to fail and stand up again and try again. So I I would say that's my job.
Oh, that's wonderful, Jacqueline. I've run two very large universities and I always went to look for my cheerleaders, the people that I call the enlightened willing. And they sort of got out ahead and then there were the fence sitters who would watch to see what happened and that's typically how you you move them along. So good for you. And actually, I don't think it's strange at all that you're on this panel as a lawyer because it's actually the legal profession that's been disrupted fundamentally by generative AI.
I'm one of the first to be fundamentally disrupted. So I'm not not surprised at all, and you're very, very welcome to to be here. Alright. Thank you for your headlines and your introductions. We're gonna let you go a little bit deeper now on the projects that you're so proud of.
So can you bring to life for us a project that that you're working on for your institution that you are really proud of and really help us understand why. Why have you chosen this project to talk to all of these wonderful people about today? What is it that gets you really excited about it? And Leon, I'm gonna go to you first this time. Well, I think empowering is very important. We often tend to give our students exactly what they need because we think what they need and I think it's more useful to them to learn how they can build the tools themselves to do whatever they want with the tools. And that's a tool I built, I built a double feedback loop tool where the student actually needs to be, to take ownership of the feedback they get from our cultures.
And the tool actually helps the students to figure out if they really understood what the teacher said. But the other side is also important. So professionalization and lifelong learning also means we as teachers become students again and we need to learn from our work as well. We are experts, of course, but we can always learn. So if I have a feedback conversation with a student, the student has some, advantage of using AI, but me as well.
I get feedback on my feedback. Mhmm. And I have some blind spots, I can tell you that. So there's actually a double loop that's going on there and you can go way further with this idea, right? It's always interaction and learning from that interaction. And you can use Jet AI especially very well-to-do that.
So I'm very proud of that part of experiments we do all the time in civics nowadays. No. Thank you. So help me understand it, Rit. So I'm one of your students.
How will I interact You're welcome. Oh, that would I'd love to be actually. How would I interact with the with I'll call it a tool, but whatever you you call it, how would it be exposed to me? How would I use it? So this we we get a recording of our feedback conversation. Yes. There will be some transcription.
And the tool is actually designed that the student goes to Canvas and we have a tool there from the beautiful people of Dream. It's called, Feedbills. And they write down what they got from the feedback session. The tool tries to figure out that he understands everything, and can give me also a sign. This student, he forgot about this, the past three, feedback sessions, for instance, so I can notice that.
And I get a report of my way of giving feedback and where I should improve maybe or what I did correctly as well over a longer context. So more feedback sessions, getting more rich and rich in-depth relation. And I want I'm just thrilled that you chose feedback because in all my time as an educator, when I've talked to our students about what they value the most and what they would like more of, it's feedback comes up in the top three every every time. Have any of your teachers been offended by the feedback they've got back about their feedback? That's a lot of No. Not at all.
Not at all. It's it's wonderful to see your blind spots. Right? There are blind spots, so you don't see them. And you always think it's a one on one. Right? So the student won't give you that feedback most of the time, but the tool can.
Yeah. And the tool knows the tool knows nothing, but it really knows our educational vision, which is also inside of there. And the meaningful conversation is part of open learning. I think it's wonderful. And by the way, I can speak from experience, not everybody enjoys feedback.
I have three daughters when I give them feedback. It's not always welcomed. Trust me. Okay. Thank you very much, Leon.
Jacqueline, what's the project you're really proud of? Yeah. When we prepared for the talk, I had this one project in mind, but then, yeah, now I have two in mind so I will talk about this. Inspirational. So the one project I'm actually all excited about right now is our law prompt a thon that is going to happen next week. So it's like a hackathon but with prompts.
So we bring together students and faculty and practitioners like lawyers and they have to solve challenges like law challenges, one from the court, one from practice, and it's all meant to be a big happening and they can experiment and talk to each other and find out, find the best model, find the best prompt and then compete against each other. The other one that was initially there is very practical. It goes, it works very well with Canvas. So we did a little workshop with faculty how they could make their own rubrics, Gen AI generated rubrics that they could easily implement into Canvas and that was also very well received by faculty because it was helping them in their daily work. And so that makes it easier for me if I can sell something that is really useful for them.
It's not just prompt engineering. Nobody wants to talk about prompt engineering anymore. So you need a purpose, you need a use case, you need to make it fun in the best place. They're two great, great, examples that you've shared. On the prompting, what I'm finding as well more and more is it's giving the Gen AI the context before you even give the prompt.
Exactly. So one of my favorite techniques now is to ask it to slow down, take its time because this is really important to me and I want its best effort. And according to the literature that I've reviewed, if you do that, you can up the quality by at least fifteen percent of what comes back. And and so I always take the time to give it as much context of what it is that I'm working on. And and one of my favorites now is when I start off, I say, hi, it's Martin.
You remember who I am, don't you? And it comes back with an ever richer summary of me each time I do it. And of course, I'm doing that because I want to anchor it immediately in all that it knows about me. That's that's really that's really super. And I agree with you. Prompt engineering had a shelf life of about five minutes.
There's a lot of disappointed people with their LinkedIn profiles that are updating them right now, but, because it really didn't didn't last, very, very long. So thank you for those, those examples. And your second one, which I think we are all very aware of in this audience, is that there's a common characteristic of teachers K through twelve, primary, secondary, tertiary education everywhere in the world, and that is they are time poor. I don't think I've ever met a teacher that didn't want to do the best for their students they could, but I met most teachers that tell me they'd love to do more, but they just don't have any time. So what you talked about in terms of being able the rubrics and being able to automate that, I think is a wonderful example.
And if you look at most of the early use cases from Instructure with Canvas, you'll notice a lot of them are being designed to help save time of people to unlock that potential into other areas. So thank you. Alright, Sarah. What's your favorite? K. In my case, let me take your question and turn it around a little bit because rather than talking about the project that I have done and that I am proud of, I would like to talk about something that I I am willing to work on and we are currently working on at ISTE, and it's very related to what, Jacqueline and Leon said and also, what you have pointed out right now.
It's about this, saving time and this getting feedback and this concrete things that, we are getting with AI is what I call integrating it in your workflow. So, when I when when the goal is to educate people to know how and when to use AI, and that's our goal, I'm not talking about knowing one hundred AI tools that can be used for this and that and these kind of things, but, to really, working with it in all the in in every aspect of your of your work that can be improved. And when you do that, you receive two wonderful things. The first one is time, as you were saying. Yeah.
You save time, of course, because something that may took you hours to complete, now with a few clicks, you can have it. And the second one is, that you discover an a new way of working. So on one hand, when you are saving time as an educator, as an instruction designer, or whatever your job position is, the real question is, what are you going to do with this time that you're saving? And related to what you were saying before, I think one right answer will be to innovate, to take this time that you're saving and effectively improve all the things that you always wanted to improve about your classroom or your institution or your job, but you couldn't because of bureaucracy and paperwork and this kind of thing. So this is something that you can take this time and invest it in your vocation because this profession is vocational. So this for me is something that maybe was not impossible before but truly harder than than now And that's amazing.
And the other thing is that, for example, let me give you an exal an example. If you're, using AI to do brainstorming, you, are having a conversation first with yourself because as you said, you need to write down Mhmm. The context and you need to structure your thoughts in order to get what you're looking for. And then you are having a conversation with a machine that has access to millions and millions of data that you by yourself couldn't reach. So that makes that your train of thought by iterating and being patient with the machine, takes you to places new new places.
And the combination of these two things, the time that you're saving and these new ideas that, gets to use with the AI, is giving you the opportunity to work on what you do love most about your your job. And a key concept that I would like to highlight because I think it's very, very important is that in my experience and the ones around me, the the experience that I saw, When you work with AI on your own workflow and at your own path, the best ideas are not the ones that were given to you by the AI, but the ones that were inspired by the by the AI and then developed by human experience and human creativity. So that's why I like to say that, AI sparks creativity. But, in fact, it's a human work to take this spark and make the fire. Yeah.
I think a wonderful point. In fact, in some of my speaking I talk about is generative AI sort of drives us all back to mediocrity and sort of takes the laggards and makes them better and takes the supposed experts and makes them not quite as expert as before. What we're actually left with as our advantage is our enduring human capabilities, those things that are uniquely ours that we then get to bring to the forefront. And I think what you've described is a wonderful way of bringing that, bringing that to life. Two of my favourite questions I like to ask are, what haven't I asked you in this conversation that I should have, is one that always comes back with things that get me thinking, in a different way.
Or is there another way that I could have approached this exercise instead of the way that I've approached it? And in both cases, what I typically get is my brain gets triggered to think in different ways. Now the machine hasn't told me necessarily how to be innovative and think differently, but it's sort of, as you said, flipped a switch for me to go, oh, I actually didn't think about it that way. So I think that that's really super. Jacqueline, a topic that I'm sure is on all of our minds because we're an education audience, there's clearly a lot of responsibility in using generative AI in contexts like education and health, and elsewhere. It's like that, great Spider Man quote, you know, with great power comes great responsibility.
What what are you doing to ensure that AI is used ethically and responsibly in your institution? Yeah. What I usually try to explain to students or faculty that for me AI is a little bit like a Swiss army knife. So you can use it as a mom to carve beautiful animal creatures out of carrots or you can fix the bike on the go or you can use it to cut as a seal, like a safety seal or even to hurt someone. So it's not the Swiss army knife that is good or bad, it's what we do with it. So it's the same with AI.
We have to learn how to responsibly use it just like we learn small children to use a knife. I mean there is nothing different and we have all these dark and light sides inside of us and every society has that and every person in power has that, and we try to find our way and that's what I try to tell our teachers. It's not, it's not there's no help in just saying this is a bad thing. No, it's not. The bad behavior makes it a bad thing and not the thing itself.
I think that's a very important starting point and then of course we have all kinds of difficulties with the tool. Of course we have to talk about academic integrity, so we have to talk about the responsible use of the tool. I would still call it a tool although it's like more like a companion as you said, but nevertheless it's a question of talking about it, know a lot about it, but definitely not try to shy away from it because that happens as well. And did you end up developing a policy suite or a set of guidelines with you? Yeah, we had to. How long did that take? We reacted like super early.
So we had the first guideline in place. I would say it was like January, twenty, like November twenty, that when Gen AI emerged and in January afterwards we had it we had the first policy in place where we said you have to declare the use of AI for our students. And then it was kind of unclear what declare meant. So three months later we had a new policy that said you have to have all the prompts submitted that you used and the student said come on. That's a good one.
That's brutal. It it didn't work at all. So we came back, we took a step back, and now we're more or less finding our way with how to cite AI, although it's technically not a citation because it's not an author. But still there is so much, evolving and we just try to stay agile and flexible and react towards what's popping up. Yep.
And I, again, it's a wonderful example of how it's challenging the way we've typically tried to regulate inside institutions because it's changing so quickly. We've got to be prepared to be agile in our policies and practices as well. Sarah, your turn next. Ethical and responsible usage. What what are you doing to make sure it's used properly? Well, in our case, it's a little bit special because AI is part of the curriculum in at SD.
But when I personally think about the ethical use of AI in the context of a classroom, I automatically think of students using AI to directly cheat in exams or assignments, and there are some big questions that assault me. Like, if we consider using AI in this assignment to be cheating, for example, what are we really trying to teach the students with these assignments? What was the purpose of these assignments? And if the answer is yes, I do consider that using AI to complete this assignment is cheating, then maybe what we need to do is change the question and redesign the evaluation parameters so that the use of AI is part of the learning process. Because if we don't, I think we're going to hit a wall repeatedly.
I think we need to go one step further and assume that our students are using AI, because in fact, spoiler, they do. And yeah, we need to do something about it. And I do think that the responsibility is on us as educators, and as part of the educational institution, to redesign. For example, and I don't know what you think, but take the most difficult example: the purpose of this assignment was for the student to memorize a bunch of data.
And that's tricky, because no one can beat AI on data. But maybe we can redesign this so that we are also promoting research and critical thinking and innovation, so that at the end we have a more appealing way for this student to assimilate the information. So I don't know, maybe it's a little radical in some cases, but I do think this is such a powerful revolution, and this technology is going to change so many things about how we work and how we learn, that we need to work with it instead of against it. No, I think that's a fantastic answer, and I remember the same debates going on when the browser first went mainstream.
You mean they're not gonna have to walk through stacks of books to find the answers? No. But that'll destroy learning as we know it. Well, we know how that movie played out. Leon, ethics and responsibility. Well, first of all, we were also, just as you guys, very early with our guidelines.
Right? But after that, I noticed something different. So we have a school of about forty thousand students and a lot of different institutes. And all the guidelines from the different institutes always tell you, you can use generative AI and our superpowers are not affected by it. So it's a bit like they don't want to see that it actually has an impact on their institute as well. We at Open Learning created an AI act, more of a pamphlet really, that says you can use whatever tooling you want to prove that you are competent in these learning outcomes, but we will assess you through meaningful conversations.
Yeah. Because I want to know what you know, and I don't want a product that claims you know something when you could have generated it. Which is fine, generation is fine, but I want to talk to you to know for sure, and that works pretty well. I'm also from an ethical AI background, so with my students I try to design stuff that is deliberately not that ethical, just to, how do you say it, get other researchers to think about the ethical side because they see it.
So it's a little bit of a punk situation going on there. But it's a very important part: we think very, very long and hard about data and privacy, and about the fact that the data is not for the school; it's for the student and for the teacher. It's a long way to go, for sure. I'm so glad you mentioned that, though, around the agency of the student, and, you know, I'm just a firm believer that in anything we do that involves personal data, we have full transparency and informed consent, and we never lose sight of the fact that it's their data, not ours. And in what we do with it, we need to be respectful, ethical, and transparent in the way that we go about doing it.
Interestingly though, when you look at the history, certainly in the space I've spent more time in, higher education, when you've been upfront with students about what you want to do and why and have given them informed consent, well over ninety percent normally say absolutely, because it makes sense to them and they like being treated as an actor with agency in the process. I also love the fact that all of you have talked heavily about how we can't hold it back just because it's convenient for us to keep assessing the way we've assessed historically. We have to be prepared to assess in different ways, because that's the way the world is going to judge them as graduates. So thank you for bringing that out. We've had a couple of sessions today on assessment, and I think generative AI creates incredible opportunities to assess in more authentic and meaningful ways, but it certainly is going to be used by some as a way of not progressing, because they will use fear as a way of holding back innovation.
So I think that's terrific. So we have four minutes and eight seconds left, and I have my favorite question that I like to ask at the end of every panel. If you had a magic wand and you could change one thing about how generative AI is being rolled out by technology vendors like Instructure and others that are exhibiting out here today, what would it be? Magic wand, one thing, advice to a lot of our colleagues here today on the vendor side. Sarah, you get to go first. Leon, you get to go second.
And Jacqueline, you get to have the last say, which lawyers always love to have. Sarah. Okay. I think the first answer is clearly data privacy, and how we are going to be informed about this. Mhmm.
How can we be sure where our data goes? And I have seen some sessions earlier that were talking about that, and I found it very interesting. Also, taking into account our European legislation with the GDPR, that's even more tricky. And taking into account that the large models the majority of the vendor tools are based on are being developed by huge external companies, we really need clarification on that. And I think the three of us may agree on this point.
Yeah. That's wonderful. So the whole absolute understanding of where our data is being taken once we release it into these models. Alright. Well, magic wand time.
Well, we've talked about a lot of stuff already, but I'll bring it back to empowering people to use those systems and to actually get to the most creative part of themselves; that is one of the things I wish for everybody. And also it needs to be private, and it needs to be that my data is my data, as we talked about before. So pushing on the creativity just a little bit, Leon. Are you saying that the way it's exposed to the student or the teacher should be done in a way that allows them to really maximize their own value from it, so it's as natural as it can be for them based on who they are?
That's what personalized learning should be. Right? Absolutely. And I think those tools are, yeah, just waiting for us to use them to give more personalized options. Thank you. Sorry to push you a little bit.
Last word, Jacqueline. Yeah. My magic wand. It came to my mind this morning. The one attribute that stuck with me is that I'm old, so I can remember KITT, you know, the car from Knight Rider.
So what I really want, if I'm looking at the Canvas people, is that I could just talk to my LMS and it would know immediately what I want from it and what it needs to do. So maybe not so far in the future we'll have this. I mean, I already love working with the ChatGPT app, where I can just take pictures and talk to it. So here we go. That's my magic wand.
Yeah. I think they're all wonderful, by the way. And that natural human interface is without doubt the next frontier. I joke that my grandchildren, if I have them one day, will laugh at this. They'll look at it and say, you actually had to touch it and move your finger to get it to do something?
And they'll be staring at a keyboard like Anne Marie was staring at a VHS cassette. Like, what the hell is this thing that I'm looking at? And why would I ever dream of using it? And for us, it feels a little far-fetched now, but I guarantee you, fast forward five or six years, and those ways of interfacing with technology will feel like something we should have let go of a very long time ago. Can I just thank the three of you for your preparation for today, and for the way that you are courageously going out and experimenting, but doing it with the guardrails and the awareness to know, as best we can, when it shouldn't be used, when we should carry on and keep going, and when we should pull back because it's not turning out the way that we want? But more than anything, what you've demonstrated to me, and I'm sure to the audience, is that everything you're doing is being driven by a genuine concern to maximize the outcomes for our students and our people. And I think, as role models, you've been inspiring for what we all need to do, which is to go out there, get started, do it the right way, and be courageous, because that's the world that we now live in.