How Assessment Practices are Changing in the Age of Generative AI Tools


With the rapid development of AI writing tools, the education community had to quickly learn about ChatGPT and navigate its use in the classroom. In this session, we'll examine the impact of AI writing on assessment, from reviewing academic integrity policies and practices to implementing a new approach to writing assignments.

Video Transcript
My name is Gretchen Hansen. I work at Turnitin as director of product marketing, and I have been in education for more than twenty years, in various roles, starting as a librarian at the University of Maryland. So I have a lot of interest and expertise, and, well, "expertise," let's put that in quotes, in assessment and education, and in how we as educators can continue to grapple with the challenges that we face.

And I think we can all agree that in the last three-ish years, we've had a lot of challenges come down the pike, different from anything we've really experienced before. So, lots of change to embrace. Alright. Today's session is called How Assessment Practices Are Changing in the Age of Generative AI.

So I want to start us off with a little quiz. Every year, Tyton Partners does a survey of educators and students to gauge sentiment: what's coming, what's top of mind going into the next year? So, my question: in twenty twenty-three, this year, the top concern for educators was how to effectively and efficiently deliver feedback to students. True or false? False, you are right. It is false.

Actually, in twenty twenty-two, that was the top concern. We were very concerned with getting students feedback, trying to close the gap that COVID in a lot of ways introduced, through distance, through everything being online. There was a lot of concern about how we communicate with students and keep getting them the feedback they need, in ways that get us the learning outcomes we need. But this year things have changed, and the top concern in twenty twenty-three is how to prevent student cheating. Interestingly, in twenty twenty-two, when they did that survey, how to prevent student cheating was the tenth concern.

So it grew; it climbed the charts rapidly. And what do you think precipitated that? November thirtieth, twenty twenty-two. This is a day marked in infamy, at Turnitin particularly, but I think all around the education world, because that was the day ChatGPT launched. And I think we all got on the roller coaster that day.

And I think the reactions are really varied. I love this photo because it really sums up how we as educators feel: some of us are really excited, but some of us are really scared, and that sentiment can go back and forth multiple times in a day, in a conversation, in reading an email. It's all over the map.

And I think we're all kind of feeling these feelings. So, right off the bat, the use of ChatGPT and AI was just prolific. There was a study done in January, shortly after it was released, that said ninety percent of students were aware of it. Eighty-nine percent admitted to using ChatGPT for homework, fifty-three percent said they used it to write an essay, and forty-eight percent said they would use it for a quiz or a test.

Pretty staggering. And that being January, it has probably only gone up since then. Educator awareness, however, lagged in those early stages; it's probably gone up by now. But eighty-two percent of professors were aware of it.

Seventy-two percent were concerned about the impact on cheating. Thirty-four percent of all educators wanted to ban ChatGPT. And twenty-one percent were like: hey, maybe we could use it in our classrooms. So right away, varied reactions. And we saw it in the press, right? Widespread panic.

Everyone freaking out. There wasn't a day when a news item didn't come out talking about the challenges that generative AI was bringing into our education spaces. But then things started to change. People started to have a more positive outlook, thinking about alternate ways that AI could impact and enable learning, and really equalize learning for students by providing better access and more tailored experiences.

But still, the fact remains that when educators were surveyed again in twenty twenty-three, in an EDUCAUSE poll, seventy-five percent still felt that academic integrity was at risk because of AI. And in the Tyton Partners Time for Class survey for twenty twenty-three, forty-six percent of students reported that they intended to use AI no matter what their institution said; they were just going to use it. So the question is: are we as faculty prepared? And that's really the driving focus of what Turnitin does in our product development. Our whole history has been to provide tools that help faculty be prepared for academic misconduct issues in their classrooms, and to keep innovating in order to support them.

So, way back in nineteen ninety-eight, we started by tackling copy-and-paste plagiarism. That evolved into developing solutions for collusion, for research misconduct, for contract cheating or essay mills, even for code plagiarism. And now we are tackling AI writing and how it is going to impact education and misconduct in the future. Turnitin is not new to AI; this didn't descend on us in November with us just starting from there. We have been using AI in our products and in our research capabilities for a long time, starting in twenty fourteen.

That's actually when we started developing tools to help identify contract cheating or essay mills, using forensic analysis, some machine learning, some AI tools, to identify the characteristics of contract-cheated work. Then, when OpenAI released the first version of GPT-3, our AI department immediately began researching it, looking at how we could take that model and adapt it to the education market and student writing. In twenty twenty-one, our focus was really on developing tools to help with AI paraphrasing. At that point in time, Quillbot and other text spinners were really prevalent, and we were hearing a lot of concern from instructors: how do we find paraphrasing? How do we differentiate between this really important skill and this cheating tool?

And so our research and development was really focused on paraphrasing. But then in twenty twenty-two (let me make sure I'm on the right one), we were still developing that, but we were also continuing to focus on making some of those tools, especially the contract cheating detection, available for folks. Alright. So now we get back to November. Or actually, sorry.

Twenty twenty-two: ChatGPT releases. We all had to shift gears and say, uh-oh, wait, not paraphrasing; now we're going to do generative AI. So in February, we felt comfortable announcing that we would be able to release an AI detection tool. In March, right before we released it, OpenAI updated their model to GPT-4.

And then in early April, we went ahead and released our AI detector in a preview mode for all instructors, because we were hearing very loudly from customers that they really needed to be able to address the concern. So it came out the door. Alright. As I mentioned, Turnitin is not new to AI; we use it in lots of different ways.

The first one here, AI writing detection, is what I'm going to spend most of the time on, but I want to touch on some of the other things we do within our products that utilize AI, predictive AI. These are ways to streamline workflows, optimize instructor work, and make it easier for you to do the things you need to do: things like answer grouping, where you can group like answers together and grade them at once; handwriting recognition, where you can OCR formulas and be able to read and work with them; and bubble sheet grading, which is pretty self-explanatory, being able to automatically grade bubble sheets. So Gradescope is one of the products.

This is the science product that really helps with assessments, and it uses a lot of AI to power it, to streamline grading and provide fair, fast feedback to students really quickly. One of the things it does: as handwritten work is submitted, we will split those submissions automatically using AI, which is very handy, and then match them to your roster. So as you're importing your roster from Canvas, we automatically match those submissions so you don't have to do that work manually.

Also, using OCR and other techniques, we're able to group like answers together. So you can grade each question one by one, identifying where people answered the same way and where there are differences, and you can dynamically change your grading to account for an answer you weren't necessarily expecting. And then bubble sheet scoring: being able to do bubble sheet scoring all online, without the need for a big testing center or huge Scantron machines. You upload your key, and those bubble sheets are scored immediately.
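Gradescope's actual pipeline isn't public, so purely as an illustration of the idea behind answer grouping, here is a minimal sketch: normalize each short answer, then bucket identical normalized answers so each group can be graded once. A real system would add OCR and fuzzy matching for near-duplicates; exact matching after normalization is just the simplest version of the idea.

```python
# Minimal sketch of answer grouping (illustrative only, not Gradescope's
# actual pipeline): normalize short free-text answers and bucket identical
# ones so each group can be graded once.
from collections import defaultdict

def normalize(answer: str) -> str:
    """Lowercase, drop punctuation, collapse whitespace."""
    cleaned = "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

def group_answers(answers: dict[str, str]) -> dict[str, list[str]]:
    """Map each normalized answer to the students who gave it."""
    groups: dict[str, list[str]] = defaultdict(list)
    for student, answer in answers.items():
        groups[normalize(answer)].append(student)
    return dict(groups)

submissions = {
    "alice": "The mitochondria is the powerhouse of the cell.",
    "bob":   "the mitochondria is the powerhouse of the cell",
    "carol": "ATP synthesis happens in the mitochondria.",
}
for answer, students in group_answers(submissions).items():
    print(f"{len(students)} student(s) answered: {answer!r}")
```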

Alright. So now, generative writing. Let me talk to you a little bit about how we approached AI in the writing space for essays. As I mentioned, we were initially focused on paraphrase detection. This is a really complicated problem, because paraphrasing is a skill that students need to have.

They need to be able to take information, distill it down, put it in their own words, and paraphrase it. So it's an important skill to attain. But it was also something we were seeing a lot of misuse of, through sites like Quillbot. So we were initially looking at how to address this. These are some of the early designs, and, not to get you too hyped up, but now that we have tackled the thing that became the next emergency, this is going to be coming out in the next couple of months.

So we were back on track with paraphrase detection. But AI writing became a top concern, and so we had to shift gears a little to address AI writing: take that ChatGPT-style model and apply real-life classroom activity to it, in order to identify where we were seeing AI-generated work. For all of this work, the thing that's really key, and something we're taking really seriously, is being able to rapidly innovate in order to help you protect academic integrity at your institution. So we want to make sure we're putting in safeguards to identify when there are integrity issues.

We are also specializing it for student writing. We have twenty years of student papers that we can use as a reference in aggregate; no student data is at risk here. We can identify what kind of writing is typical for students, so when you lay that up against a large language model, you're able to find very tailored results, targeted at student writing.
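Turnitin hasn't published the detector's internals, so treat this as a hedged sketch of one common pattern from the research literature, not the product's method: score each sentence with some classifier (a hypothetical stub below), then report what share of the document's words fall in flagged sentences. That is one way a document-level "percent AI" figure can be produced.

```python
# Sketch of document-level AI-percentage aggregation (illustrative only;
# Turnitin's real detector is proprietary). `classify_sentence` is a
# hypothetical stand-in for a trained per-sentence classifier.
import re

def classify_sentence(sentence: str) -> float:
    """Hypothetical model returning P(AI-generated) in [0, 1].
    This stub flags nothing; a real system would use a trained classifier."""
    return 0.0

def ai_writing_percentage(document: str, threshold: float = 0.5) -> float:
    """Share of the document's words that sit in sentences flagged as AI."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", document.strip()) if s]
    total_words = flagged_words = 0
    for sentence in sentences:
        words = len(sentence.split())
        total_words += words
        if classify_sentence(sentence) >= threshold:
            flagged_words += words
    return 100.0 * flagged_words / total_words if total_words else 0.0

print(ai_writing_percentage("One sentence here. And another one!"))  # 0.0
```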

That specialization is really important. The other thing that was really key, especially in the release of our AI detector, was collaboration with you all, with educators. We meet with them constantly, going over reviews: here's the display we're looking at, here's the layout; do you think we should release it this way? What's the appropriate approach? How do we respond to these types of questions? What do you want? There's one story about some designs where the percentage was in red.

And they were like: no, no, no, you can't have it in red, because red means bad, and it's not necessarily bad. And we were like: good point. So, lots of great feedback from educators to help us make the best solution, and I think that will keep happening as we continue to refine going forward.

And then, most importantly, we wanted to make sure it was integrated into your workflow. So it's now a button within all of the similarity reports you use today, which opens a new window so you can review it separately. It's available for everyone today. At the end of the year, we'll be removing it from all products and putting it just into our originality products, because not everybody wants it.

So we're just putting it in for those who do. Alright. This is what I mentioned: it now lives in your normal similarity report. There is a little button here that indicates the percentage of the paper detected as AI writing. You click on that button, and it takes you to a report where you see that same student paper, but with only the portions identified as AI writing highlighted.

So you can review those separately; you don't get them mixed up with your similarity report. Since April, when we launched, ninety-eight percent of Turnitin institutions have gone ahead and enabled the workflow, so they're able to see that detector. Also since April, we have reviewed sixty-five million papers.

Every submission that comes in gets reviewed. Of those sixty-five million, two point one million, which is approximately three percent, were flagged as having more than eighty percent AI-written content. That is actually in line with what we see for other types of cheating, or potential cheating; contract cheating, for example, hits those flags on between three and five percent of submissions. Roughly six million of the sixty-five million were flagged as having at least twenty percent. So a very small amount.

And there can be places where it's not necessarily a match; it could be a false positive, where text could be deemed AI writing but isn't necessarily. So you would want to review those matches in a little finer detail. Alright. So with each of these submissions, what do you do? How do you evaluate them? How is this going to enter your daily life? I think the first thing to consider is how AI writing sits alongside the other types of potential academic misconduct that might be happening at your institution.

And then, to use the tools that you have within Turnitin to do those checks. The first thing to highlight is that we have an enormous database of content that every submission is compared against. With every one of those sixty-five million papers, as well as every other paper that comes in, we're comparing against forty-seven billion web pages and a hundred and ninety million published works, meaning scholarly publications,

and one point nine billion student papers, and we're checking those for text similarity. The quality of our published works is really high; ninety-seven percent of those are from the top journals, the cream of the crop. We also have a partnership with the open access provider CORE,

which is basically like a telephone book for open access repositories, so we can go right to the best content. As I mentioned, we have been doing this for more than twenty years, so we have a huge repository to compare against, comprised of all kinds of content: books, journals, conference proceedings, preprints, patents, law, dissertations. Everything is in there.
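The matching engine itself is proprietary, but a classic textbook technique for finding overlapping text, assumed here purely for illustration, is word n-gram shingling scored with Jaccard similarity; a production system would layer hashing and indexing on top to scale to billions of documents.

```python
# Illustrative only: word n-gram "shingling" with Jaccard similarity, the
# classic approach to near-duplicate text detection. Not Turnitin's engine.
def shingles(text: str, n: int = 4) -> set[tuple[str, ...]]:
    """All overlapping n-word windows, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 4) -> float:
    """Shared shingles divided by total distinct shingles."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

source = "Assessment practices are changing in the age of generative AI tools."
student = "Assessment practices are changing in the age of online education."
print(f"similarity: {jaccard_similarity(source, student):.2f}")
```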

All of that breadth really gives you peace of mind that when you're looking for similar text, you are finding the matches. The other thing to help you is finding when text has been deliberately manipulated in order to circumvent the similarity check. Some of you may not know, though most of you probably do, that students will go to great lengths to get around Turnitin. Lots of times they will use very tiny white text. They will put in a lot of quoted material. They will even go so far as to install other keyboards and replace characters with lookalikes for Latin characters, or for whatever language you're writing in.

Saving it as an image in a PDF? Exactly. So there are lots of tricks that they know, and we have a tool here, this little flags panel, which you can see right there, which identifies those and puts them into a separate space so you can review them separately, because that's a different level of intent than just missing a citation within your essay.
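The flags panel's detection is Turnitin's own, but two of the tricks just mentioned are simple enough to sketch: Latin-lookalike characters swapped in from other scripts (homoglyphs), and suspiciously quote-heavy text. A hypothetical checker, illustrative only, might look like this:

```python
# Illustrative only; the flags panel's detection is Turnitin's own.
import unicodedata

# A few common Cyrillic lookalikes for Latin letters (far from exhaustive).
HOMOGLYPHS = {"а": "a", "е": "e", "о": "o", "р": "p", "с": "c", "х": "x"}

def flag_homoglyphs(text: str) -> list[str]:
    """Report non-Latin characters that visually mimic Latin letters."""
    return [
        f"{ch!r} ({unicodedata.name(ch)}) looks like {HOMOGLYPHS[ch]!r}"
        for ch in sorted(set(text))
        if ch in HOMOGLYPHS
    ]

def quote_ratio(text: str) -> float:
    """Rough share of characters sitting inside double quotes."""
    inside, quoted = False, 0
    for ch in text:
        if ch == '"':
            inside = not inside
        elif inside:
            quoted += 1
    return quoted / max(len(text), 1)

essay = 'The r\u0435sults "speak for themselves" here.'  # contains Cyrillic "е"
print(flag_homoglyphs(essay))
print(f"quoted share: {quote_ratio(essay):.0%}")
```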

And then, of course, AI writing. Together, all three of these things help give you a good picture of how the student is producing their work and how it's coming together. Because one of the things that's happened with AI writing is that we were relying on the essay, the writing of the essay, to tell us: yes, they have the skills, they're thinking critically, they are writing originally. And now, with AI, that's not necessarily the case.

We've got to go a little further. We have to know how they got there. We have to understand: how are they thinking critically? What are they doing? What went into the writing of that particular paragraph? All of these things can be clues to use in conversations with your students if anything trips your radar. Alright. So that's what you do with your submissions.

So what about the institution? How many of your institutions have it figured out? Exactly. I think this is a tough one, right? People are still trying to understand the technology, how to keep up with it, how it fits into their current academic integrity policies, where they need to make changes, what they're going to do. And this will probably take some time.

But I think it's wise for us to think about how we incorporate AI writing into our academic integrity policies, and we have a few suggestions for good ways to do this. The first thing, and this is true any time you want to create a culture of academic integrity at your institution, is that it needs to be a dialogue between all the stakeholders. It really doesn't work well when it's just top-down and, you know, off you go. You want to make sure that your administrators, your educators, your parents, and your students are aligned on the mission and the values that your institution holds.

And then it's important to make sure there's a shared understanding among them, so that when a student walks across campus, they're not experiencing a completely separate set of rules and guidelines. You want to be cohesive: vocabulary should be similar or the same, values should be consistent, and make sure you're offering forums for conversation so people can be part of the process. How many of you have students who participate on your honor boards? Making sure they're part of that process whenever there's an escalation can be really helpful for creating student advocates, creating parent advocates, and building up the culture of integrity you want, so that ethical students exit your institution.

So now, how does this impact your assessment design? Let's go back to the classroom; we'll let the administrators figure out how to incorporate this into policy. How do we do it in our classrooms? There are probably going to need to be some changes to the way we approach our assessment prompts and assignments. So, another quiz. We do a lot of research with instructors and educators, asking them how they feel about things, and one of the questions we asked recently was: which of the following is a way that educators are adjusting to widely available generative AI writing tools?

Are they (a) assessing with oral exams, (b) creating assignments tightly integrated with the course readings, (c) asking students to cite their use of AI writing tools, (d) asking students to critique AI writing outputs, or (e) all of the above? I hear a lot of E. That is correct. All of these things are now in play; all of them are viable ways to bring AI into your curriculum and into your assignments.

We wanted to dig a little deeper into the sentiments, because every instructor's plan is going to be a little different, so we talked with instructors about their core attitudes and approaches when they look at an upcoming course: how do they bring AI into it, and how do they feel about it? There are really two scales here. First, does this person reject or accept generative AI as a tool? They can really accept it or really reject it. And second, what type of reaction are they having: are they actively accepting or rejecting, or passively accepting or rejecting? As we talked with folks, we started to get some detail about how people were feeling about it.

And one thing that was really clear was that it didn't matter how you felt or what quadrant you landed in: everybody really wanted to know, are students using AI writing in my class? Then I can do whatever I want about that, or whatever my institution dictates I need to do, but knowing was really important to all of them. The second was to understand: are you going to make changes? What's on the board for you as far as making changes? We found that those who are passive, the philosopher and the pessimist, are really only going to make the changes they have to, whether they vehemently accept it or vehemently reject it. They have a core belief that the learning is in the students' agency, the students can kind of do what they want to do, and I'm just concerned about getting my coursework done and getting them going. So they'll probably make some small adjustments to align with explicit policies or institutional stances from the institution, the department, or the course. They might improve their assignments or their prompts, and they might tighten up their grading criteria.

And I think you can imagine that the pragmatist, and those who actively accept, are going to do a lot more; they're going to go all in. And then the last question was: how are students using generative AI writing tools? When instructors accept it, when they're like, hey, AI is here to stay, we've got to figure out how to handle this, they want to know: how are students doing it? What tools are they using, and are they using them in accordance with how I want them to? Are they learning, using it as a formative tool, versus just a way to quickly get their work done?

How much of what I see is the student, and how much is AI? They really wanted to understand that difference and that balance. How did the student get to the final response they submitted? This is a lot of "show your work": how did you get there? How did you bring these ideas and concepts together, and how is that represented in the assessment itself? That's a little different from what we were doing before with assessments, especially essay assessments. And then finally: to what extent does the submitted work reflect their learning? We've got to get to the point where we understand that they are getting those skills.

They're learning to be critical thinkers and original writers, because that's what we want them to do; that's the assessing we're trying to do with an essay. And now we may have to go a different way to reach that understanding that they are getting it, that they do understand it. A couple of interesting quotes came out of this research. One quote was: "AI is the lazy person's way out, but it also signals that a student is not ready for college-level work and that the student is flailing and lost."

"I have an optional extra credit assignment in which I provide a ChatGPT-generated response and ask students to decide whether it is good or bad and justify why." So, these two quotes: do you think they were from (a) two different people, (b) one person on two different occasions, (c) one person in the same interview, or (d) neither, they're both made up? What do you think? Oh, that's a good one. I hear two different people.

C. This is the same person in the same interview. She's on that roller coaster: one day it's the worst thing ever, but it's also amazing, I could do such great things with it.

She also said: "My personal opinion is that AI-generated responses have no place in my course, but I understand that students can use it in much the same way that they use Wikipedia, to outline an answer." So there's a whole spectrum here, and I think we just need to think about what that means in our courses. What are we trying to assess? We're not just trying to tick the boxes and say, yes, we did the same thing we've done for the last twenty years and these students are heading out the door completely ready to go. We're going to have to change our lens a little bit.

So, as you're doing that, we have some suggestions for how to develop your assessments and write prompts that will help transform your current curriculum and assignments. First, think about whether your assignments are original to your classroom, your department, your institution. Are they really relevant to what students are learning today? Do they require critical thinking or reasoning, and how are you going to measure that? Be very clear about what critical thinking and original thinking look like, and tell students what you're looking for. Don't make them guess; make it very clear.

Encourage and require student voice. If students aren't already using their own voice, that's a great way to keep them out of the AI world, because AI has no voice; it is very predictive. Student voice is important, and it's also what helps you see that they are thinking originally and then conveying that into the skill of writing originally.

Require your students to incorporate personal stories or authentic situations. Helping them personalize the work in their own lives will not only help their learning but also keep them out of that AI detection frame. Require a list of verifiable sources or citations. That is helpful because a claim with no citation behind it could be an indicator that it came from ChatGPT, so you want to make sure students are using and referring to verifiable sources.

And typically, as an instructor, you know the sources for your material. You understand where people go to get things, and you would recognize whether those sources are viable. And the last one: recommend or require students to include a reflection or a rationale for their approach to the prompt. If you do want them to use a prompt, have them provide a rationale; have them talk about why it makes sense to them and what they're hoping to get out of it. That can demonstrate the original thinking and creativity you want to foster in these students.

So, just a few suggestions for how you could modify what you have today in order to get to the true nugget of learning outcome you're seeking. Alright. The last piece is what to do for student success, and I would suggest that we start to inspect student writing with an eye toward formative experiences. We don't want to continue to be punitive.

That's not a positive space to learn in. You want to help students use these tools for good, and to know how to use them for good. So I want to show you an upcoming enhancement that I'm really excited about. You all know, and maybe love, maybe don't love, our similarity report. It is the thing we are known for, but also the thing we get the most questions about.

People are always asking: what's the right score? What's the right percentage? I don't know how to use this. My students are freaking out. My niece, who's a freshman in high school, just discovered that I work for Turnitin, and she lost her mind. She was like, you're banned from the family. So, it's bad news.

I mean, I get it. I know the pain. We have been hearing this for many years, and we are really excited to show you that we are revising and reimagining the similarity report in a way that will hopefully give you formative tools to use with your students, to give them that positive reinforcement, that positive feedback. Also, it looks prettier.

It's nicer. It's modern. It's innovative. We're going to be able to innovate much more quickly with this, so as new technologies come forward, we'll be able to incorporate them much faster than we could in the past, which is really helpful.

The second thing I wanted to show you is kind of small, but up here at the top we have the similarity report, the first area, where we see the result. AI writing will also be incorporated into this view, so it's all in one space. And then the flags, which I showed earlier, the kind of intentional cheating methods, are all right here, so you can navigate between them.

And if you have Feedback Studio, grading and feedback will also be added in there. So all of your components are going to be in one place to help you navigate through them. I think that's super exciting. Grades? I said grading and feedback.

Mhmm. Mhmm. You know it. Yeah. So you can do it through the Canvas plagiarism framework and SpeedGrader, or you can do it with LTI 1.3, which is the one we love the most.

So do that one; it's just much more flexible. It's better. Go do it. Yeah.

Sure. Yes. So, for institutions that do not have that in their contracts, would this be important? Yeah.

So this is the other great thing, for me personally, because I'm managing a lot of products that all look a little different. Everyone is going to have this same report experience for similarity, which is really nice, and it's helpful because you don't have to wait for features to come to your product. We are making it easier for us to update things for everybody, no matter what you subscribe to. So if you decide, you know what, we don't need Originality with AI yet, you still get some of the goodies.

And then if you want to add AI, it's not a disruptive change; you're not going to be changing the interface or the experience for your instructors down the road, so it's much more seamless to do. The next thing, and this is the kind of boring step: we've simplified the settings, making it easier to use, easier to narrow in and find the critical matches you want to take a look at. But the best thing is this part right here.

These are what we're calling match groups, or match categories. Right now, what you get with your similarity report is just a percentage and sources. So you can see this one had five percent, this one had four percent, this one had fifteen percent.

That's really all you know; you don't know the quality of the matches, so you kind of have to go look at each one of them. We're using AI and machine learning to categorize those matches and help you understand what's happening in the student's paper, so you can tell whether it's a skill gap or actual plagiarism. This is pretty small, you can't really see it, but the top one here is nineteen matches that are not cited or quoted.

That's where the truly uncited work goes, so you want to look at those first; these are the problem areas. Learn how to cite, learn how to quote. Yes.

Yep. So it'll narrow it all down. The second one here is missing quotations: where did they have a citation, but no quote aligned to it? The next one is the opposite of that, missing citations: where did they have a quote, but no citation? And then the last one, which is the good news story here, is the instances where they both cited and quoted properly.

So you can tell them they did it right: you did a good job, you're learning this, you're doing awesome. Yeah.
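Turnitin's real categorization is model-driven, but the four buckets just described reduce to a simple decision table over two signals per matched passage: whether it's quoted and whether it's cited. A toy sketch, assuming hypothetical has_quotes/has_citation signals extracted from the match analysis:

```python
# Illustrative only: Turnitin's match grouping uses its own AI/ML models.
# The four categories described above reduce to a decision table over two
# signals per matched passage: is it quoted, and is it cited?
from enum import Enum

class MatchCategory(Enum):
    NOT_CITED_OR_QUOTED = "not cited or quoted"  # review these first
    MISSING_QUOTATIONS = "missing quotations"    # cited, but no quote marks
    MISSING_CITATIONS = "missing citations"      # quoted, but no citation
    CITED_AND_QUOTED = "cited and quoted"        # the good news story

def categorize_match(has_quotes: bool, has_citation: bool) -> MatchCategory:
    if has_quotes and has_citation:
        return MatchCategory.CITED_AND_QUOTED
    if has_citation:
        return MatchCategory.MISSING_QUOTATIONS
    if has_quotes:
        return MatchCategory.MISSING_CITATIONS
    return MatchCategory.NOT_CITED_OR_QUOTED

# e.g. a matched passage wrapped in quotes with no citation nearby:
print(categorize_match(has_quotes=True, has_citation=False).value)
```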

So, potentially. The same love for Draft Coach? Yes, Draft Coach is awesome.

So, potentially. Potentially. Yeah. Exactly.

We don't have that on our roadmap yet, but this is sort of the fusion of all of these things. Getting real-time feedback to students in Draft Coach is definitely a priority, so some of the things that today aren't really useful for students in Feedback Studio will probably move over to Draft Coach. I'm really excited about that part. It's going to be great.

And then if you want to dig further, if you want to have a conversation with a student, like, hey, let's talk about the great job you did with these seven cited and quoted matches, you can go down to these source cards, and it will filter them to just that category. You can say: look, you did a great job here, here, and here, but let's learn how to add the quotation properly, or let's learn how to add the citation properly. You can really target your feedback to your students and make it much more meaningful, versus: hey guys, you've got to turn in less than a thirty percent similarity score. That's not useful.

Yes. Mhmm. So initially, it'll be just instructors, but eventually students, yes. Mhmm. Yep.

The other cool thing about this is that we are rolling it out to you all very gently. Very gently. I don't know the exact day, but yes, close. It depends on which product you use: if you have the grading and feedback components in Feedback Studio, we'll hold it back a little for you, but if you just have Similarity or Originality Check, it will come out much more quickly, because it's just the similarity report.

We aren't dinking around with your grading and feedback tools, but that's coming soon. The thing that's not on this screenshot is a little button right up here that lets you toggle back and forth between this view and the view you're used to. So you, as the instructor, can choose which one you want to look at, which one's important to you. And if you're like, I don't want to have to change for a while, you can take time to get used to it. But if you do want to jump in, you can do that right away.

Students will just have the classic view for now. We wanted to make sure instructors were really comfortable with it before we unleashed it on the populace. And then you guys were like, oh my gosh. So it'll come to students, but it will be a little bit. I'm really excited about this and the way it can help you think about academic integrity, make it a positive experience, and promote student success,

and not just, oh, I see AI writing; oh, I see similarity matches. We want to think about it more holistically. Basically: understand the student's work and how they're writing it, with an eye to where similarity or AI detection might factor into the conversation you want to have and the formative experience you want to create, and really use that to evaluate the critical thought going into the writing. If there is a situation where something gets flagged and the student says: oh, no.

No, no, I definitely wrote that; that was not AI. Then ask them.

Okay, great. Tell me how you thought about that. Just explain to me how that came together. I'd love to hear your thought process.

I'd love to chat a little more about it. It's a place for you to have a formative discussion, not a punitive punishment. Yeah. So, the system, where I use my own writing, can actually flag my own writing as AI?

Oh, yeah. Okay. I mean, that's going to happen, right? This technology is changing all the time. The models are constantly getting refined.

There's a lot of science that goes way above my head about the number of words and the level of confidence increasing depending on how many words there are and their placement relative to each other. So we can talk more about that. And if you have examples, we would love to use them, because they'd be great to continue to train on. But yeah, we're nine months in. We're trying to figure this out, and the landscape's going to change.

But yes, it will happen, which is why we have this little document here, discussion starters for tough conversations, since we're all kind of learning how this works and how to handle it. But I think it's really just about getting to that next level: being able to inject student voice (this is where my disco ball comes down) and just understanding how the learning is happening, because that's ultimately what we need to be able to quantify and understand as educators.

And if you could get that from an essay last year, the game has changed this year; we've got to look at it a different way. So we do have a bunch of resources for you, which I'll link with a QR code at the end: guidelines and suggestions on how to update your academic integrity policies to include AI, approaching a student regarding a potential AI misuse issue, and discussion starters for difficult conversations about AI,

as well as a lot more educator resources: how to use AI in your classroom, how to do all kinds of things with AI in your assessments. So check out the link, which I'll give you in a minute. We are very much thinking about this, writing about it, talking with you about it, and we're going to keep focusing on it to make things easier for you as we go into this next school year. And who knows where we'll be at this time next year? Fingers crossed nothing crazy happens. It should be good.

So I want to leave you with this last reminder. I really do love this image; I think it is just so indicative of how we all feel at this moment in time. It's so exciting and joyful and full of opportunity, but also deeply frightening at moments. And I think we're hopefully going to end up getting off the ride at the station, really glad and excited, having had fun, and wanting to do it again.

And that's kind of my hope for all of you, and I will leave you with that. Here are the resources, and we have time, I think, for a few questions. Thank you so much. Yeah.

What are you, I guess... Yeah. Well, we have lived this reality. We were modeled on GPT-3.5, ready to go with 3.5, and we had a target release date of April fourth. And then, like, March twenty-ninth, they released GPT-4.

And we were like, no! So we don't have insight into when these models are going to cycle up, but we do have a commitment to continue. What's going to be most important for us is to keep bringing in and adopting new large language models, but also to apply the educator lens, and the student-paper lens, as we train the new model, whatever that turns out to be.

And I don't think we can give any timelines, because it's going to be a work in progress, but there is a commitment to stay current. We don't want to stay on legacy GPT-3.5 forever, but we do need to make sure we understand what's happening as we bring new models in and commit to additional investment in keeping up with the Joneses. Yeah. Mhmm. Yes.

You said, what would be your suggestion? There's a whole blog post on this that has a lot of detail, and I'm going to send it to you, because I think that's probably the better reference point. But yeah, it really all comes down to the proximity of AI text to other AI text, and how long the text block is, that sort of thing. Other questions? Alright. Well, thank you so much. I think we're pretty much at time. Come see us at the booth if you have more questions.
