Managing AI Detection at Scale: South Piedmont Community College Leverages GPTZero and K16 Solutions to Obtain a Full View of AI Usage in Student Submissions


"With the rapid evolution of AI tools disrupting higher education, many institutions are left wondering how to implement standards and guidelines for students on its usage. South Piedmont Community College (SPCC) was no different–they were seeking bold and innovative solutions to help guide them through this new-found challenge. 
By working with K16 Solutions, an EdTech data company that harnesses GPTZero's AI detection software, SPCC has been able to scale AI detection across its entire Canvas LMS. By combing through thousands of student submissions at once, from quizzes and online discussion posts to essays and more, SPCC was able to identify AI-generated content and even zero in on the specific submission and text flagged as potentially AI-generated.
With this holistic view now in place, SPCC began rolling out AI detection capabilities to its faculty so they could view AI-generated content in their classrooms and develop their own academic standards for student usage of AI. 
In this session, you can expect to learn more about how SPCC:
Scaled GPTZero's AI detection across thousands of student submissions at once
Established reporting to view potentially AI-generated content detected by GPTZero
Laid the groundwork for developing standards and guidelines around AI usage
Worked with faculty and administrators to implement this solution directly in their LMS
"

Video Transcript
Welcome to the session, and thank you for showing up. Good afternoon. We have, I think, a really interesting topic to cover today, one that some people are billing as maybe one of the biggest problems in education. I don't think it has to be, and I think that's going to be one of the ultimate takeaways from today: why this does not have to be a problem, and how you can embrace what is happening in education with AI.

So, a couple of quick introductions before we start, and these two will introduce themselves in far more detail when they come up. We've got Edward Tian from GPTZero, who is a wonderful partner of K16 Solutions; we've really brought our technologies together to help solve this problem. We saw, I think, a very simple answer to this collectively, and it's been a fun ride with Edward. And Catalina from South Piedmont Community College, who is really the star of the show.

She is going to take you on the journey that her institution went through to solve this problem, really understand what was happening in her educational ecosystem, and figure out where they go from here. I think that will be one of the major takeaways. If you want to hit the slide. Alright. So, the outcomes from today, the things we hope everyone in here will walk away with.

One, you'll be able to gauge the use of AI chatbots in your institution based on the experience that Catalina will share. Two, plan a strategy to tackle and embrace AI tools, with embrace being the key word, right? This is not going anywhere; it's not going to go away. My father-in-law thought the internet was a fad twenty years ago.

It's not. He probably still thinks it is. Okay, next slide. Three, identify how this AI detector can support the healthy use of AI in assignments and submissions.

So those are the outcomes we're looking for from today. Okay? Next slide. Alright, what we'll cover: the influx of AI-generated assignments at SPCC in the spring of twenty twenty-three. Obviously, they saw an explosion, like I'm sure you all did.

How did they respond? The efforts to identify and document that plagiarism in assignments, the test of the Scaffold AI Detection technology, so they tested this before really embracing it, and then how you can leverage a constructive use of AI in higher education. With that said, I'm going to give it to the star, Catalina. Thank you. You got it.

I'll get out of your way. Thank you. Thank you, everyone, for being here for this presentation. I don't think I need the mic either, but thank you for being here. I'm the director of eLearning at South Piedmont Community College.

I've been involved in eLearning and digital education since the beginning of it, truly, since the beginning of Blackboard, CourseInfo, back when it was CourseInfo in the year two thousand. So I've been here for a while, and I have had the opportunity to see a lot of things come and go.

And yes, this is one that is here to stay, so we're ready to embrace it. Let's see. Yes, please. So, how it all started.

We've coexisted with AI for a while, right? AI is an overused word lately, for everything: it's AI, AI, and AI. We've been using AI forever, and I'm sure you all have been using Google, Siri, Alexa. I love my Siri: Siri, call; Siri, text.

At least I do. The other one that I love very much is Grammarly. Grammarly corrects me all the time, although we have serious differences: she doesn't like my passive voice, and I do like my passive voice, especially for reports.

Right? All that to say, there is always a human brain and being behind it that needs to analyze what's going on. Now, the explosion started on November thirtieth, twenty twenty-two, when OpenAI unleashed the chatbot, right? And it is a chatbot: the Chat Generative Pre-trained Transformer, ChatGPT.

That's when it all started to get fuzzy. Why? Because it totally exploded, and the students used it right away; they were the first users. Anyway, one hundred million monthly users in two months. Sorry, I want to come back here.

Wait, no. I just want to get to TikTok, to give you a reference for how quickly this application grew: it took TikTok nine months to get to one hundred million users.

And Instagram, two and a half years. So that's how rapidly it grew. I want to ask everyone, and I wanted to ask another question too. One, how many of you are instructors or teachers? Good.

How many of you are LMS administrators? Whoa. And how many of you are administrators, education administrators? This is for all of you and for all of us here, because I'm in that boat too. So, how many of you opened a ChatGPT account? Yay. How many of you opened it before January twenty twenty-three?

I did too. So we're part of that one hundred million, right? Anyway, all to say it totally exploded. Next, please. So we can see how we started to struggle, because the students were the first ones to find out: hey, there is something really good here. If I'm late on my assignment, I don't need to spend four days working on my essay.

I just type it in, and it spits it out, in seconds, literally. Who cares? Copy, paste. It's nice and pretty.

And that's the thing with ChatGPT, or chatbots in general: they're programmed, and correct me here, I am with the expert, they're programmed to use very fancy and organic words. Kind of like what happens when you're writing an email or a text and it comes up with options; perhaps it's not the option you want, but it's there, and it's nice and pretty, very eloquent language. But is it always true? They call them hallucinations, right? They're errors; they're machines. Anyway, we had one hundred and three disciplinary letters, and this is at South Piedmont Community College. Our FTE is about two thousand students; it's not that big. So that poor dean, we're going to see what happens.

She was busy at the end of spring twenty twenty-three. Next, please. Oh my god. What would we normally see? Help me out, ten? Maybe five? So we're going to hear a video from the dean now, and I would need to do something here.

We're going to actually hear the testimony from the dean who had to write all those letters. There might be some foul language in this one. My area offers the majority of the general education courses at the college, so there are a large number of courses with written assignments. In my role, I determine sanctions for academic integrity violations that occur in the School of Arts and Sciences.

And I have been very busy in the last three months. In April, an AI detector became available through Turnitin, and instructors began to monitor student submissions for the use of AI. Within six weeks, we identified over a hundred cases of students using AI to complete assignments in Canvas in ways that had not been approved nor expressly permitted by instructors. It's astonishing how quickly AI became a tool that students rely on to complete coursework once ChatGPT became widely available. As you may know, Turnitin is able to detect plagiarism for work submitted through the assignment feature. However, written essays in the quiz feature or discussion forum responses are not monitored by Turnitin.

So I suspect that AI may have been used in those kinds of assignments as well. What we found was likely just the tip of the iceberg. It's sobering, especially given that AI will continue to evolve and detectors will need to evolve at the same pace, and some students will continue to rely on this technology to complete coursework even if it results in academic integrity charges. There's also growing concern that current AI detectors may be unreliable and generate false positives at a high rate.

I'm not sure what the answer is to this, but it's necessary that we find a way to reliably detect AI, explore how to teach students to use it responsibly, and find the means to educate our faculty on when and how AI might be used responsibly in the classroom. At this moment, this is one of the most pressing challenges that we are seeing in higher education. You all want to watch it again? It's worth it. Oh, no. Thank you.

Yep. Next one. Thank you. So how did SPCC respond? At the macro level, South Piedmont is embracing the use of generative AI chatbots. It's been really hard, and as educators, we are still working on this.

And we're by no means experts, but we're trying to get ahead of the game and figure it out. Our mission and goal is not to ban it entirely; it is to find a use of AI that will really boost capacities, the human capacities: the students' capacities, the faculty's capacities, the instructional designers' capacities, and to spark creativity, not just to use it as a copy-paste, do-my-work function. That's not the intention. With that in mind, next, please. We use Simple Syllabus, by the way; if Simple Syllabus is here, it is amazing. What we could do before the summer semester started was add these two disclaimers, in which we are saying that cheating or plagiarism, copying and pasting and using material as your own, is not accepted, just as it never has been.

So if you're going to use it, you need to, one, be authorized by the instructor, and then at least say it, right? Let them know. And also, as instructional designers, when we use it, we let them know: we're using this, but it was generated by an AI; please evaluate whether it's going to work for you or not.

So, next, please. Faculty and administrators were trapped, literally, and this is at the end of the spring semester, totally trapped in trying to identify, okay, is this AI? Because, yes, we have Turnitin, fine; it runs automatically, and we were fine with that. But it doesn't really tell you that it's there. You have to dig into the assignment.

The assignment is not flagged like other assignments; you just have to dig into it, and then you find out that, yes, it is. So it fell mainly to faculty and deans, the instructors especially, to notice what was suspicious: you start noticing it in every sentence, every comma.

Very short sentences, very articulate. You notice there is a sentence that reads very much like what the instructor asked in the discussion board or the assignment, and then something super eloquent, full of commas, or short sentences. That's when these instructors flagged it. They were asked to go to GPTZero, copy and paste, evaluate it there, do more research on their own.

So it was very time consuming, not only for the instructor, but imagine the dean: a hundred and three times she had to go, per student, find the pattern in other assignments that the student may have submitted through the LMS. That was very time consuming. It has been very time consuming. So, yep.

Use of external resources, ChatGPT and others. Next, please. And then I'm going to talk about how we got to K16. We started a partnership with K16 when we migrated in twenty twenty-two. It sounds like so long ago; it was really only a year ago, and we migrated everything really quickly. We migrated over four hundred courses in a year.

We partnered with K16 to move all those courses from Moodle into Canvas, into a very specific template. It worked great, and it saved us a lot of time for our instructional designers and course builders; the course builder time that we saved with this was amazing. So we already had a very positive experience. When we were struggling with AI chatbot assignments, we touched base with Sam, who has been super instrumental, and that's how we started to test this technology. Next, please.

Then we started to test Scaffold AI Detection. When we saw it the first time, it was Misty, our instructional designer, who is a superstar and is not here with me, who said, oh my god. Sam gave us a test, a view of what this was about. We were very pleased with what we saw, because we could see all the steps that the deans and the chairs had to go through to find information that was so easily found here. So it was a yes.

When can we have it? Absolutely. The installation happened within an hour, two at most. I mean, we took longer just to say yes we can, because this is still a commitment; the entire college has to be on board. We started this summer. I was very conservative at the beginning, and we just said, okay, we're going to start with Diane, the poor, most affected dean, and she was the first one in there. Then I started to see the progression, and our summer semester started May twenty-second.

A week after, it was like, okay, we have nursing courses, which have accreditations and things like that, being affected, right? And I decided to start. Okay.

I talked to the deans: okay, this is going on, we're testing this. I called them and gave them a demo.

I gave them a little training; it's very intuitive, and I'll explain in a minute. That's how we started, with the deans. Right now we have only a few courses, and we're going to talk about this. So, Scaffold AI Detection has two components.

One is the Scaffold dashboard, which is an environment very similar to Canvas; the other is the component inside the course, the LTI integration within the course. One of the beauties for us, especially for testing, is that we could select which courses to turn it on in, because it was a test. We didn't really launch it in full, since the students had not been notified that they would be evaluated with this AI detection. But yes, some of the nursing instructors said, I want it and I want it now, so a couple of nursing courses got the LTI right there.
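For a sense of what that selective, per-course enablement can look like mechanically, here is a minimal sketch using the public Canvas REST API to install an external (LTI) tool into individual courses. The domain, token, credentials, launch URL, and course IDs are placeholders, and the actual Scaffold installation is handled by K16 rather than by a script like this.

```python
# Hypothetical sketch: enabling an LTI tool in specific Canvas courses via the
# Canvas REST API, to illustrate a selective, per-course rollout. All values
# below are placeholders; this is not the real Scaffold installation process.
import requests

CANVAS_BASE = "https://YOUR_INSTITUTION.instructure.com"   # placeholder domain
HEADERS = {"Authorization": "Bearer CANVAS_ADMIN_API_TOKEN"}  # placeholder token

def install_lti_in_course(course_id: int) -> dict:
    """Install an external (LTI 1.1-style) tool into one specific course."""
    payload = {
        "name": "AI Detection (pilot)",              # label shown in course navigation
        "consumer_key": "EXAMPLE_KEY",               # placeholder credentials
        "shared_secret": "EXAMPLE_SECRET",
        "url": "https://example-tool.test/launch",   # placeholder launch URL
        "privacy_level": "public",
    }
    resp = requests.post(
        f"{CANVAS_BASE}/api/v1/courses/{course_id}/external_tools",
        headers=HEADERS,
        data=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: enable the pilot only in the courses whose instructors asked for it.
for course_id in [10123, 10456]:   # placeholder course IDs
    print(install_lti_in_course(course_id)["id"])
```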

So that's where we are, and I think next we're going to come to, is it the demo? Yeah. Next. And then, get ready: the fun part.

Let me get my glasses. Yeah. That's great. I need my glasses.

Don't worry, no FERPA issues. We de-identified the students, and pseudonyms were created there; with the magic of K16, this happened.

So, no worries: these are not real students, though they are real courses, and that activity really happened. We wanted to share the real thing with you so you have a real sense of what's going on. They could be real names, but they are not real students; they were made up.

Wonderful. It looks very much like Canvas. This is the Scaffold dashboard, not the in-course integration. This is the Scaffold area, and it's loading.

We were told we would have weak internet today, so we'll hope for the best. Alrighty. Am I logged in? Alrighty. So here, I want to see a little more.

I love this. Sorry about the mic. Here is what happens: with the bar on the top, you can go back as far as a year.

Is it a year? Yes. With the date picker here, that's how I go to the semester. That's when we tested the LTI integration across semesters, with full courses.

So we would know how it works. The first part of this test was for us in eLearning, ITS, and Diane, Dr. Paige, to get familiar with how this works: what kind of data do we need, and how can we make this data useful for us and for the instructors who are going to be receiving that information about their students? Instructors are not going to have the Scaffold dashboard, but the administrators will.

The first thing I did here was look at the current semester: how is that going? I went all the way back to May twenty-second, and then it changed, and I thought, nice, students are no longer using AI or cheating or plagiarizing. No, the semester ended. That's why.

That's why. So anyway, this is how it works. I can see here the top ten heaviest users of AI, the top ten perpetrators, right here: top ten students and top ten courses. I'm going to go into one of the courses, for instance.

This is still the Scaffold dashboard; this is what we see as administrators, the big picture of the entire environment. They put us in the basement with no internet, so in the meantime, I can have some water.

Yeah, as usual when we start a project like this. Okay. Once you get to the main courses where AI has been used the most, I can see exactly the usage and the users. Again, these are not real students.

But I can see pretty much what happened in the environment. I can see exactly where it is, what assignment, whether it was a discussion, which we didn't have coverage for internally. So for us, having this is like, okay, this is great. Now we know it's been happening in discussions.

We know this particular discussion, and we know the person. And this is the part that, to me, is sad. Again, this Sylvia doesn't exist; maybe a Sylvia exists somewhere, but this one does not and was not in that class. It was a real student with a real name, though.

Look at that student: she used AI in every discussion in every module. And what I think is so sad for the instructor is that you're then grading a chatbot. Don't you feel like, I wasted my time here? But anyway, that's how it went.

So you can look at the pattern by student or by class. We click here, and I can see the details, and this is what the instructor will see; when I click through, this is exactly the same view the instructors will have. And then I can see, again, I love this one.

You can see the classifications, AI only, mixed, and so on, and you can sort them the way you want, right? Then I can go back here. I'm going to sort by classification. I like the naming; can I say that it's new? I like the new naming because it makes sense.

AI only, and it gives you ninety-five percent. Look what happens when I click here: everything highlighted is what the student copied and pasted. Again, I would recommend, if I'm the instructor, that you have a dialogue with the student. You can't take anything for granted, really.
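As an illustration of the kind of sorting and classification view described above, the snippet below models a few report rows and orders them by classification and AI probability, strongest signals first. The field names and labels are assumptions made for the sketch, not Scaffold's or GPTZero's actual schema.

```python
# Illustrative only: a tiny model of the kind of report data shown in the demo,
# sorted by classification and AI probability. Labels and fields are assumed.
from dataclasses import dataclass

@dataclass
class SubmissionResult:
    student: str
    course: str
    item: str            # assignment, quiz question, or discussion post
    classification: str  # e.g. "AI only", "Mixed", "Human"
    ai_probability: float

results = [
    SubmissionResult("Student A", "NUR-101", "Module 2 discussion", "AI only", 0.95),
    SubmissionResult("Student B", "MUS-110", "Essay 1", "Human", 0.03),
    SubmissionResult("Student C", "NUR-101", "Quiz 3, question 4", "Mixed", 0.62),
]

# Sort so the strongest "AI only" signals appear first, as in the demo view.
order = {"AI only": 0, "Mixed": 1, "Human": 2}
for r in sorted(results, key=lambda r: (order[r.classification], -r.ai_probability)):
    print(f"{r.classification:8s} {r.ai_probability:>5.0%}  {r.student}  {r.course}  {r.item}")
```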

Right? You are not going to write that student up right away. You're going to keep track of what the student has done, see whether it makes sense, look at the student's prior assignments, and see whether it's consistent with the writing and the style. But now you're alerted, right? And we didn't use this during the spring semester.

But during the spring semester, as Diane, Dr. Paige, was saying, she would call the student in, and the student would say, well, yeah, I was late on the assignment. The assignment was due at eleven fifty-five at night; of course, the submission was generated with a chatbot at ten PM. And the student openly and flatly said, yeah, I used it.

So, well, there wasn't much to discuss, but to me this is all about transparency. We're going to talk about a healthy use of AI in a minute, but this is where it starts: healthy use, honesty, and transparency. From here, I would just go back to the course. I'm going to go back to the AI detection, actually.

While it goes back to the AI detection, I can explain the rest, because what it has is all the students. So what I would do: the name of the student was Williams, what was it, Sylvia Williams. I would go to students.

Can you click on the students, please? Here. Sorry. View all students. This part is for deans or administrators who are looking for data, and an instructor could also ask for this data; again, we haven't tested it live with all the instructors.

But I'm sure we will get there. Then you have to unselect all the students. Sorry, technology, right? Yeah. There are many Sylvias in this course.

So you go to Sylvia Williams, and you notice the pattern for Sylvia. If she had used AI in different courses, they would all be listed there, whether it was a music class, this class, or the nursing class. All the courses for Sylvia would be listed, and you can sort by the user or by the class. So the first thing the nursing associate dean did was precisely that: go to all the nursing courses, which have accreditations, and see exactly the use of AI there.

So anyway, that is a quick overview of Scaffold AI Detection. I'm sorry it took us a while to get here, but that's fine; repetition is the key to learning the material, right? Yes. We're all educators, so we know that.

So we were here, and we continued the demonstration. Now, this is what I wanted to show: this is how it looks in the course environment. We see the AI detection right there in the course navigation.

Initially, when we deploy it, which is going to be this fall semester, it's going to sit in the hidden part of the navigation. I'm doing a tutorial as soon as I get back, and then they need to bring it up; it's each team's decision how they're going to use it and how they're going to deploy it. It's going to be deployed in every course, but then it's on the instructors or the deans to move it back to where it's visible so they can use it, and the screens are just the same as we saw before.
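As a purely illustrative aside, course navigation visibility is something Canvas exposes through its Tabs API, so a rollout like the one described (deployed everywhere but hidden, then made visible per course) could be scripted roughly as below. The domain, token, course ID, and tab ID are placeholders; confirm the real tab ID by listing the course tabs first.

```python
# Hypothetical sketch: un-hiding a course navigation item with the Canvas Tabs
# API. External-tool tabs in Canvas typically use IDs like
# "context_external_tool_<id>", but verify by listing the tabs; all values are
# placeholders and this is not Scaffold's deployment tooling.
import requests

CANVAS_BASE = "https://YOUR_INSTITUTION.instructure.com"   # placeholder domain
HEADERS = {"Authorization": "Bearer CANVAS_API_TOKEN"}     # placeholder token

def list_tabs(course_id: int) -> list[dict]:
    resp = requests.get(f"{CANVAS_BASE}/api/v1/courses/{course_id}/tabs",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

def show_tab(course_id: int, tab_id: str) -> dict:
    resp = requests.put(f"{CANVAS_BASE}/api/v1/courses/{course_id}/tabs/{tab_id}",
                        headers=HEADERS, data={"hidden": False}, timeout=30)
    resp.raise_for_status()
    return resp.json()

course_id = 10123                               # placeholder course ID
for tab in list_tabs(course_id):
    print(tab["id"], tab.get("hidden", False))  # find the AI detection tool's tab
# show_tab(course_id, "context_external_tool_987")  # then un-hide it (placeholder ID)
```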

So that's how it looks in the course environment. Next, please. Findings from the Scaffold AI Detection: you could see that we could evaluate the data and the use of a chatbot.

Because now we say AI can be everything, and AI is in things we don't even have to think about. But for chatbot material specifically, you could see it from the macro to the micro, at the school level. Did you see the curves, how it's being used? That for us was very meaningful. AI used as a resource to build upon? Honestly, we didn't see that, unfortunately.

What we saw was a shortcut to complete assignments super quickly; it was just a copy-paste thing. And again, that was the result of our summer testing, which is a bit sad, or very sad. Let me get my cheat sheet. Sorry, here.

So, we have this. This is an educator; I'm going to get to her here. Her name is Christina Weitman.

She is a writer for Wired. I've been a Wired fan; the magazine now has a podcast. Can anyone tell me if you've heard Have a Nice Future, the podcast? Yes. One. The only person.

Anyway, it's a podcast, and that's how I get a lot of this information that I enjoy. She is a contributing writer for Wired, and when I got into her article, you can sympathize with this. This is perhaps the most important line: I'll need to get smarter about how to ethically incorporate AI into my teaching.

As I was saying, the baseline, what we need to have as a baseline, is transparency. If you are going to use it, how are you going to use it? Faculty are responsible for providing the guidelines and clarity on expectations: what are the students supposed to be doing? This is a summary of what you saw that we have in our syllabus; it's pretty much the same. But again, what I'm trying to say is that we want our students to really build upon it, not just copy and paste.

The humans, the students, should be the center of our use of a chatbot, not just the task that you want them to accomplish: okay, how can I do this better? I haven't taught in the last few semesters, but if I did, my first assignment, no matter what the class is, would be: show me how you can smartly use a chatbot to create an assignment that you can build upon. That's what's important. So we're not banning it; we're just trying to figure out a smart use of chatbots and AI for the college. Next.

Oh, okay. So I have two slides. I'm going to talk about the why and then the technology. To start with the why, I want to touch a bit on why we started, why we are working with educators, and why we are here today with all of you. A little bit of story: I graduated with a degree in computer science and journalism. On the computer science side, for the last two years I've been working with the Princeton Natural Language Processing Lab on AI research, and for the last year, with my thesis advisor, Karthik Narasimhan, who is himself actually one of the original authors of the GPT paper that Catalina mentioned, from when he was at OpenAI, we've been looking into AI detection.

That's on the research side. On the journalism side, I am a writer; I've been writing for my student paper for the last four years, and I took some time off to write. I love writing. The most transformational class I've ever taken was actually a writing class with Professor John McPhee, the legendary New Yorker writer who has taught the same writing class for the last forty-five years. He's seen it all.

I just remember being enamored: sitting next to him, writing an essay, having him give critical feedback and edits and show me how to write better, because, coming in as the only computer science student in that class, I was not the best writer, as you can tell. I think there was a loss in what we felt in December of last year and in January. A loss for educators: we've heard a lot of educators say, hey, it feels really cheap giving feedback to something that was created by an AI; I'm spending so much effort on this. And on the students' side, it's very similar, as Catalina really touched upon.

We learn from getting feedback. We learn from doing things differently, incorporating new things, whether it's new technologies, writing things differently, and adding the human element. I definitely learned to write beautifully in that class, and there's a loss of that. Unfortunately, because of this divide between student and teacher that AI has generated, a lot of teachers have told us they're retiring because they don't want to deal with this divide. And John retired this year too. He's ninety-two; it's about time.

He stopped teaching that writing class after the year I took it. I think our motivation here is not to come in as a plagiarism tool and embed that into the AI space, but instead to come in with the perspective that AI is here and we always want to be embracing these new technologies: how can we transparently bridge this gap between teacher and student? That's reflected in the work we're doing with K16 and everything we've been doing since January of this year. We built out new features where students can write transparently with AI, where they put in a text, add the human elements, and share it as a report with teachers.

We've really worked with K16 to make this AI detection feature very holistic: get an understanding of the big picture at the institutional level as an administrator, what's happening, before going to the teacher level and then the student level, and have safeguards for students to prove, through some of these features, that I wrote this as a human, or to disclose transparently that I wrote it with AI. So that's the initial motivation. Oh, sorry, there are some others.

Yeah. Initially, when we launched in January, the impetus was to take the technology I was working on and just put it out there. We had over one point two million registered accounts, the majority of which were teachers and educators. I guess the last thing I would mention is that a lot has changed since January. It started as me and one other researcher, who was doing his PhD in AI detection as well.

Since then, we've grown into probably the first research team dedicated just to this space, because everybody else is putting out these generative AI technologies and detection is kind of an afterthought. We managed to build a team; we're twelve now, and we're working with K16. We have folks in the room, and we've got folks from Princeton's NLP Lab, Caltech, the University of Toronto, and the Mila AI Institute in Montreal, where some of the godfathers of AI have worked. It's definitely been a journey to take it to this phase. Yep.

There you go. That one. Great. I'll say a few things on the technology.

So, the big takeaway, and a little bit of history, though it's really just the last few months we've been living through, so I can't really call it history: how the technology has evolved. AI detection for GPTZero is completely different in January versus in July. In January, the AI detection space looked like a lot of academic papers using statistical methods for detection, analyzing writing patterns, and we were part of that.

We introduced into the lexicon of detection research the ideas of burstiness and perplexity, which capture that human writing has a lot of variation, compared to machine writing, which is very consistent over time; with more and more text, we could pick up on these patterns. That's the academic side. How it has evolved since then is that you have a lot of players putting out AI detectors based on those statistical methods and just launching them. No one was really doing their own research in this space; everyone was leveraging what academia was doing on the statistical-methods side, plug and play.
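As a rough illustration of those two statistical signals, the sketch below scores a text's perplexity and burstiness using an off-the-shelf GPT-2 model from the transformers library as a stand-in scorer. This is a minimal sketch of the idea, not GPTZero's production model or thresholds; very low, very uniform perplexity is the pattern associated with machine writing.

```python
# Minimal sketch of perplexity (how "surprised" a language model is by the text)
# and burstiness (how much that surprise varies sentence to sentence). GPT-2 is
# only a stand-in scorer here; this is not GPTZero's model or calibration.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss          # mean negative log-likelihood
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Std. deviation of per-sentence perplexity: human writing tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    scores = [perplexity(s) for s in sentences]
    if len(scores) < 2:
        return 0.0
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / (len(scores) - 1))

sample = "Paste a student submission here. It should contain several sentences."
print(f"perplexity={perplexity(sample):.1f}  burstiness={burstiness(sample):.1f}")
```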

How that has evolved in the last six months for us, as a research team, is that we still have that burstiness and statistical-methods layer as a first layer of analysis, but that's not enough, and we've added a lot of different components. We worked with the K16 team to add an education module: we have data from K-12, university, and English-as-a-foreign-language writing to build a module specific to that. And the main approach and shift for us is now a deep learning approach, where our team, in the last three months or so, has basically adopted an in-house language model with an architecture very similar to ChatGPT. We take a piece of text, and it's almost like asking the machine how likely it is to generate that same piece of text, really by probability.

And if this in-house language model is very likely to generate the same piece of text, the text is very likely to be AI generated. This is a fairly novel architecture that no one else in the space is doing, and it has allowed us to improve as an AI detection model rather than staying on the statistical layer, where people are taking the papers that were out at the end of twenty twenty-two and plugging them in. When we talk about accuracy, I think two things are important. First, how do we measure it? When we see something like ninety-nine point nine percent accuracy in detection, that number is meaningless, because everyone can achieve that when you're measuring on a very simple dataset. For us, there are two key takeaways. One is that we started by collecting really, really challenging education data.

We've collected four thousand articles of challenging student writing, things that have really stumped detection, including GPT-4 writing, and then we've taken our deep learning approach to improve detection over the last three months to achieve the accuracy we're talking about here. The second key takeaway is that we have the capability of changing the prediction threshold: to be conservative in a way that protects human writing from being flagged, even if that means being a little less aggressive about catching AI as AI, really giving people the ability to tune that threshold for the education use case, which is also something different from any other player in this space. Taking those two factors, on this dataset over the last three months with the new deep learning approach, the existing detectors were initially stumped; they were not achieving their purported accuracies.
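The threshold idea can be illustrated with a few made-up scores: raising the cutoff for calling a submission AI lowers the rate at which human writing is flagged, at the cost of catching less AI. The numbers below are invented and are not GPTZero's evaluation data.

```python
# Illustration of the threshold trade-off: on a labeled evaluation set, a higher
# cutoff flags fewer humans but also catches less AI. Scores and labels are made up.
samples = [  # (ai_probability reported by a detector, true label)
    (0.97, "ai"), (0.91, "ai"), (0.88, "ai"), (0.74, "ai"), (0.55, "ai"),
    (0.62, "human"), (0.42, "human"), (0.30, "human"), (0.12, "human"), (0.03, "human"),
]

def rates(threshold: float) -> tuple[float, float]:
    """Return (true-positive rate on AI text, false-positive rate on human text)."""
    ai = [p for p, label in samples if label == "ai"]
    human = [p for p, label in samples if label == "human"]
    tpr = sum(p >= threshold for p in ai) / len(ai)
    fpr = sum(p >= threshold for p in human) / len(human)
    return tpr, fpr

for t in (0.5, 0.7, 0.9):
    tpr, fpr = rates(t)
    print(f"threshold={t:.1f}  catches {tpr:.0%} of AI  flags {fpr:.0%} of humans")
```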

We managed to show this improvement capability, down to less than one percent in terms of false negatives; and the way we measure accuracy, with the threshold, is about ninety percent detecting AI as AI, and less than one percent falsely flagging human writing as AI. A really important takeaway for us is being a research-driven team native to this space that constantly improves detection as it evolves, because you can't take the papers from the end of twenty twenty-two and assume they will keep working over the next six months. Partnerships: Steve's going to take this one, but the one thing I want to mention is that we were doing this en masse, across a whole host of content.

And with Canvas, what K16 has been able to do, going into quizzes, not just assignments, really reading the whole platform, has just been incredible, and it just made sense for us to work in this partnership. Thanks, Edward. So we are unfortunately out of time; I'm going to wrap us up. A couple of quick key points, and then I'll share that we are going to be back at the booth.

There are going to be plenty of opportunities for questions as well. But the last piece, to Edward's point: his technology is incredible, as is our ability to grab everything. Everything that you could possibly imagine you would want to run through this process will be done without you having to do anything. Alright? We're going to grab everything: every discussion thread, every quiz, every test, every essay, every paper; it doesn't really matter.
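As a back-of-the-envelope sketch of what "grabbing everything" involves, the snippet below walks a single Canvas course's text content, assignment submissions and discussion entries, through the public Canvas REST API and hands each piece of text to a placeholder detector function. The real K16/GPTZero integration does this at scale on their side; the domain, token, and the detector call here are stand-ins.

```python
# A minimal sketch, not the K16 pipeline: pull text-bearing content from one
# Canvas course and score each piece with some detector. All credentials and the
# detector itself are placeholders.
import requests

CANVAS_BASE = "https://YOUR_INSTITUTION.instructure.com"   # placeholder domain
HEADERS = {"Authorization": "Bearer CANVAS_API_TOKEN"}     # placeholder token

def get_json(path: str, **params) -> list[dict]:
    resp = requests.get(f"{CANVAS_BASE}/api/v1{path}", headers=HEADERS,
                        params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

def detect_ai(text: str) -> float:
    """Stub: replace with a call to the AI-detection service of your choice."""
    return 0.0

def scan_course(course_id: int) -> None:
    # Text-entry assignment submissions.
    for assignment in get_json(f"/courses/{course_id}/assignments", per_page=100):
        subs = get_json(f"/courses/{course_id}/assignments/{assignment['id']}/submissions",
                        per_page=100)
        for sub in subs:
            if sub.get("body"):
                print(assignment["name"], sub["user_id"], detect_ai(sub["body"]))
    # Discussion posts.
    for topic in get_json(f"/courses/{course_id}/discussion_topics", per_page=100):
        for entry in get_json(f"/courses/{course_id}/discussion_topics/{topic['id']}/entries",
                              per_page=100):
            if entry.get("message"):
                print(topic["title"], entry["user_id"], detect_ai(entry["message"]))
```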

Alright, we're going to get it all. With that said, again, we'll be at the booth for follow-up questions. We can do personalized demos, as much as you would like, when you get back to your institutions. If you've got other folks here who want to see this, please reach out.

We're happy to do that. We've got a webinar coming up through The Chronicle of Higher Education on September fourteenth, so feel free to register for that as well. Really appreciate everybody's time.

Again, if there are any questions, please join us at the booth; we'll be right outside as well. Go ahead, real quick. Actually, why don't we just do questions until somebody kicks us out? Yep.

It's the learning behavior here that interests me. We're addressing right now what happens after the fact, but with my background as an instructional designer, I think if we change that behavior away from a Q&A modality, where the prompt is a question and the student gives me a written response, and instead ask the student to build a matrix, that's more challenging for an AI to generate than a plain Q&A that plays to chatbot features. Or perhaps in a discussion forum, instead of doing a Q&A, I ask for a recorded response, a spoken answer. That's more challenging to do. So you think in new boxes rather than policing the process after the AI has done the work.

Because otherwise you're just trying to stop the tide from coming in, and that's harder to do. If you rethink those little boxes, you take the problem away from where it shouldn't be. Absolutely. If I may, yeah.

Let me read the conclusion, which is exactly what you're saying: now it's on us as educators to come up with the solutions you're talking about. How are we going to make it so it's not a copy-paste? We have to think outside the standards of how we've designed assignments. Use problem-based learning.

Use other strategies so our students are not just going to copy and paste from a chatbot. Yeah. The K16 booth is front and center as you walk in, right past the food; you cannot miss it. We will be there taking questions.

Doing demos, whatever you need. Yeah, we're heading over there now, so feel free to join us over there. Thank you.
