We Have a New LMS, Now What? Discovering and Empowering User-Focused Operations
Join us as we discuss how UCLA developed an operating model focusing on product management and operational efficiency. We will also explore how your institution can adopt user experience strategies to create transparent governance that informs the continuous evolution of your LMS and the faculty, student, and TA experience.
Hello, everyone. Welcome. I think we're gonna get started. So thank you for joining us. I know there are a lot of sessions during this time. We hope this is really informative.
I think the slides will be available afterwards, but the way that we've constructed this presentation is to be a framework, with templates and things that you can take back to your home institutions to either modify or apply the way you see fit. So the title today is We Have a New LMS, Now What? Discovering and Empowering User-Focused Operations: UCLA's LMS Implementation Journey. So first, we're gonna start with some introductions. I will start with myself. Hi, I'm Alana Internato.
I am the director of our Bruin Learn Center of Excellence. The Center of Excellence is something at UCLA that we consider both an IT service provider, so central within the ITS organization, but also a strategic enabler working very closely with our teaching and learning constituents, thinking about best practices, thinking about how we get involved in the community, not only within UCLA, but the UC system. I joined in the fall of twenty twenty two, right when this project was shifting from implementation to sustained state. So I'm gonna touch a lot on that sustained operations, and then my colleagues here are gonna talk a little bit about the implementation. Hi, everyone.
I'm Carly Hillman. I was part of the UCLA implementation team as the program director. In my day to day, I'm a senior manager with Deloitte as part of our higher education practice. I've been with Deloitte for over ten years, working with higher education and other public sector clients. I also lead our LMS capability as part of our higher education practice.
So, very much embedded in this space. And like Alana said, I was involved with the UCLA program as the program director. So I'm gonna be speaking a little bit more about that, to set some context for our journey here today. And lastly, I am Sally Maholski. I served on the Deloitte side of the project as well.
I was the LMS subject matter expert on the UCLA implementation project. I worked cross-functionally across all of the work streams that were represented during the implementation. So that includes organizational change management, academic services, as well as the IT space for the project. I have over ten years of experience in LMS and academic technology. I've worked across the K-20 space.
And I'm really excited to bring some of that expertise to the table to support both Deloitte and UCLA. Alright. So here's our agenda. I'm gonna calibrate you for a moment here. We're gonna walk through four areas.
And I'm gonna try something. I've heard InstructureCon likes a theme. So I'm gonna introduce this agenda in a little bit of a theme-y way. So if you like it, laugh. If not, I'll be embarrassed for the rest of the day.
It's fine. So our voyage will take us across the implementation terrain of UCLA's new LMS. We'll learn about the strategies used, the storms we weathered, and the triumphs that we accomplished here. We'll then set sail on the sea of transparency. We'll dive deep into how engagement with stakeholders can inform and guide the evolution journey of our learning management system, and it is just that: I think everyone here can agree it's in constant change and evolution. Our third stop will bring us to the shores of operational excellence. Here, we will discover how we constructed an operating model that intertwines product management and user experience. And of course, what's a journey without a chance to explore?
We'll drop anchor and we'll have a Q and A session for you at the end of this. Thank you.
Some of the key takeaways here: considerations for developing an operating model focused on product management and operational efficiency, and then some best practices for establishing transparent governance. I know governance can sometimes be a bad word; it has bad connotations. We're gonna talk a little bit about what we're doing at UCLA that informs continuous evolution of your learning management system. So now I'm gonna hand it over to Carly. Alright.
So we're just gonna start with a little bit of context around what the LMS implementation at UCLA was, what the scope of it was, what the structure of the team was, and how we approached it. And then we'll get into a little bit more on operations, like Alana beautifully outlined. So just to start us off, the LMS implementation took place primarily over the course of one academic year. There are a number of metrics and key things to highlight here on the right hand side of the slide, or left hand side, I guess, if you're looking at it. But we did have two pilots.
We did have go-lives sequentially across the different academic quarters. And this did cover the scope of the entire UCLA campus, moving off of Moodle and onto Canvas. Just to provide a little bit more context, this wasn't just about moving off of Moodle, which is a very custom, homegrown type of platform, onto a SaaS platform, which we all know as Canvas. It was also about moving from a distributed service model onto a more campus-unified approach. And so we really wanted to focus, when we were building this implementation team, on bringing in subject matter experts and other campus leaders to be part of the program team.
So what you see on the other portion of the slide here are our key work streams, which really enabled cross collaboration across and throughout the implementation. Starting with academic on the bottom left there, we brought in a team of teaching and learning specialists, instructional designers, and others throughout the campus who had had previous roles or had those skill sets specifically in the academic-focused teaching and learning space, to support the foundational components of the LMS. Then we had app development. This was our IT experts focusing on the integrations, making sure we had all the back end systems configured and, you know, supporting all the technical aspects that we all know need to happen as part of the configuration and implementation.
Then we also had a hypercare, or post-go-live support, team thinking about those aspects and where users would go for support, which ties into a lot of the work that Alana came in and led. We had a team that started with the implementation team to think about and develop that initial structure for where users would go for their support and have access to many, many resources. If I keep going around the circle here, and it's not really a circle, OCM focused on all of our engagement with faculty and other instructional staff, as well as our deans, vice provosts, a lot of leadership across campus, to have that tailored approach. So we took a very white glove approach when it came to communicating and engaging with all of our key stakeholders, and that was what this work stream was truly focused on. And then we had a separate work stream on data and reporting, building out robust dashboards that helped with that engagement and tracking across the implementation. We built some dashboards specifically in Tableau and used those for our weekly reporting, which were really helpful for making sure we were tracking against our milestones.
And then if I skip ahead and think about some of the lessons learned as we implemented the LMS and then transitioned to our operational phases, there were really two key buckets that we looked at. One, around people-focused communication and all of the engagement that we had with our stakeholders. And then also anticipating timeline risks. For that first bucket, this kind of follows along with the work streams that I mentioned on the previous slide.
So it was really making sure we had academic, technical, and administrative involvement throughout the entire implementation, and it not just being seen as a technical implementation, and making sure we were involving all of those aspects across the program. We also really tried to focus on, and I think you can always continue to do this more, the fact that it's not just a technical implementation. We were trying to make changes from a teaching and learning perspective and really reinforce the transformation, along with the support that I mentioned from moving from that more decentralized model to the unified support approach. It's also always important to set those expectations for our end users around when those key points are happening across the program, like content migration, how to use the platform, key dates, and other details, to make sure everyone's aware of what's going on with the transformation.
And then again, involving university leadership was key for the success of this implementation. And then, if I put my project manager hat on here, thinking about all the timeline risks: we did follow closely with the academic calendar, always keeping that in mind and not having go-lives right around other key milestones across the academic calendar. We did have a couple of go-lives, as you saw on the previous slide, but really taking advantage of proof of concepts and MVPs, testing out LTIs, engaging focus groups, and kind of baking that into your overall plan. Also being deliberate about the end of the project and the beginning of sustained operations. We kind of had this a little bit fluid for a while, and then Alana came in and we had this clear timeline around transitioning to sustained operations, but it's super important for engaging with all of your end users.
And then the last two bullets are really around consultants and contractors coming in to support, what that transition plan will look like, and thinking about that well in advance, as well as hiring for things like your long-term sustained model and the timelines and how long that might take, and anticipating contingency if you need to have other support in the interim while you're fully staffing up for that. Sally's gonna talk a little bit more about our engagement and some of the other expectations we set. Okay. Nobody's sleeping yet. Right? Everybody's still awake.
They're still following. Okay. Good. So you've heard centralized and decentralized a lot already in our presentation. It's a really important theme in the way that we thought about the implementation of the system and also our sustained operations.
So I wanna highlight a couple of key areas that we were really thorough in thinking about, with special respect to the idea that the decentralized nature of the institution really lent itself to a lot of siloing. Right? Each academic unit was organized and managed independently at the start of this project. As part of that transition to a more centralized model, we were using the LMS as sort of a proof of concept to start to bring things together. So some of the things that we had to think about were roles and permissions, for example. Right? Canvas is very flexible about roles and permissions. You can manage them, adjust them, delegate them.
You can have them at different subaccount levels. But when you think about the level of flexibility and control that our academic units had prior to the implementation of Canvas, we wanted to help retain that sense of independence for those academic units while also maintaining a sort of standard that was created by the implementation program. I think the same thing goes for the use of LTI tools, right? That is always a hot topic as part of implementations. When we started the program, we did an inventory of all of the LTI tools that existed across the institution. There were hundreds.
There was a lot of overlap. There were a lot of tools that had the same purpose. How many polling tools does one school need? I know there are some people who are thinking about that. And so as part of this exercise, we did a massive LTI evaluation project where we really looked at what the best products for the institution are, and how we can expand their capabilities outside of just one academic unit and make them valuable for the entire enterprise. And so that was a really meaningful aspect of the project, and we feel like it brought a lot of value to the university.
SIS integrations and those dependencies as well: how do we align the subaccount structure to the organizational structure of the university? It's really easy to put a subaccount structure on paper, but when it doesn't align with your organizational structure or the way your SIS is organized, you guys know what that looks like. It's super messy, and it's not great for data. So we really had to think critically about that. To give you perspective, UCLA has six hundred and nine subaccounts. Right? It's a lot of subaccounts.
I know that's horrifying to think about, and I know there are some admins here, because they're like... And a hundred and seventeen customized SIS or AWS integrations. So there's a lot here. This is a really complex architecture. Now, if you have a simple one, that doesn't mean this isn't going to apply to you, but it's just this idea of creating cohesion between your SIS and your organizational structure, and the way that you organize Canvas around that. The last thing was, prior to the implementation of Canvas, it wasn't made very obvious within the institution which tools and capabilities were meant to exist as part of the LMS, and which of those tools should and do live in other systems.
Whether that's the student portal, the faculty portal, or other spaces within the institution. So we wanted to make a really intentional effort to clearly message what capabilities you should expect and take advantage of within the LMS, and what areas you should look elsewhere for, and create those recommendations so that people really felt confident they knew where to find the tools that they needed. On the other side, we're talking about the campus-wide impact. There were a few things that became really obvious to us that we had to focus on.
Organizational change management is key. Right? Too often these projects are treated as just IT projects, and they are rolled out by IT, and they are siloed, and they do not include the thorough change management that helps us integrate our academic audiences, our community, and the other aspects of the project to ensure that it's really successful. We needed to make sure that there was clear ownership of that organizational change management and that it was clearly aligned to the goals of the project. We also, as Carly mentioned previously, paid close attention to the academic calendar. Right? Like, do you want a go-live right after Christmas? Because nobody's working right before that, so there's not gonna be any resources.
Don't ask me why I know that. And lastly, this really gave us the opportunity to refresh our TPRM process, looking at things like security, accessibility, and privacy. Right? What's the standard that we wanna use moving forward? Hint: UCLA is very rigorous. And we wanted to make sure that all of the LTI tools, all the integrations, all the configurations of the platform met that standard. Slide.
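Stepping outside the talk for a moment: one concrete way to act on Sally's point about cohesion between the SIS and the Canvas subaccount tree is a simple reconciliation check. This is only an illustrative sketch; the department names, subaccount tree, and helper function below are made up for the example, not UCLA's actual configuration.

```python
# A toy Canvas-style subaccount tree: parent account -> child subaccounts.
# All names here are hypothetical.
subaccounts = {
    "root": ["College of Letters and Science", "School of Engineering"],
    "College of Letters and Science": ["History", "Mathematics"],
    "School of Engineering": ["Computer Science"],
}

# The academic units as the SIS sees them.
sis_departments = {"History", "Mathematics", "Computer Science", "Music"}

def unmapped_departments(sis_depts, tree):
    """Return SIS departments that have no matching Canvas subaccount."""
    all_subs = {child for children in tree.values() for child in children}
    return sorted(sis_depts - all_subs)

print(unmapped_departments(sis_departments, subaccounts))  # ['Music']
```

A report like this, run against real SIS and Canvas data, makes the "messy, not great for data" misalignments visible before they show up in course provisioning.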
Okay. Shifting to operations. So as we're thinking about the formal end of the implementation project and the transition to what we're calling sustained operations, the what-comes-next, we needed to think about some themes that would be applicable across centralized and decentralized capabilities.
Right? So we came up with these three pillars that we felt were really represented across the board, that we could use as a foundation to build out our strategy for sustained operations. So we've got our technology expertise. Right? This is everything from infrastructure to custom app dev to compliance, all the favorite things. Then analytics and data. I actually really love analytics and data.
But thinking about learning outcomes and advancement, and how we can take advantage of data to positively impact the institution. And then, you know, we're trying to connect with peer institutions. What are peers doing? How can we take advantage of some of that? Community support was another one. This takes many shapes and forms. This is your FAQs.
This is your training. This is office hours. This is troubleshooting. Right? And this is really important to consider because every academic unit had their own community support model. How do you take that and make it really effective in a centralized manner? And then lastly, campus engagement.
I truly think that one of the biggest reasons why this implementation was so successful was the really effective engagement of the community as part of the project. They were part of the solution; they were advising the project, whether on advisory committees, focus groups, student feedback, or faculty feedback. This really drove the decisions that we made. It formed our governance for the project, and it ensured that we didn't run away with decisions, that everything was thoroughly thought through. Alana's gonna talk a lot more about governance and decision making, and I am gonna turn this over to her.
Alright. So, governance. I mentioned at the top of this presentation that sometimes governance can have a reputation. And what I wanna highlight here is that governance is supposed to speed up decision making. It is not supposed to bottleneck it. That is the intent.
The intent is to think about resource allocation. It's to think about how decisions get made, and alignment. And I wanna take you through this visual here and talk through it a little bit from the bottom up, because I think it's important to think about how we scaffold this. So at the bottom, we have something called the Bruin Learn Support Community.
This is open to anybody on campus. It meets weekly. These are really those front-line, day-to-day people dealing with distributed IT, faculty, students, etcetera. Sally had mentioned we have six hundred and nine subaccounts; we have about a hundred and fifty people with elevated permissions within Canvas. That means about a hundred and fifty people who are supporting the community besides our team.
So these are folks that we meet with weekly, where we can talk about priorities and think about documentation. It's a little bit more like a community of practice, a little bit more operational. Then we have the Bruin Learn Working Group. The Bruin Learn Working Group is a formal governance body. There are about twenty members on it. We do have some of the distributed IT folks, but we also have the registrar.
We have accessibility folks. We have someone from student affairs. We have instructors. We have some advisors. So this is a very diverse group.
It meets monthly, and it's responsible for the direction of the Bruin Learn ecosystem. This group votes, helps us prioritize, and then, if needed, escalates to the Academic Technology Committee. The Academic Technology Committee is a group of senior leaders who are really more about approving our recommendations, but most of the work happens at the Bruin Learn Working Group and the Bruin Learn Support Community. Okay. So this is one of my favorite slides of the day.
And I hope that this helps you at your home institutions. Oftentimes with governance, everybody wants to be involved with decision making. Right? There are twenty people in the Bruin Learn Working Group, and I get probably weekly requests for people to join. It would be fifty people, sixty people, seventy people. Everyone wants to be involved with decision making, but nobody actually wants to make decisions or be held accountable for them.
And I see a lot of heads nodding. Right? So this is the way that we were thinking about approaching this, and this is something that we shared with the Bruin Learn Working Group, and it took us three months to align on. Right? The way I want you to think about this is that decisions happen and get made daily, weekly, monthly. So on the left hand side here are characteristics, or things that we think about when we're talking about making decisions. Right? Financial impact, effort, whether there's custom development, service policy, etcetera.
The first column is the Bruin Learn Center of Excellence. That's my team. Right? We make decisions on a daily basis. If we had to bring everything to governance, it would really bottleneck progress and evolution. And so certain things that wouldn't be that disruptive stay within our purview.
Then we have the Bruin Learn Working Group, where most of the decision making happens. So maybe it's a new LTI contract or vendor: if it's between fifty and a hundred and fifty thousand dollars, this group would make that recommendation and decide on it. And then there's the Academic Technology Committee. I'll let you look at this later, and you can certainly ask questions at the end about it.
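Stepping out of the talk: the decision-rights tiers just described could be sketched as a small routing function. This is a hedged illustration; the function name and the "disruptive" flag are our own shorthand, and the dollar thresholds echo the fifty-to-one-hundred-fifty-thousand example from the talk, not any formal UCLA policy.

```python
# Illustrative decision-rights routing. Tiers and thresholds are assumptions
# based on the example in the talk, not an official governance document.

def decision_owner(cost_usd, disruptive):
    """Route a request to the body that can decide it."""
    if cost_usd < 50_000 and not disruptive:
        return "Bruin Learn Center of Excellence"  # day-to-day decisions
    if cost_usd <= 150_000:
        return "Bruin Learn Working Group"         # monthly recommendation and vote
    return "Academic Technology Committee"         # senior-leader approval

print(decision_owner(10_000, disruptive=False))  # Bruin Learn Center of Excellence
print(decision_owner(90_000, disruptive=True))   # Bruin Learn Working Group
```

The point of writing it down this way is that the routing is explicit: a request never sits in limbo waiting for someone to decide who decides.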
I'll give you an example of this in practice. So Sally said, you know, how many polling tools does an institution actually need? We have a few at UCLA. One of those is iClicker, and it was a student-paid model. That means that if an instructor chose to use iClicker, they'd put it on their syllabus, and the students would have to buy a software license for their phone or mobile device, or buy a clicker. We shifted that to an institution-paid model, essentially making it free for our students. And we grappled with, well, when do we shift that model? Because if we do it in the spring term, fall students might say, well, I had to pay. Right? So we really grappled with how to do this. We brought it to this group. They decided upon summer, because it was the least disruptive, with the least amount of enrollments.
It'll give us a test bed for when the fall happens and we have more students on campus. That was kind of our test to see if this would work. And so I encourage you to think about this at your own institution. Alright. So now we're gonna talk a little bit more about product.
And what I wanna introduce here are some key operational questions. These are questions that were asked when we were building the sustained model, and I also ask them of myself almost on a monthly basis. So, things that you should be thinking about: Who's responsible for what? What will get done, and where? Which environments, applications, and integrations that enable teaching and learning and automate processes will we utilize? We have decisions every day. There are ten tools that can do the same thing, give or take.
What will be reported and how, and how will it be overseen? So, again, I encourage you to think about how you use this on your campus. We're gonna talk about the first question, which is who is responsible for what. We're not showing you an org chart; that's intentional, because each institution is different, and I do not want you to think that because this worked for UCLA, it will work for you. What we're really gonna dive into right now are some of those key operational roles.
And so this is just a table, and I'll take you briefly through it because we have a lot more slides. On the left hand side here, we have the portfolio product manager and principal product manager, and the way I want you to think about this is with a product mentality: we have forty LTIs right now, some of them at root, some of them at the subaccount level, plus the learning management system itself, Canvas as Bruin Learn. These are the folks that are really responsible for engagement with governance, requirements gathering, thinking about the roadmap, thinking about prioritization, really front and center, thinking about that evolution. Then we have our academic technology manager and our academic technology analyst.
These are folks that log in to ServiceNow daily, answering tickets, holding trainings, escalating to the vendor. They are experts in these tools. Right? So if somebody says, how do I do x, y, and z, or this doesn't look the same way it looked yesterday, can you look at it? Those are the people really dealing with that. These folks, in addition to answering tickets, and I think this is unique to our Center of Excellence, are also really engaged in the community and will attend faculty meetings or other events on campus so that they can stay up to date on what the priorities are.
And then lastly, we have our LMS administrators. We have a couple here; depending on the scale, size, and scope, you might have more. These are folks that are configuring the LTIs, configuring settings like term dates, working a little bit more on the back-of-house side of things. This group is really important for us because not only are they working within Canvas, but they're working really closely with our application development team. I mentioned we have a hundred and seventeen customized SIS integrations.
And so they help with testing, prioritizing, and building those out. Okay. So we talked a little bit about the who, and now we're gonna shift to some of those questions about what gets done. And so, prioritization: if you ask ten people what to prioritize, they're gonna give you ten different answers. And that's normal. That's a human instinct.
Right? And so what we decided to do is think about some guiding principles to help us with decision making. And this actually took shape because when I joined UCLA and looked at our backlog, I think we had a hundred-plus things in it. And I was like, how do we even prioritize delivering on these backlog items? So we're gonna talk about some of these principles today, but I'll run through them quickly here. Achievability: is it possible? That's the first thing we ask ourselves.
Second is desirability: what's the effect on the community? Third, impact: how valuable is the proposed change? And value can mean many different things, so please don't take this as a hard and fast rule. And then effort: how much time and resources would be required. So we're gonna drill in a little bit to achievability, desirability, and impact, and I'm gonna go through a little bit of a flowchart here. So, is something achievable? Is it possible? These are the questions we ask ourselves. Can it be done by the Bruin Learn CoE? If the answer is yes, we can do it in-house, internally.
That takes its own kind of process in terms of how we fulfill that request. Secondly, yes, it could be done, but we are relying on other resources at UCLA. And that has its own pathway and engagement in terms of prioritization and conversations. The third would be no, it's not something we can do, but we can escalate it to the vendor.
And that might set its own expectations with governance and with the community. Then lastly, no, it's just not something that we can do, maybe for a policy reason or some limitation within the technical environment. That's the first question. Second, desirability.
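As an aside before we get to desirability: the four achievability outcomes just walked through could be sketched as a small triage function. This is purely illustrative; the function name and boolean inputs are our own shorthand, not anything from UCLA's actual process.

```python
# Illustrative achievability triage mirroring the four outcomes from the talk.
# The flags and return labels are assumptions for the sake of the example.

def achievability(in_house, needs_campus_partner, vendor_can_fix):
    """Classify a request by who, if anyone, can fulfill it."""
    if in_house:
        return "fulfill internally via the CoE process"
    if needs_campus_partner:
        return "coordinate with other UCLA resources"
    if vendor_can_fix:
        return "escalate to the vendor and set expectations"
    return "not achievable (policy or technical limitation)"

print(achievability(False, False, True))  # escalate to the vendor and set expectations
```

Each branch maps to a different follow-up conversation, which is really the point of asking the question first.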
So what's the effect on the larger Bruin Learn community? It probably seems obvious that if there's a positive effect, we would proceed with it. Right? But most of the requests that we get fall into this unclear bucket. A request can just be a ticket that says, I would like this tool, or I would like to do this thing, from somebody across campus. And so, normally, that requires a little bit more conversation to understand what they really mean by that. Unintended consequences: this happens more often than you think, right, where you think that something is desirable, and you turn on a setting, and all of a sudden it changes something else in the environment.
And so we have to think about that as well. And then impact. I had called this out before: impact can be measured in so many different ways. So these are the questions that we ask ourselves here. How valuable is the proposed change to address key issues? I'm gonna skip down to the bottom here. The metric that a lot of people use is how many users: oh, twenty thousand students will be positively impacted by this.
I should prioritize it. That's not always the case, but it's one question we ask ourselves. And this information we present to the Bruin Learn Working Group. Right? Does the request address any known pain points? It might be one request, but it might actually have a cascading effect and help with five different things that we have in our backlog. That's a consideration when we talk about prioritization.
And then: does this align with strategic goals or priorities? At UCLA, we have a lot of courses with five hundred-plus students enrolled in them, and we know that we're trying to increase enrollment and increase the size of those courses. So we know a strategic priority is to make the teaching of those large-enrollment courses easier, specifically around grading or some other strategies. Alright. So this is my second favorite slide of the day. If we map things out on a matrix here, we have desirability on the bottom, which we just talked about, and impact on the left hand side, starting to mix things together.
Again, this is a framework to talk to governance when we have faculty and folks in the room who might not be IT professionals, or might have their own biases around what we prioritize, to help think about where things fall on this scale. And so I'm gonna call your attention to the top left here: something that's high impact and high desirability seems like a sure thing. Of course we should do that. Right? But oftentimes, when these things are highly impactful, they actually require a very different set of change management and rollout strategies, and so that's a trigger for us as well. The example I gave here is upgrading Kaltura from version 1.1 to 1.3. Right? It changes the interface, changes features and functionality, fixes bugs. So it's not just like, oh yes, we do it, and everything sticks and everyone's happy. There is this high-impact component here. I also wanna call your attention to the right side, which is the must-dos. We set the context in governance really straight that there are gonna be things we have to prioritize because they just need to get done.
Right? They might not have high desirability, but there might be some bugs having an impact, and we just need to roll that out. We've dealt with that a few times. Or we need to fix things: system performance, etcetera. I'm gonna invite my colleagues here if they wanna add anything else about this slide. I know you want to.
I was just gonna add that, like Alana mentioned, there were over a hundred enhancements in the backlog when we made that transition from the end of the project into operations. And having this type of structure, and a huge shout-out to Alana for setting this up, is gonna be critical and super helpful for evaluating all of those items. That's probably like, yes, duh. But during the project itself, you know, there were a lot of enhancement requests, and we had to prioritize actually being ready for those go-lives, having all of our core requirements and configuration set.
And then saying, okay, we hear you, and we understand there are all of these existing enhancement requests, but those are gonna come as part of our future-state sustained model. So it was a really nice transition from, again, project to operations, having this in place and being able to bring it to the Bruin Learn Working Group and even the Support Community. Alright. I'm gonna add one more thing that I think is really relatable. How many of you have come from an institution that had a self-hosted LMS?
Okay. Enough. So there's a special challenge that comes when you're going from a self-hosted, highly customized LMS to a SaaS product that is not meant to be highly customized. I see all of the nods. This is absolutely true at UCLA as well.
In a self-hosted environment, the community tends to get really attached to the idea that if you have a need, somebody will go build that for you. Right? I need this button. I submitted a ticket. Somebody's gonna go build the button, because we own that platform and we have the resources to do it. I think that this framework really emphasizes UCLA's approach to how to navigate that transition.
Right? It's not as simple as, I'm sorry, we're not going to do that anymore. It has to be meaningful, and the community really needed to wrap their heads around how we would think about prioritizing and selecting the items that really were impactful, and come to the understanding that we probably aren't going to move forward with all the requests. But at least now we have a framework to explain at what phase of that process we determined that that's the case, and why. And I'll be honest, a conversation we're having right now is: do we think about this every term? Most of our updates happen before the start of term, because we know that's when instruction's not happening. Do we always carve out time for something that's low impact and high desirability? Because maybe that won't be selected always, but we wanna make sure that we're including that.
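The impact-and-desirability triage described above can be sketched in code. This is a hypothetical illustration, not UCLA's actual tooling: the 1–5 scoring scale, the thresholds, and the lane names are all assumptions.

```python
# Hypothetical sketch of the impact x desirability triage described in
# the talk. Thresholds, field names, and lane names are assumptions.

def triage(request):
    """Classify an enhancement request into a release lane.

    request: dict with 'impact' and 'desirability' scored 1-5, plus an
    optional 'must_do' compliance/bug flag that bypasses scoring.
    """
    if request.get("must_do"):
        return "schedule-now"          # compliance and bug fixes jump the queue
    impact, desire = request["impact"], request["desirability"]
    if impact >= 4:
        # High-impact items (interface changes, new workflows) need change
        # management and a before-term rollout window.
        return "before-term-rollout" if desire >= 3 else "defer"
    # Low-impact, high-desirability items get a reserved slot so they are
    # not perpetually crowded out of the before-term window.
    return "reserved-slot" if desire >= 4 else "backlog"
```

The "reserved-slot" lane encodes the open question from the talk: deliberately carving out room each term for low-impact, high-desirability work.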
So we're iterating on this framework as we go. So the last slide that I'm gonna show you, before we wrap it up and open up to questions a little bit further, is communicating and engaging. Right? We talked about product management and user experience and some frameworks that we have in place here. But unless people know about them, it's not really an effective strategy. And so what I'm gonna show you is a template that we created for when we communicate updates, things that we want people to know, that's consistent, so they understand the why here. And so I'll take you through this really quickly.
This is an example of something completed. What we aim to do is, about a month before an enhancement comes out, we share these slides with the community, the Bruin Learn support community, the Bruin Learn Working Group, anybody else who's involved, and ask them for any questions. And so what we have across here is a tool or product. This one happens to be Bruin Learn, right, our main learning management system. What would be changing, just being as to-the-point as we could be.
How will this improve user experience? This was a change in culture, to say, this is what we're thinking about. If we cannot articulate why this would improve something, then maybe this is an enhancement we shouldn't be proceeding with. Why was this prioritized? This transparency was new for a lot of people. They might not always agree with why something was prioritized, but at least we're stating why it was prioritized.
That lets people know whether we're acting because this came from the top down, or because this is a compliance issue, or because Canvas is making us do it. Right? And then the actions, resources, and when this proposed change will take place. And then when we're done, we mark it completed. This is a Google Doc. It's a living document that is accessible, so you can see all of the changes we've ever made since we started sustained state.
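The update template described above can be expressed as a structured record. This is our own sketch, not UCLA's actual tooling: the field names mirror the slide's columns, and the readiness check encodes the cultural rule from the talk.

```python
from dataclasses import dataclass

# Sketch of the enhancement-update template as a structured record.
# Field names mirror the slide's columns; the readiness check is an
# illustrative framing of the rule described in the talk.

@dataclass
class EnhancementUpdate:
    tool: str                   # e.g. "Bruin Learn"
    whats_changing: str         # as to-the-point as possible
    ux_improvement: str         # how this improves user experience
    why_prioritized: str        # top-down, compliance, vendor-driven...
    actions_and_resources: str
    effective_date: str
    status: str = "planned"     # flipped to "completed" when rolled out

    def ready_to_announce(self) -> bool:
        """If we cannot articulate a user-experience benefit, the
        enhancement probably should not be proceeding."""
        return bool(self.ux_improvement.strip())
```

Keeping every update in one schema is what makes the living document skimmable: every entry answers the same questions in the same order.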
Here's a few other examples. I'm not gonna read through them, but I just wanted to show you them in practice. I love that there's exactly two reactions to these slides. There is the heavy squint that most people were doing. And then there was the, like, technology-enabled person who took a picture and zoomed in on their phone the whole time.
Yep. Yep. I really like that. Yeah. Alright.
So, our last slide before we open the questions, and I'm gonna hand it back off. Okay. I'm gonna wrap it up. You have spent thirty slides' worth of time listening to us. So your reward is the takeaway slide.
This is everyone's favorite. You can take a picture and you can forget everything else we talked about, because here are your takeaways. Please don't. Okay. What are the things that are most important to think about, from our perspective, as you go through this effort to stand up your sustained operations? SaaS products, and especially products like Canvas, though this applies to many different SaaS products, really benefit from a more product-focused approach.
Right? Thinking about it as a product, and that it should be managed as such. With that in mind, there are certain staffing requirements that exist as part of that. There aren't that many schools that have product managers on staff. The LMS is not very often thought about as a product to be managed in that way. How do you manage releases? How do you work on your roadmap? Right? There's a lot of, we've implemented and now it's done, and we will just maintain it now.
And so this idea of thinking forward and thinking about growth and innovation really needs to be enabled by both the approach and the staffing that goes along with it. We also need to think about custom app dev. Even if your institution doesn't have a large engineering team or an application development team, there will be custom development. There is almost no way to avoid it. Even if it's just your SIS integration or your core platforms that you're integrating, there will continue to be needs for custom development, and it's important to plan for that ahead of time.
LTI tools. Okay. Yes. If you move LTI tools to a centralized model and make them available to the entire enterprise, like iClicker, it's going to cost more money, but the reach is so much greater. We're impacting the entire student and faculty population, as opposed to paying a lot less and impacting maybe only one or two academic units.
Academic partnership is really, really, really important. Remember, we don't want it to feel like just an IT project. We want it to feel like a collaboration between the academic entities within the institution and the IT entities. We want them to collaborate and work together and really inform the approaches that are used to implement the product as you go through your implementation.
Lastly, tap into the functional expertise on the team. There is so much institutional knowledge that can be gained from having people at the institution work on the project, meaning not just hiring outside of the institution for that project. Taking advantage of the historical knowledge of the institution, understanding the technology and applications that came before, even knowing the sentiment of the faculty and staff, what excited them previously and what to watch out for, so you don't make those mistakes again. Being able to tap into that knowledge and take advantage of your community is just so critical to the success of the program. And we're done.
Anchor drop. Yeah. Any questions? Yep. Go ahead. I have a question about your backlog requests.
So you said it was decentralized before. So were those requests coming into, like, one place, or were they coming from all over? Yep. So the question, I'm gonna paraphrase it, let me know if it's accurate, is: with all the backlog requests that we currently have, and that are growing by the week, how are those being taken in? How are we receiving them, or how are we aggregating them? Yeah, I just wondered, like, were you collecting those from all of these different sources? Yes.
So they come in a variety of fashions. They come in through ServiceNow tickets. They come in through one-on-one meetings. We get a lot in Slack. We get a lot just in trainings with faculty.
So it actually took us about five or six months to aggregate all that together into a singular repository. And now we're thinking about, how do we present them? Hopefully next year we'll be here, and we'll show you what our roadmap looks like and how we manage that portfolio. It came in from a lot of different areas. And I'd say it's not perfect, but we tasked someone, and it's the product managers in this particular scenario, not the service delivery folks, to read through it and aggregate it all together. Oh, yeah.
They're still coming in from all over. We are gonna be setting up a singular intake through a product tool, so that we don't need to manage it that way. And we will be having voting on those, as well as community input there. Yeah.
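The aggregation step described above, folding requests from tickets, Slack, and trainings into one repository, might be sketched like this. The channel and field names are illustrative, not the actual ServiceNow or Slack schemas.

```python
# Hypothetical sketch of folding enhancement requests from several intake
# channels into one deduplicated backlog. Field names are illustrative.

def aggregate(*channels):
    """Merge per-channel request lists, keyed by normalized title,
    counting duplicate reports across channels as votes."""
    backlog = {}
    for channel_name, requests in channels:
        for req in requests:
            key = req["title"].strip().lower()
            entry = backlog.setdefault(key, {
                "title": req["title"],
                "sources": [],
                "votes": 0,
            })
            entry["sources"].append(channel_name)
            entry["votes"] += 1
    # Duplicate reports across channels are a cheap desirability signal.
    return sorted(backlog.values(), key=lambda e: -e["votes"])
```

Even this simple merge surfaces the signal the talk hints at: a request that arrives through three channels is probably more desirable than one that arrives through one.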
I'm looking at one of my colleagues here who might help me with that. I would say in a given term, and we're on the quarter system, so in an academic quarter, we probably get between twenty and thirty requests. And these can be big ones, like, I'm gonna buy this tool and I need you to help us implement it. Or they can be little ones, like, I need this quizzing functionality to be available. Or they can be something specific, like, can we change the navigation?
So they vary. Yep. Thank you for the question. Yep. For your instructors and students.
How many LMS administrators and developers do you have? Sure. So the question was, at UCLA, how many LMS administrators do we have? You're talking particularly about the function of configuring LTIs and so on? Yep. So we have two right now. Right. That's not enough.
We have a lot of manual processes at UCLA. So we have, like, manual cross-listing. So at the beginning of every term, it takes a considerable amount of time. So I'd say it's really dependent on what's automated versus what's manual. But for six hundred and nine subaccounts and forty-thousand-plus students, two LMS administrators is just not enough.
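The manual cross-listing mentioned above is the kind of per-term chore that can often be scripted. Canvas's Sections API does expose a cross-list endpoint (`POST /api/v1/sections/:id/crosslist/:new_course_id`); everything else here, the base URL, the token, and where the section/course pairs come from, is a placeholder assumption.

```python
import json
import urllib.request

# Hedged sketch of scripting term-start cross-listing through the Canvas
# Sections API. base_url and token are placeholders for your instance and
# an admin API token; the (section_id, course_id) pairs would come from
# your registrar's combined-course list for the term.

def crosslist_url(base_url, section_id, target_course_id):
    """Build the cross-list endpoint for moving a section into a course."""
    return f"{base_url}/api/v1/sections/{section_id}/crosslist/{target_course_id}"

def crosslist_section(base_url, token, section_id, target_course_id):
    """Cross-list one section; returns the updated section as a dict."""
    req = urllib.request.Request(
        crosslist_url(base_url, section_id, target_course_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Even a small script like this turns hours of per-term clicking into a reviewable batch job, which is exactly the automated-versus-manual trade-off the answer describes.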
So we do. We have developers. We have an application development team that's actually not on our team, that we work cross-functionally with. And we have two full-time resources dedicated to that, just for the Bruin Learn CoE. No.
It's a fair question. I think it's enough for right now. We will need to make some decisions in the future about whether we outsource to vendors or continue in-house development. Thank you. The types that you were describing.
Oh, the roles? Each of those. Yeah. Yep. So the principal product manager and the product manager are the folks that are really dealing with the backlog daily. They helped come up with that matrix model, starting to think about how we prioritize things.
They're also thinking about vendor capabilities. So, the current forty-plus tools that we have, looking at them to see where they overlap, where they grow, where they evolve. So they're a little bit more on the product side, thinking about use cases, how are we evolving, how are we engaging. Oh, how many? I'm sorry. Yes. Yes.
Sorry. We have one principal product manager, one product manager, one academic technology manager, four academic technology analysts, and two LMS administrators. We also have cross-functional, like I said, application development and some change management folks as well. Is all of your ed tech work focused solely on the LMS? It's on our learning management system and the ed tech tools that we're responsible for.
Things like DocuSign and Google, those are on a different team. Yeah. So the elevated permissions we have for the subaccount admins vary based on what was decided early on by the dean or academic leadership. They can customize the way their course navigation looks. They can customize roles and permissions for their particular instructors or TAs or instructional designers.
There's considerable debate about how much we should allow the distributed IT folks to customize it. But for right now, as long as those folks are working closely with us, it's okay. It sounds like a lot of thought and effort went into the community support model itself. And so I was wondering, were there any things that you leveraged for the structure, or how much was your own? Yeah.
This sounds like an implementation question, so I'm gonna ask Carly or Sally if they wanna cover it. Yeah. I love the question. Yeah. So the question was, how much of the resources and documentation and community support was developed in-house versus taking advantage of pre-existing resources?
Secret: I used to work for Instructure in my previous life. So I'm gonna come at this from an interesting angle. Most schools use the pre-existing documentation. Like, in my experience, they're like, boom, done, link out to it, call it a day. It's always gonna be updated.
Right? So that's a really common approach. UCLA is not common. UCLA wants to do their own thing. So every training video, piece of documentation, and resource guide was developed in-house and branded for Bruin Learn. The word Canvas wasn't anywhere on the documents.
Everything was branded to feel like a really cohesive experience. Yes, there was a lot of input that went into that to make it really successful. But that was an investment UCLA really felt was important from a brand perspective: retaining that brand within the institution. Yeah.
It was hosted in Bruin Learn, and then we had all the other resources on our public-facing website. And I think we've got just one minute left, or we're right at time. Is there any final question that we can answer? Yeah. Did you require faculty to do training? The question is, did we require faculty to do training? I'm looking at my colleagues from the team here, and the answer is no. Strongly encouraged. It was hard to work with our campus leadership to make anything mandatory.
We did have designated folks in all the academic units as leads, who were responsible for liaising and bringing things back. Oftentimes there were trainings that we did within faculty meetings, bringing our trainings to them. In terms of asynchronous trainings and milestones to gain access into systems, no.
Did not do that. Well, thank you all. You are a great audience. Please feel free to reach out to us if you have any questions.
So, very much embedded in this space. And like Alana said, I was involved with the UCLA program as the program director. So I'm gonna be speaking a little bit more about that to set some context for our journey here today. And lastly, I am Sally Maholski. I served on the Deloitte side of the project as well.
I was the LMS subject matter expert on the UCLA implementation project. I worked cross-functionally across all of the work streams that were represented during the implementation. That includes organizational change management, academic services, as well as the IT space for the project. I have over ten years of experience in LMS and academic technology. I've worked across the K-20 space.
And I'm really excited to bring some of that expertise to the table to support both Deloitte and UCLA. Alright. So here's our agenda. I'm gonna calibrate you for a moment here. We're gonna walk through four areas.
And I'm gonna try something. I've heard InstructureCon likes a theme. So I'm gonna introduce this agenda in a little bit of a theme-y way. So if you like it, laugh. If not, I'll be embarrassed for the rest of the day.
It's fine. So our voyage will take us across the implementation terrain of UCLA's new LMS. We'll learn about the strategies used, the storms we weathered, and the triumphs we accomplished. We'll then set sail on the sea of transparency. We'll dive deep into how engagement with stakeholders can inform and guide the evolution journey of our learning management system, and it is just that: I think everyone here can agree, it's in constant change and evolution.
Our third stop will bring us to the shores of operational excellence. Here, we will discover how we constructed an operating model that intertwines product management and user experience. And of course, what's a journey without a chance to explore?
We'll drop anchor and we'll have a Q and A session for you at the end of this. Thank you. Thank you. Thank you. Thank you.
Some of the key takeaways here: considerations for developing an operating model focused on product management and operational efficiency, and then some best practices for establishing transparent governance. I know governance can sometimes be a bad word; it has bad connotations. We're gonna talk a little bit about what we're doing at UCLA that informs continuous evolution of your learning management system. So now I'm gonna hand it over to Carly. Alright.
So we're just gonna start with a little bit of context around what the LMS implementation at UCLA was, what the scope was, what the structure of the team was, and how we approached it. And then we'll get into a little bit more on operations, like Alana beautifully outlined. Just to start us off, the LMS implementation took place primarily over one academic year. There are a number of metrics and key things to highlight here on the right-hand side of the slide, or left-hand side, I guess, if you're looking at it. But we did have two pilots.
We did have go-lives sequentially across the different academic quarters. And this did cover the scope of the entire UCLA campus, moving off of Moodle and onto Canvas. Just to provide a little bit more context, this wasn't just about moving off of Moodle, which is a very custom, homegrown type of platform, onto a SaaS platform, which we all know as Canvas. It was also about moving from a distributed service model onto a more campus-unified approach. And so when we were building this implementation team, we really wanted to focus on bringing in subject matter experts and other campus leaders to be part of the program team.
So what you see on the other portion of the slide here are our key work streams, which really enabled collaboration across and throughout the implementation. Starting with academic on the bottom left there, we brought in a team of teaching and learning specialists, instructional designers, and others throughout the campus who had previous roles or skill sets specifically in the academic-focused teaching and learning space, to support the foundational components of the LMS. Then we had app development. These were our IT experts focusing on the integrations, making sure we had all the back-end systems configured and, you know, supporting all the technical aspects that we all know need to happen as part of the configuration and implementation.
Then we also had a hypercare, or post-go-live, support team thinking about those aspects. Where users would go for support ties into a lot of the work that Alana came in and led, but we had a team, concurrent with the implementation team, to think about and develop that initial structure for where users would go for their support and have access to many, many resources. If I keep going around the circle here, it's not really a circle, but OCM focused on all of our engagement with faculty and other instructional staff, as well as our deans, vice provosts, a lot of leadership across campus, to have that tailored approach. We took a very white-glove approach when it came to communicating and engaging with all of our key stakeholders, and that was what this work stream was truly focused on. And then we had a separate work stream on data and reporting, building out robust dashboards that helped with engagement and tracking across the implementation. We built some dashboards specifically in Tableau and used those for our weekly reporting, which were really helpful for making sure we were tracking against our milestones.
And then if I skip ahead and think about some of the lessons learned as we implemented the LMS and then transitioned to our operational phases, there were really two key buckets that we looked at: one around people-focused communication and all of the engagement that we had with our stakeholders, and then also anticipating timeline risks. For that first bucket, this kind of follows along with the work streams that I mentioned on the previous slide.
So it was really making sure we had academic, technical, and administrative involvement throughout the entire implementation, it not just being seen as a technical implementation, and making sure we were involving all of those aspects across the program. We also really tried to focus on the message, and I think you can always continue to do this more, that it's not just a technical implementation: we're trying to make changes from a teaching and learning perspective and really reinforce the transformation, along with the support shift I mentioned, moving from that more decentralized model to the unified support approach. It's also always important to set expectations for our end users around when key points are happening across the program, like content migration, how to use the platform, key dates, and other details, to make sure everyone's aware of what's going on with the transformation.
And then again, involving university leadership was key for the success of this implementation. And then, if I put my project manager hat on here and think about all the timeline risks: we did follow closely with the academic calendar, always keeping it in mind and not having go-lives right around other key milestones across the academic calendar. We did have a couple of go-lives, as you saw on the previous slide, but really take advantage of proofs of concept and MVPs, testing out LTIs and engaging focus groups, and bake that into your overall plan. Also be deliberate about the end of the project and the beginning of sustained operations. We kind of had this a little bit fluid for a while, and then Alana came in and we had a clear timeline around transitioning to sustained operations, but it's super important for engaging with all of your end users.
And then the last two bullets are really around consultants and contractors coming in to support, what that transition plan will look like, and thinking about that well in advance, as well as hiring for your long-term sustained model, the timelines and how long that might take, and anticipating contingency if you need other support in the interim while you're fully staffing up. Sally's gonna talk a little bit more about our engagement and some of the other expectations we set. Okay. Nobody's sleeping yet, right? Everybody's still awake.
Everyone's still following. Okay. Good. So, you've heard centralized and decentralized a lot already in our presentation. It's a really important theme in the way that we thought about the implementation of the system and also our sustained operations.
So I wanna highlight a couple key areas that we were really thorough in thinking about, with special respect to the idea that the decentralized nature of the institution really lent itself to a lot of siloing. Right? Each academic unit was organized and managed independently at the start of this project. As part of that transition to a more centralized model, we were using the LMS as sort of a proof of concept to start to bring things together. So some of the things that we had to think about were roles and permissions, for example. Right? Canvas is very flexible about roles and permissions. You can manage them, adjust them, delegate them.
You can have them at different subaccount levels. But when you think about the level of flexibility and control that our academic units had prior to the implementation of Canvas, we wanted to help retain that sense of independence for those academic units while also maintaining a sort of standard that was created by the implementation program. I think the same thing goes for the use of LTI tools, right? That is always a hot topic as part of implementations. When we started the program, we did an inventory of all of the LTI tools that existed across the institution. There were hundreds.
There was a lot of overlap. There were a lot of tools that had the same purpose. How many polling tools does one school need? I know there's some people here who are thinking about that. As part of this exercise, we did a massive LTI evaluation project where we really looked at what are the best products for the institution, and how can we expand their capabilities outside of just one academic unit and make them valuable for the entire enterprise. That was a really meaningful aspect of the project, and we feel like it brought a lot of value to the university.
SIS integrations and those dependencies as well: how do we align the subaccount structure to the organizational structure of the university? It's really easy to put a subaccount structure on paper, but when it doesn't align with your organizational structure or the way your SIS is organized, you guys know what that looks like. It's super messy, and it's not great for data. So we really had to think critically about that. To give you perspective, UCLA has six hundred and nine subaccounts. Right? It's a lot of subaccounts.
I know that's horrifying to think about. And I know there are some admins here, because they're like... And a hundred and seventeen customized SIS or AWS integrations. So there's a lot here. This is a really complex architecture. Now, if you have a simple one, that doesn't mean this isn't going to apply to you; it's this idea of creating cohesion between your SIS, your organizational structure, and the way you organize Canvas around that. The last thing was, prior to the implementation of Canvas, it wasn't made very obvious within the institution which tools and capabilities were meant to exist as part of the LMS, and which of those tools should and do live in other systems.
Whether that's the student portal, the faculty portal, or other spaces within the institution. So we wanted to make a really intentional effort to clearly message what capabilities you should expect and take advantage of within the LMS, and what areas you should look elsewhere for, and create those recommendations so that people really felt confident they knew where to find the tools they needed. On the other side, talking about the campus-wide impact, there were a few things that became really obvious to us that we had to focus on.
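The subaccount-to-SIS alignment Sally described is easier to keep honest when the hierarchy is generated rather than hand-built. Canvas's SIS import accepts an accounts.csv with `account_id`, `parent_account_id`, `name`, and `status` columns; the org tree below is purely illustrative, not UCLA's actual structure.

```python
import csv
import io

# Hedged sketch: deriving Canvas's SIS-import accounts.csv from a single
# institutional org tree, so the subaccount hierarchy and the SIS stay in
# lockstep. The org entries here are illustrative placeholders.

ORG = {
    # account_id: (parent_account_id or None, display name)
    "COLLEGE":      (None,      "College of Letters and Science"),
    "COLLEGE-HIST": ("COLLEGE", "History"),
    "ENGR":         (None,      "School of Engineering"),
    "ENGR-CS":      ("ENGR",    "Computer Science"),
}

def accounts_csv(org):
    """Emit Canvas accounts.csv rows from the org mapping."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["account_id", "parent_account_id", "name", "status"])
    for account_id, (parent, name) in org.items():
        writer.writerow([account_id, parent or "", name, "active"])
    return buf.getvalue()
```

With six hundred and nine subaccounts, regenerating the hierarchy from one source of truth is what prevents the "super messy" drift between the SIS and the LMS that the talk warns about.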
Organizational change management is key. Right? Too often these projects are treated as just IT projects: they are rolled out by IT, they are siloed, and they do not include the thorough change management that helps us integrate our academic audiences, our community, and the other aspects of the project to ensure that it's really successful. We needed to make sure that there was clear ownership of that organizational change management and that it was clearly aligned to the goals of the project. We also, as Carly mentioned previously, thought about the academic calendar. Right? Like, do you want a go-live right after Christmas? Because nobody's working right before that, so there aren't gonna be any resources.
Don't ask me why I know that. And lastly, this really gave us the opportunity to refresh our TPRM process, looking at things like security, accessibility, and privacy. Right? What's the standard that we wanna use moving forward? Hint: UCLA is very rigorous. And we wanted to make sure that all of the LTI tools, all the integrations, all the configurations of the platform met that standard. Next slide.
Okay. Shifting to operations. Okay. So as we were thinking about the formal end of the implementation project and the transition to what we're calling sustained operations, or, like, what comes next, we needed to think about some themes that would be applicable across centralized and decentralized capabilities.
Right? So we came up with these three pillars that we felt were really represented across the board, that we could use as a foundation to build out our strategy for sustained operations. So we've got our technology expertise. Right? This is everything from infrastructure to custom app dev to compliance, all the favorite things. Then analytics and data. I actually really love analytics and data.
But thinking about learning outcomes and advancement, and how we can take advantage of data to positively impact the institution. And then lastly, you know, we're trying to connect with peer institutions. What are peers doing? How can we take advantage of some of that? Community support was another one. This has many shapes and forms. This is your FAQs.
This is your training. This is office hours. This is troubleshooting. Right? And this is really important to consider because every academic unit had their own community support model. How do you take that and make it really effective in a centralized manner? And then lastly, campus engagement.
I truly think that one of the biggest reasons this implementation was so successful was the really effective engagement of the community as part of the project. They were part of the solution. They were advising the project, whether on advisory committees, in focus groups, or through student and faculty feedback. This really drove the decisions that we made. It informed our governance for the project, and it ensured that we didn't run away with decisions, that everything was thoroughly thought through. Alana's gonna talk a lot more about governance and decision making, and I am gonna start to turn this over to her.
Alright. So so governance and I mentioned at the top of this presentation that sometimes governance can have a reputation. And so what I wanna I wanna kinda highlight here is that governance is supposed to speed up decision making. It is not supposed to bottleneck it. That is that is the intent.
The intent is to think about resource allocation. It's to think about how decisions get made and alignment. And I wanna take you through this, this visual here. And talk through it a little bit from the bottom up, because I think it's important to think about how we scaffold this. So here we have at the bottom, we have something called the Brew and Learn Support Community.
This is open to anybody on campus. It meets weekly. These are really those front-line soldiers, the day-to-day people in distributed IT dealing with faculty, students, etcetera. Sally had mentioned we have six hundred and nine subaccounts and about a hundred and fifty people with elevated permissions within Canvas. That means about a hundred and fifty people who are supporting the community besides our team.
So these are folks that we meet with weekly, where we can talk about priorities and think about documentation; it's a little bit more like a community of practice, a little bit more operational. Then we have the Bruin Learn Working Group. The Bruin Learn Working Group is a formal governance body. There's about twenty members on it. We do have some of the distributed IT folks, but we also have the registrar.
We have accessibility folks. We have someone from student affairs. We have instructors. We have some advisors. So this is kind of a very diverse group.
It meets monthly, and it's responsible for the direction of the Bruin Learn ecosystem. This group votes and helps us prioritize, and then, if needed, escalates to the Academic Technology Committee. The Academic Technology Committee is a group of senior leaders that are really more about approving our recommendations, but most of the work happens at the Bruin Learn Working Group and the Bruin Learn Support Community. Okay. So this is one of my favorite slides of the day.
And I hope that this helps you at your home institutions. Oftentimes with governance, everybody wants to be involved with decision making. Right? There's twenty people in the Bruin Learn Working Group, and I probably get weekly requests for people to join. If I let it, it would be fifty people, sixty people, seventy people. Everyone wants to be involved with decision making, but nobody actually wants to make decisions or be the head honcho over it.
And I see a lot of heads nodding. Right? So this is the way that we were thinking about approaching this, and this is something that we shared with the Bruin Learn Working Group; it took us three months to align on. Right? But the way I want you to think about this here is that decisions happen and get made daily, weekly, monthly. Right? So on the left-hand side here are the characteristics, the things that we think about when we're talking about making decisions: financial impact, effort, whether there's custom development, service policy, etcetera.
The first column is the Bruin Learn Center of Excellence. That's my team. Right? We make decisions on a daily basis. If we had to bring everything to governance, it would really bottleneck progress and evolution. And so certain things that wouldn't be that disruptive fall within our purview.
Then we have the Bruin Learn Working Group, where most of the decision making happens. So maybe it's a new LTI contract or vendor. If it's between fifty and a hundred and fifty thousand dollars, this group would make that recommendation and decide on it. And then there's the Academic Technology Committee. I'll let you look at this later, and you can certainly ask questions at the end about it.
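For illustration, the tiered decision rights described here could be sketched as a small routing function. The dollar cutoffs echo the fifty-to-one-hundred-fifty-thousand range mentioned in the talk, but the exact criteria, parameter names, and logic below are assumptions for illustration, not UCLA's actual governance policy.

```python
# Hypothetical sketch of tiered decision rights. Thresholds and
# criteria are illustrative assumptions, not UCLA's actual rubric.

def decision_body(cost_usd: int, is_policy_change: bool = False) -> str:
    """Route a request to the lightest-weight body that can decide it."""
    # Large spend or service-policy shifts escalate to senior
    # leadership, which mostly approves recommendations from below.
    if cost_usd > 150_000 or is_policy_change:
        return "Academic Technology Committee"
    # Mid-range spend (e.g. a new LTI contract or vendor) goes to the
    # formal governance body for a vote.
    if cost_usd >= 50_000:
        return "Bruin Learn Working Group"
    # Routine, low-disruption changes stay with the operational team,
    # so daily decisions never bottleneck on governance.
    return "Bruin Learn Center of Excellence"

print(decision_body(100_000))  # → Bruin Learn Working Group
print(decision_body(5_000))    # → Bruin Learn Center of Excellence
```

In practice the real table weighs more characteristics (effort, custom development, service policy), so treat this as a sketch of the idea rather than the rubric.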
I'll give you an example of this in practice. So Sally said, you know, how many polling tools does an institution actually need? We have a few at UCLA. One of those is iClicker, and it was a student-paid model. That means that if an instructor chose to use iClicker, they'd put it on their syllabus, and the students would have to buy a software license for their phone or mobile device, or buy a physical clicker. We shifted that to an institution-paid model, essentially making it free for our students. And we grappled with: when do we shift that model? If we do it in the spring term, fall students might say, well, I had to pay. Right? So we really grappled with how to do this. We brought it to this group, and they decided upon summer because it was the least disruptive, with the least amount of enrollments.
It'll give us a test bed for when the fall happens and we have more students on campus. That was kind of our test to see if this would work. And so I encourage you to think about this at your own institution. Alright. So now we're gonna talk a little bit more about product.
And what I wanna introduce here are some key operational questions. These are questions that were asked when we were building the sustained model, and I also ask them of myself almost on a monthly basis. So, things that you should be thinking about: Who's responsible for what? What will get done, and where? Which environments, applications, and integrations that enable teaching and learning and automate processes will we utilize? We have decisions every day; there's ten tools that can do the same thing, give or take.
What will be reported and how, and how will it be overseen? So, again, I encourage you to think about how you use this on your campus. We're gonna start with the first question, which is who is responsible for what. We're not showing you an org chart, and that's intentional, because each institution is different, and I do not want you to think that because this worked for UCLA, it will work for you. What we're really gonna dive into right now are some of those key operational roles.
And so this is just a table, and I'm taking you through it briefly because we have a lot more slides. On the left-hand side here, we have the product manager and the principal product manager. The way I want you to think about this is with a portfolio and product mentality: we have forty LTIs right now, some of them at the root level, some of them at the subaccount level, plus the learning management system, Canvas, branded as Bruin Learn. These are the folks that are really responsible for engagement with governance, requirements gathering, thinking about the roadmap, thinking about prioritization; really front and center, thinking about that evolution. Then we have our academic technology manager and our academic technology analysts.
These are folks that log in to ServiceNow daily, answering tickets, holding trainings, escalating to the vendor. They are experts in these tools. Right? So if somebody says, how do I do x, y, and z, or, this doesn't look the same way it looked yesterday, can you look at it? Those are the people that are really dealing with that. These folks, in addition to answering tickets, and I think this is unique to our center of excellence, are also really engaged in the community and will attend faculty meetings or other events on campus so that they can stay up to date on what the priorities are.
And then lastly, we have our LMS administrators. We have a couple here; depending on the scale, size, and scope, you might have more. These are folks that are configuring the LTIs, configuring settings like term dates, working a little bit more on the back-of-house side of things. This group is really important for us because not only are they working within Canvas, but they're working really closely with our application development team. I'd mentioned we have a hundred and seventeen customized service integrations.
And so they help with testing those, prioritizing them, and building them out. Okay. So we talked a little bit about the who, and now we're gonna shift to some of those questions about what gets done. And so, prioritization: if you ask ten people what to prioritize, they're gonna give you ten different answers. And that's normal. That's a human instinct.
Right? And so what we decided to do is come up with some guiding principles to help us with decision making. This was actually formed because, when I joined UCLA and looked at our backlog, I think we had a hundred-plus things in it. And I was like, how do we even prioritize delivering on these backlog items? So we're gonna talk about some of these principles today, but I'll run through them quickly here. First, achievability: is it possible? That's the first thing we ask ourselves.
Second is desirability: what's the effect on the community? Third, impact: how valuable is the proposed change? And value can mean many different things, so please don't take this as a hard and fast rule. And then effort: how much time and resources would be required? So we're gonna drill in a little bit to achievability, desirability, and impact, and I'm gonna go through it with a bit of a flowchart. So, is something achievable? Is it possible? These are the questions we ask ourselves. Can it be done by the Bruin Learn CoE? If the answer is yes, we can do it in house, internally.
That takes its own kind of process in terms of how we fulfill that request. The second answer is yes, it could be done, but we'd be relying on other resources at UCLA. And that has its own pathway and engagement in terms of prioritization and conversations. The third would be no, it's not something we can do ourselves, but we can escalate it to the vendor.
And that might set its own expectations with governance and with the community. Then lastly, no: it's just not something that we can do, maybe for a policy reason or some limitation within the technical environment. That's the first question. Second, desirability.
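Those four achievability paths could be sketched as a tiny flowchart function. The parameter names and return strings here are made up for illustration; the four outcomes mirror the talk.

```python
# A minimal sketch of the achievability flowchart: can the request be
# done in house, with other campus resources, via the vendor, or not
# at all? Names and wording are illustrative assumptions.

def achievability_path(coe_can_do: bool,
                       campus_partners_can_do: bool,
                       vendor_can_do: bool) -> str:
    if coe_can_do:
        # Fulfilled internally, via the team's own intake process.
        return "yes: fulfill in house within the Bruin Learn CoE"
    if campus_partners_can_do:
        # Possible, but prioritization depends on other UCLA teams.
        return "yes, but relies on other UCLA resources"
    if vendor_can_do:
        # Out of our hands; set expectations with the community.
        return "no, but escalate to the vendor"
    # Blocked outright, e.g. by policy or the technical environment.
    return "no: blocked by policy or technical limitations"
```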
So what's the effect on the larger Bruin Learn community? It probably seems obvious that if there's a positive effect, we would proceed. Right? But most of the requests that we get fall into this unclear bucket. A request can just be a ticket from somebody across campus that says, I would like this tool, or, I would like to do this thing. And so, normally, that requires a little bit more conversation to understand what they really mean by that. Then there are unintended consequences. This happens more often than you think, right, where you think that something is desirable, and you turn on a setting, and all of a sudden it changes something else in the environment.
And so we have to think about that as well. And then impact, and I had called this out before: impact can be measured in so many different ways. These are the questions that we ask ourselves: how valuable is the proposed change to address key issues? I'm gonna skip down to the bottom here. The metric that a lot of people use is how many users. Oh, twenty thousand students will be positively impacted by this.
I should prioritize it. That's not always the case, but it's one input, one question we ask ourselves. And this information we present to the Bruin Learn Working Group. Right? Does the request address any known pain points? A request might actually have a cascading effect and help with five different things that we have in our backlog. That's a consideration when we talk about prioritization.
And then: does this align with strategic goals or priorities? At UCLA, we have a lot of courses with five hundred-plus students enrolled in them, and we know that we're trying to increase enrollment and increase the size of those courses. So we know a strategic priority is to make the teaching of those large-enrollment courses easier, specifically around grading and some other strategies. Alright. So this is my second favorite slide of the day. If we map things out on a matrix here, we have desirability on the bottom, which we just talked about, and impact on the left-hand side, and we start to mix things together.
Again, this is a framework for talking to governance when we have faculty and folks in the room who might not be IT professionals, or might have their own biases around what we prioritize, to help think about where things fall on this scale. And so I'm gonna call your attention to the top left here: something that's high impact, high desirability seems like a sure thing. Of course we should do that. Right? But oftentimes these things are highly impactful.
They actually require a very different set of change management and rollout strategies, and so that's a trigger for us as well. The example I gave here is upgrading Kaltura from version one-one to one-three. Right? It changes the interface, changes features and functionality, fixes bugs. So it's not just, oh yes, we do it, and everything sticks and everyone's happy. There is this high-impact component here. I also wanna call your attention to the right side, which is the must-dos. We set the context with governance really clearly that there are gonna be things we have to prioritize because they just need to get done.
Right? They might not have a high desirability, but there might be bugs impacting users that we just need to fix and roll out. We've dealt with that a few times: bug fixes, system performance, etcetera. I'm gonna invite my colleagues here if they wanna add anything else about this slide. I know you want to.
I was just gonna add that, like Alana mentioned, there were over a hundred enhancements in the backlog when we made that transition from the end of the project into operations. And having this type of structure, and a huge shout-out to Alana for setting this up, is gonna be critical and super helpful for evaluating all of those items. That's probably like, yes, duh. But during the project itself, you know, there were a lot of enhancement requests, and we had to prioritize actually being ready for those go-lives, having all of our core requirements and configuration set.
And then saying, okay, we hear you, and we understand there are all of these existing enhancement requests, but those are gonna come as part of our future-state sustained model. So it was a really nice transition from, again, project to operations, having this in place and being able to bring this to the Bruin Learn Working Group and even the support community. Alright. I'm gonna add one more thing that I think is really relatable. How many of you come from an institution that had a self-hosted LMS?
Okay, enough of you. So there's a special challenge that comes when you're going from a self-hosted, highly customized LMS to a SaaS product that is not meant to be highly customized. I see all of the nods. This is absolutely true at UCLA as well.
In a self-hosted environment, the community tends to get really attached to the idea that if you have a need, somebody will go build that for you. Right? I need this button; I submitted a ticket; somebody's gonna go build the button, because we own that platform and we have the resources to do it. I think that this framework really underpins UCLA's approach to navigating that transition.
Right? It's not as simple as, I'm sorry, we're not going to do that anymore. It has to be meaningful, and the community really needed to wrap their heads around how we would think about prioritizing and selecting the items that really were impactful, and come to the understanding that we probably aren't going to move forward with all the requests. But at least now we have a framework to explain at what phase of the process we determined that was the case, and why. And I'll be honest, a conversation we're having right now is whether we think about this every term. Most of our updates happen before the start of term, because we know that's when instruction's not happening. Do we always carve out time for something that's low impact and high desirability? Maybe that won't always be selected, but we wanna make sure that we're considering it.
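One way to picture the matrix discussed above is as a lookup from impact and desirability to a rough disposition. The quadrant labels below paraphrase the talk and are an illustrative sketch, not the actual slide.

```python
# Sketch of the impact/desirability matrix. Labels paraphrase the
# talk; treat them as illustrative assumptions, not the real slide.

QUADRANTS = {
    ("high", "high"): "sure thing, but plan change management and rollout",
    ("high", "low"):  "must do: bug fixes, system performance, compliance",
    ("low", "high"):  "consider carving out time for it each term",
    ("low", "low"):   "deprioritize for now",
}

def disposition(impact: str, desirability: str) -> str:
    """Map an (impact, desirability) pair, each 'high' or 'low',
    to a rough disposition for governance discussion."""
    return QUADRANTS[(impact, desirability)]
```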
So we're iterating on this framework as we go. The last slide that I'm gonna show you, before we wrap up and open it up to questions, is about communicating out. Right? We talked about product management and user experience and some frameworks that we have in place here, but unless people know about them, it's not really an effective strategy. And so what I'm gonna show you is a template that we created for when we communicate updates, things that we want people to know, that's consistent, so they understand the why. I'll take you through this really quickly.
This is an example of something completed. What we aim to do, about a month before an enhancement comes out, is share these slides with the community, the Bruin Learn Support Community, the Bruin Learn Working Group, anybody else who's involved, and ask them for any questions. And so what we have across here is the tool or product. This one happens to be Bruin Learn, right, our main learning management system. Then, what will be changing, being as to-the-point as we possibly can.
How will this improve the user experience? This was a change in culture, to say: this is what we're thinking about. If we cannot articulate why this would improve something, then maybe it's an enhancement we shouldn't be proceeding with. Why was this prioritized? This transparency was new for a lot of people. They might not always agree with why something was prioritized, but stating it lets people know whether we're acting because this came from the top down, because this is a compliance issue, or because Canvas is making us do it. Right? Then the actions, resources, and when the proposed change will take place. And when we're done, we write "completed." This is a Google Doc, a living document that is accessible, so you can see all of the changes that we've ever made since we started sustained state.
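As a sketch, that announcement template could be modeled as a simple record. The field names below are my paraphrase of the columns the speakers describe, not the actual document's headings.

```python
# Hypothetical model of the change-communication template described
# in the talk. Field names paraphrase the columns mentioned; they are
# assumptions, not the real document's headings.

from dataclasses import dataclass

@dataclass
class ChangeAnnouncement:
    tool: str                     # e.g. "Bruin Learn" (Canvas)
    whats_changing: str           # as to-the-point as possible
    user_experience_benefit: str  # if this can't be articulated, reconsider
    why_prioritized: str          # e.g. compliance, vendor-driven, top-down
    actions_and_timing: str       # resources and when the change lands
    status: str = "planned"       # flipped to "completed" when done
```

Each entry would be shared with the community about a month ahead, then its status flipped to "completed" once the change ships.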
Here are a few other examples. I'm not gonna read through them, but I just wanna show you this in practice. I love that there are exactly two reactions to these slides: there's the heavy squint that most people were doing, and then there's the technology-enabled folks who took a picture and zoomed in on their phone the whole time.
Yep. Yep. I really like that. Yeah. Alright.
So, our last slide before we open up for questions; I'm gonna hand it back off. Okay, I'm gonna wrap it up. You have spent thirty slides' worth of time listening to us, so your reward is the takeaway slide.
This is everyone's favorite. You can take a picture and forget everything else we talked about, because here are your takeaways. Please don't. Okay. What are the things that are most important to think about, from our perspective, as you go through this effort to stand up your sustained operations? SaaS products, especially products like Canvas, though this applies to many different SaaS products, really benefit from this more product-focused approach.
Right? Thinking about it as a product, and that it should be managed as such. With that in mind, there are certain staffing requirements that come with that. There aren't that many schools that have product managers on staff. The LMS is not very often thought about as a product to be managed in that way. How do you manage releases? How do you work on your roadmap? Right? There's a lot of "we've implemented, and now it's done, and we will just maintain it."
And so this idea of thinking forward, thinking about growth and innovation, really needs to be enabled by both the approach and the staffing that goes along with it. We also need to think about custom app dev. Even if your institution doesn't have a large engineering team or an application development team, there will be custom development. There is almost no way to avoid it. Even if it's just your SIS integration or the core platforms that you're integrating, there will continue to be needs for custom development, and it's important to plan for that ahead of time.
LTI tools. Okay. Yes, if you move LTI tools to a centralized model and make them available to the entire enterprise, like iClicker, it's going to cost more money, but the reach is so much greater. We're impacting the entire student and faculty population, as opposed to paying a lot less and impacting maybe only one or two academic units.
Academic partnership is really, really important. Remember, we don't want it to feel like just an IT project. We want it to feel like a collaboration between the academic entities within the institution and the IT entities. We want them to collaborate and work together and really inform the approaches that are used to implement the product as you go through your implementation.
Lastly, tap into the functional expertise on the team. There is so much institutional knowledge that can be gained from having people at the institution work on the project, meaning not just hiring outside of the institution for that project: taking advantage of the historical knowledge of the institution, understanding the technology and applications that came before, even knowing the sentiment of the faculty and staff, what excited them previously and what to watch out for so you don't make those mistakes again. Being able to tap into that knowledge and take advantage of your community is just so critical to the success of the program. And we're done.
Anchor drop. Yeah. Any questions? Yep. Go ahead. I have a question about your backlog requests.
So you said it was decentralized before. Were those requests coming into, like, one place, or were they coming from all over? Yep. So I'm gonna paraphrase the question, and let me know if it's accurate: with all the backlog requests that we currently have, which are growing by the week, how are those being taken in? How are we receiving them, or how are we aggregating them? Yeah. I just wondered, like, were you collecting those from all of these different places? Yes.
So they come in a variety of fashions. They come in through ServiceNow tickets. They come in through one-on-one meetings. We get a lot in Slack. We get a lot just in trainings with faculty.
So it actually took us about five or six months to aggregate all that together into a singular repository. And now we're thinking about how we present them. Hopefully next year we'll be here, and we'll show you what our roadmap looks like and how we manage that portfolio. It came in from a lot of different areas. It's not perfect, but we tasked someone to read through it and aggregate it all together, and it's the product managers in this particular scenario, not the service delivery folks. Oh, yeah.
They're still coming in from all over. We are gonna be setting up a singular intake through a product tool so that we don't need to manage it that way. And we will be having voting on those, as well as community input there. Yeah.
I'm looking at one of my colleagues here who might help me with that. I would say in a given term, and we're on the quarter system, an academic quarter, we probably get between twenty and thirty requests. And these can be big ones, like, I'm gonna buy this tool and I need you to help us implement it. Or they can be little ones, like, I need this quizzing functionality to be available. Or they can be something specific, like, can we change the navigation?
So they vary. Yep. Thank you for the question. Yep. For your instructors and students, how many LMS administrators and developers do you have? Sure. So the question was, at UCLA, how many LMS administrators do we have? You're talking particularly about the function of configuring LTIs and so on? Yep. So we have two right now. Right, that's not enough.
We have a lot of manual processes at UCLA as well. We have, like, manual cross-listing, so at the beginning of every term, it takes a considerable amount of time. So I'd say it's dependent on what's automated versus what's manual. For six hundred and nine subaccounts and forty thousand-plus students, two LMS administrators is just not enough.
So we do, we have developers. We have an application development team that's actually not on our team, that we work cross-functionally with. And we have two full-time resources dedicated to that, just for the Bruin Learn CoE. No.
It's a fair question. I think it's enough for right now. We will need to make some decisions in the near future about whether we outsource to vendors or continue with in-house development. Thank you. Can you go back through the role types that you were describing, and then how... Oh, the roles? Each of those? Yeah, like, what's the breakdown? Yep. So the principal product manager and the product manager are the folks that are really dealing with the backlog daily. They helped come up with that matrix model, and they're starting to think about how we prioritize things.
They're also thinking about vendor capabilities, looking at the current forty-plus tools that we have to see where they overlap, where they grow, where they evolve. So they're a little bit more on the product side, thinking about use cases, how we're evolving, how we're engaging. I think you also wanted to know how many? Oh, how many? I'm sorry. Yes.
Sorry. We have one principal product manager, one product manager, one academic technology manager, four academic technology analysts, and two LMS administrators. We also have cross-functional support, like I said, from application development and some change management folks as well. Are all of your ed tech roles focused solely on the LMS? Our focus is on our learning management system and the ed tech tools that we're responsible for.
Things like DocuSign and Google are on a different team. Yeah. So the elevated permissions we have for the subaccount admins vary based on what was decided early on by the dean or academic leadership. They can customize the way their course navigation looks. They can customize roles and permissions for their particular instructors or TAs or instructional designers.
There's considerable debate about how much we should allow the distributed IT folks to customize, but for right now, as long as those folks are working closely with us, it's okay. It seems like a lot of thought and effort went into the community support model itself, and so I was wondering, were there any things that you borrowed for the structure, or how much was your own? Yeah.
This sounds like an implementation question, so I'm gonna ask Carly or Sally if they wanna cover it. Yeah, I'll take the question. So the question was how much of the resources, documentation, and community support was developed in house versus taking advantage of pre-existing resources.
Secret: I used to work for Instructure in my previous life, so I'm gonna come at this from an interesting angle. Most schools use the pre-existing documentation. In my experience, they're like, boom, done, link out to it, call it a day. It's always gonna be updated.
Right? So that's a really common approach. UCLA is not common. UCLA wants to do their own thing. So every training video, piece of documentation, and resource guide was developed in house and branded for Bruin Learn. The word Canvas wasn't anywhere on the documents.
Everything was branded to feel like a really cohesive experience. Yes, there was a lot of input that went into making that really successful. But that was an investment that UCLA really felt was important from a brand perspective: to retain that brand within the institution. Yeah.
It was hosted in Bruin Learn, and then we had all the other resources on our public-facing website. And I think we've got just one minute left, or we're right at time. Is there any final question that we can answer? Yeah. Did you require faculty to do training? The question is, did we require faculty to do training? I'm looking at my colleagues from the team here, and the answer is no. Strongly encouraged. It was hard to work with our campus leadership to make anything mandatory.
We did have designated folks in all the academic units as leads, who were responsible for liaising and bringing things back. Oftentimes there were trainings that we did within faculty meetings, bringing our trainings to them. In terms of asynchronous trainings and milestones to gain access to systems, no.
We did not do that. Well, thank you all. You are a great audience. Please feel free to reach out to us if you have any questions.