With the release of OpenAI’s ChatGPT late last year, artificial intelligence (AI) has exploded into the public consciousness. ChatGPT and other generative AI tools are clearly impacting the future of education. These tools can be an effective support in delivering a personalized learning experience for students while saving educators time.
In AI Will Transform Teaching and Learning. Let’s Get It Right, Daniel Schwartz, Dean of the Stanford Graduate School of Education, observes: “Technology offers the prospect of universal access to increase fundamentally new ways of teaching.” As a result, adoption of AI systems across the edtech community is accelerating rapidly, and this raises important questions about ethics, equity, reliability, privacy, security, and risk.
At Instructure, we believe in the thoughtful, strategic, and ethical implementation of new technologies. We are committed to partnering with our customers and the edtech community at large to develop a positive and responsible approach to the use of AI.
An important first step in this process was to align our internal governance to support the responsible development and use of AI in our products.
Our AI guiding principles serve as the foundation for our governance and approach.
- Instructure’s mission to amplify teaching, elevate learning, intensify impact, and inspire everyone to learn together will guide our approach to AI Systems.
- Student data privacy and security are always among our core concerns.
- We encourage institutions to develop and share “Ethical AI Use Statements” for students.
- We encourage institutions to create guidance and training for teachers and students on AI.
- We are committed to ensuring academic integrity and working with our partners to prevent dishonest use of AI.
- Accessibility should be a primary concern. AI Systems should aim to “level the playing field” for all students.
- We do not use our customers’ data for AI Systems without their express permission.
We’ve also developed an AI governance policy to provide thoughtful and appropriate guardrails for our internal use of AI. These include:
- Responsible AI Use - We will use AI responsibly and ethically, avoiding any actions that could harm others, violate privacy, or facilitate malicious activities.
- Transparency and Accountability - We are transparent about the use of AI in our products and services, ensuring that stakeholders are aware of the technology's involvement in any decision-making processes.
- Bias and Fairness - We actively work to identify and mitigate biases in our AI development. We endeavor to ensure that these systems are fair, inclusive, and do not discriminate against any individuals or groups.
- Human-AI Collaboration - We recognize the limitations of AI and always use our judgment when interpreting and acting on AI-generated recommendations. We believe that AI should be used as a tool to augment human decision-making, not replace it.
- Training and Education - We are training our employees on how to use and develop AI responsibly and effectively.
- Privacy - Instructure will ensure that individuals will be protected from privacy violations through design choices that ensure privacy protections are included by default.
In addition to the AI governance program, we’ve committed to:
- Establishing an AI Governance Board that provides oversight for our use of AI;
- Designating an internal AI Officer to ensure compliance with our policies; and
- Joining the EdSAFE Alliance Global Pledge which advocates for the creation and implementation of global AI and education standards.
We’ll continue to provide updates as we hold policy sessions with customers and meet with partners developing in this space. We look forward to sharing more about AI with our entire community at InstructureCon in July.
In the meantime, I encourage you to check out related resources on our blog, such as “The Power of Formative Assessment: 7 Ways It Can Benefit Learners.”