AI in Education: Tool, Transformation, or Trouble?
This is a recap of a session from the 2025 ASU+GSV Summit, including insights from Western Governors University's David Morales.
Artificial intelligence is reshaping educational technology in profound ways, but how can we ensure that AI truly serves learners and educators rather than becoming a flashy add-on or, worse, a source of harm? This article dives into the thoughtful integration of AI in education, drawing on the expertise of leaders from Western Governors University (WGU), Quill.org, BrainPOP and MetaMetrics. Their insights highlight the promise and pitfalls of AI in education and offer a roadmap for building meaningful, ethical and impactful AI-powered learning tools.
Understanding the Role of AI in Education
AI in education is not just about embedding the latest technology into existing products. It's about solving real problems faced by students and teachers through innovative, reliable and equitable solutions. This grounding principle ensures that AI serves a clear educational purpose and delivers value to learners.
From kindergarten classrooms to higher education, AI is being applied in diverse ways, whether as a tool for students, in support of teachers or by powering frameworks for educational providers. The challenge lies in balancing innovation with safety, trust, and reliability.
How AI is Transforming EdTech Products
Personalized Writing Feedback with Quill
Peter Gault, founder and executive director of Quill.org, a nonprofit focused on improving students' writing skills, shared how AI can provide fast, fair, and effective feedback at scale. Writing is a complex skill requiring practice and nuanced feedback, which many students lack. Quill has been developing its own AI tool since 2018, emphasizing ethics and trustworthiness.
"A lot of students don't get those opportunities to write and get enough feedback. AI can do an amazing job of evaluating writing, but we have a responsibility to make sure our tool is reliable and giving fair and equitable analysis of student work."
Quill's secret sauce lies in building custom datasets of authentic student writing paired with teacher-generated feedback. This approach allows them to fine-tune AI models that deliver reliable feedback directly to students in real time, without the need for a teacher to gate every response. As Gault explained, "You can send [large language models] up to a million words of context now. This is a real superpower: that we can customize AI with our own data."
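As a rough illustration of the kind of pipeline Gault describes, the sketch below pairs authentic student responses with teacher-written feedback and packages them as supervised fine-tuning records. The file name, prompt wording and JSONL schema are assumptions for illustration only, not Quill's actual format.

```python
import json

# Hypothetical records: student sentences paired with the feedback a teacher
# actually gave. The field names are illustrative, not Quill's schema.
labeled_examples = [
    {
        "prompt": "Schools should ban homework because",
        "student_response": "Schools should ban homework because it stressful.",
        "teacher_feedback": "Good reason! Check your verb: 'it stressful' is missing a form of 'to be'.",
    },
    # ... thousands more rows collected from real classrooms
]

def to_finetune_record(example: dict) -> dict:
    """Convert one labeled example into a chat-style fine-tuning record."""
    return {
        "messages": [
            {"role": "system", "content": "You are a writing coach. Give one short, specific piece of feedback."},
            {"role": "user", "content": f"Prompt: {example['prompt']}\nStudent response: {example['student_response']}"},
            {"role": "assistant", "content": example["teacher_feedback"]},
        ]
    }

# Write a JSONL training file that a hosted or open-weight model could be tuned on.
with open("writing_feedback_train.jsonl", "w", encoding="utf-8") as f:
    for ex in labeled_examples:
        f.write(json.dumps(to_finetune_record(ex), ensure_ascii=False) + "\n")
```

The value in this setup comes from the data, not the code: the same fine-tuning mechanics work for any subject, but the feedback quality depends on collecting authentic student work and expert responses in advance.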
AI Literacy and Customized K-8 Learning at BrainPOP
Jay Chakrapani, chief product officer at BrainPOP, discussed the company's careful approach to AI, especially given its young audience (K-8). BrainPOP focuses on media, digital and AI literacy, helping kids learn joyfully while using technology responsibly. Its AI solution offers customized learning materials for both classrooms and home schools.
"When you put something in front of a third grader, it can't hallucinate. It has to be private and it can't be wrong," said Chakrapani, who emphasized that the company spends considerable time ensuring its AI solutions are safe for kids.
BrainPOP also recognizes the importance of maintaining high standards and human oversight. For example, their AI-generated grading suggestions are always reviewed and approved by teachers before reaching students, ensuring accuracy and appropriateness.
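A minimal sketch of that kind of human-in-the-loop gate, assuming a simple queue in which AI-generated grading suggestions remain pending until a teacher approves or edits them; the class and field names are hypothetical, not BrainPOP's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GradingSuggestion:
    """An AI-generated score and comment that a teacher must review first."""
    student_id: str
    ai_score: int
    ai_comment: str
    status: str = "pending"              # pending -> approved / revised
    final_comment: Optional[str] = None

class ReviewQueue:
    """Holds suggestions; nothing reaches a student until a teacher signs off."""
    def __init__(self) -> None:
        self._items: list[GradingSuggestion] = []

    def submit(self, suggestion: GradingSuggestion) -> None:
        self._items.append(suggestion)

    def pending(self) -> list[GradingSuggestion]:
        return [s for s in self._items if s.status == "pending"]

    def approve(self, suggestion: GradingSuggestion, edited_comment: Optional[str] = None) -> None:
        # The teacher either accepts the AI comment as-is or replaces it.
        suggestion.final_comment = edited_comment or suggestion.ai_comment
        suggestion.status = "revised" if edited_comment else "approved"

    def released(self) -> list[GradingSuggestion]:
        """Only approved or revised items are ever shown to students."""
        return [s for s in self._items if s.status != "pending"]
```

The design point is that the approval step is structural, not optional: the student-facing view reads only from the released list, so an unreviewed suggestion simply cannot leak through.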
Measuring Reading Growth with MetaMetrics
Jing Wei, VP of machine learning & engineering at MetaMetrics, shared the long history of AI in education, highlighting the Lexile Framework for Reading, a pioneering AI-driven tool the company developed over 40 years ago. Lexile uses machine learning to match students with reading materials at the right difficulty level, optimizing reading growth.
MetaMetrics emphasizes transparency and educator support, providing training and clear explanations about what Lexile measures can and cannot do. Wei stressed the importance of maintaining high-quality standards and openly sharing research results with the education community.
"We created a large corpus of millions of texts... and used natural language processing and machine learning to build a readability model... 35 million students receive Lexile measures each year," said Wei. "We want to be extra cautious in terms of the LLMs we are using, because ultimately we are trying to solve an education problem; we are not just trying to build AI for AI's sake."
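The Lexile model itself is proprietary, but the general recipe Wei outlines, turning a text corpus into features and fitting a readability model, can be sketched with off-the-shelf tools. The features, passages and difficulty scores below are crude stand-ins for illustration, not MetaMetrics' method or data.

```python
import re
import numpy as np
from sklearn.linear_model import LinearRegression

def simple_features(text: str) -> list:
    """Crude proxies for difficulty: mean word length and mean sentence length."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    mean_word_len = sum(len(w) for w in words) / max(len(words), 1)
    mean_sent_len = len(words) / max(len(sentences), 1)
    return [mean_word_len, mean_sent_len]

# Tiny made-up training set: passages paired with difficulty scores assigned
# by human experts (a real corpus would hold millions of texts).
passages = [
    "The cat sat on the mat. It was warm.",
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Notwithstanding prior jurisprudence, the appellate court reversed the ruling.",
]
difficulty_scores = [200.0, 900.0, 1300.0]   # illustrative, Lexile-like scale

X = np.array([simple_features(p) for p in passages])
model = LinearRegression().fit(X, difficulty_scores)

new_text = "Gravity pulls objects toward the center of the Earth."
print(round(model.predict(np.array([simple_features(new_text)]))[0]))
```

A production readability measure would rest on far richer linguistic features and large-scale validation against student performance, which is where the decades of research Wei references come in.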
AI-Driven Personalization at WGU
David Morales, CIO and senior vice president of technology at WGU, described how higher education can leverage AI to personalize student experiences from program selection to graduation. WGU's AI initiatives focus on three key goals: attainment, return on investment, and equitable access.
WGU is developing a "decision intelligence framework" that dynamically adapts services and learning pathways based on individual student data, including geographic and cultural context. This personalized approach aims to improve student success and social mobility.
"If I know that my student is in Texas and I know they are coming from this specific zip code, I already know and understand a little bit more about their background," he said. "How do we enable that information to serve better that individual, to truly implement services and learning mechanisms for them, not for a standard consumption of education?"
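Morales did not share implementation details, so the snippet below is only a hypothetical illustration of the idea: routing a student toward different supports based on contextual signals such as location and study capacity. Every field name and rule here is invented for the example; a real decision intelligence framework would likely replace the hand-written rules with learned models and much richer data.

```python
from dataclasses import dataclass

@dataclass
class StudentContext:
    # Hypothetical signals a decision framework might weigh; not WGU's data model.
    state: str
    zip_code: str
    is_first_generation: bool
    weekly_study_hours: int

def recommend_supports(ctx: StudentContext) -> list:
    """Toy rules standing in for a policy that adapts services to student context."""
    supports = []
    if ctx.is_first_generation:
        supports.append("assign a dedicated enrollment mentor")
    if ctx.weekly_study_hours < 10:
        supports.append("suggest a part-time course pacing plan")
    if ctx.state == "TX":
        supports.append("surface state-specific scholarship and employer partnerships")
    return supports or ["standard onboarding pathway"]

print(recommend_supports(StudentContext("TX", "79901", True, 8)))
```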
Building Trustworthy and Ethical AI in Education
Across all organizations, a central theme is the responsibility to build AI systems that serve a specific student need. AI solutions must also be trustworthy, reliable and equitable. This requires significant investment in research, development, and continuous evaluation.
"We hired dozens of teachers and had them manually score thousands of outputs to train the AI," said Chakrapani. "When it's out in production, we still have a teacher gating the feedback before it goes to the students, being the quality control gate."
Gault echoed the value of custom datasets and human-in-the-loop evaluation.
"We have a team of teachers doing this at scale," he said. "We know we have good accuracy because we built those data sets in advance."
Morales underscored the need for clear communication with students about the data used.
"Define the outcomes, not the outputs," he said. "Make sure you create an operational process that enables you to track, measure and align to outputs such that the outcomes are correct. Make sure your students know what data you have and what data you're using to serve them better."
Evaluating Impact: Beyond Accuracy to Real-World Outcomes
Traditional long-term efficacy studies often take years, which is impractical given the rapid evolution of AI technologies. Quill is pioneering "rapid cycle evaluation," inspired by pharmaceutical research phases, to quickly assess AI tools before scaling.
"Is this helping or is this harming?" asked Gault. "By doing these phase one trials, you can really see: is this working or not?"
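One way to read a "phase one trial" in this sense is a small randomized comparison run before scaling: give a subset of students the new AI feature, hold out a comparable group, and compare a quick proxy outcome. The sketch below applies a Welch t-test to made-up scores; it is an interpretation of rapid cycle evaluation, not Quill's actual protocol or data.

```python
from statistics import mean
from scipy import stats

# Made-up revision-quality scores from a small pilot: students who received
# the new AI feedback feature vs. a comparable group who did not.
treatment = [72, 78, 81, 69, 85, 77, 80, 74, 79, 83]
control   = [70, 71, 75, 68, 74, 72, 69, 73, 70, 76]

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"treatment mean={mean(treatment):.1f}, control mean={mean(control):.1f}")
print(f"Welch t={t_stat:.2f}, p={p_value:.3f}")
# A 'helping or harming' read: check the direction and size of the difference
# before deciding whether a wider rollout (and a longer study) is justified.
```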
Chakrapani described a layered approach to evaluating product effectiveness at BrainPOP, combining AI usage analytics, classroom observations, and longer-term efficacy studies aligned with state assessments.
Wei proposed a holistic validation framework inspired by educational assessment theory, incorporating accuracy, reliability, impact, and practicality to ensure AI solutions truly serve educational goals.
Gault stressed the scale of human oversight required for trustworthy AI.
"Our team of seven former educators review around 100,000 sentences per year manually," he said. "The question is not if there is a human in the loop, but how many humans and how much evaluation is being done."
Morales pointed out the need to question existing processes before automating them with AI.
"Let it not be automating a process that must be sunsetted," he said. "Really figure out if that process should exist, and if it doesn't, then how is AI enabling me to remove that process such that I'm really moving the needle for the outcome that I'm looking for?"
Key Takeaways for Building Meaningful AI-Enabled EdTech
Build your own datasets: Customize AI models with authentic, high-quality data that reflects your unique educational context.
Invest in rigorous evaluation: Continuous monitoring, human review, and rapid cycle testing are essential to maintain quality and trust.
Focus on outcomes, not just outputs: Measure real-world impact on student learning and success, not just technical accuracy or feature delivery.
Be transparent and ethical: Clearly communicate with learners and educators about data use and AI processes to build trust.
Reimagine processes: Use AI to innovate and transform education, not simply automate existing workflows.
Conclusion
Morales referenced a quote from Oren Harari, who said, "The electric light is not coming from the iteration or continued integration of candles." AI in education is not about patching old methods but about illuminating new possibilities for personalized, equitable and effective learning experiences.
AI in education presents both tremendous opportunities and significant challenges. The path forward requires intentionality, ethical design, and a relentless focus on the learner's needs. By combining cutting-edge AI technology with deep educational expertise, transparency and continuous evaluation, we can harness AI's potential to transform education for the better.
As the landscape evolves, educators, technologists, and policymakers must work together to ensure that AI tools empower every learner, support every teacher and uphold the highest standards of quality and equity.