Insights from the community workshop on explainable AI in education

The European Digital Education Hub hosted a workshop on explainable artificial intelligence (AI) in education on 17-18 October 2024 in Brussels. The event brought together 31 AI experts from diverse backgrounds in education and policymaking to explore the challenges and opportunities of explainable AI systems in classrooms and to develop actionable recommendations.

Explainable AI (also referred to as XAI) in education refers to AI systems that make clear how their decisions and recommendations are reached in educational contexts. Unlike traditional AI, which can be a ‘black box’ with hidden processes, XAI reveals its decision-making steps. 

This transparency helps students, educators and administrators to better understand the AI’s reasoning. By making AI decisions more transparent, XAI can support fairer, more effective and more inclusive educational experiences. 

Key insights from the workshop

During the workshop, the participants explored the implications of explainable AI in education. They used real-life cases to test explainability strategies and refine the integration of AI systems in education settings.

The discussions covered:

  • the regulatory framework of the AI Act
  • the importance of ethics 
  • the need for AI literacy 

These discussions led to proposals such as:

  • developing an AI literacy framework 
  • adapting the EU’s SELFIE tool to measure AI transparency 

Policy recommendations for integrating explainable AI in education

The key result of the workshop was a set of policy recommendations:

  1. Clarifying accountability in the use of AI
There is a need for clearer accountability guidelines for AI systems in education. Simplifying complex legal terms and explicitly outlining responsibilities will help educators and students use AI tools with confidence. 
  2. Setting standards and explainability scores
Creating an ‘explainability score’ for AI tools could help educators choose systems that are transparent and understandable. Setting such standards would ensure that AI systems are aligned with educational goals and tested for transparency.
  3. Promoting AI literacy
    There is a need to build AI literacy into the education system, equipping both educators and students with the skills to critically engage with AI tools. Policymakers should provide resources and funding for professional training to increase AI literacy.
  4. Designing ethical, human-centred AI
    AI systems should be built on the principles of fairness, transparency and inclusivity. This will ensure that AI tools are designed to support the diverse needs of educators and students.
  5. Encouraging multi-stakeholder collaboration and co-creation
Partnerships between educators, technology developers and policymakers are essential to co-create AI tools tailored to educational needs. 
  6. Allocating funding for research and development
    Governments should increase investment in research on AI applications to ensure continuous innovation and adaptation of AI systems in education.