Resources for Responsible AI in Education

Trusted guidance, research, and practical tools to support responsible AI decision-making in schools, aligned with safeguarding requirements and national guidance.

National Guidance and Policy

Key UK guidance shaping responsible AI use in education, including DfE policy, safeguarding requirements, and statutory responsibilities for schools.

Official guidance on generative AI, outlining expectations for use, oversight, and risk management in schools.

Statutory safeguarding guidance for schools and colleges, relevant when any digital tool intersects with learners.

Overview of legal responsibilities under data protection law when processing pupil and staff data.

Safeguarding, Privacy & Data

Guidance on safeguarding, GDPR, and data protection to help schools manage AI-related risks and protect pupils, staff, and public trust.

Explains how AI intersects with data protection requirements under UK law.

A practical guide for assessing risk when introducing new technologies.

Supports safe, secure handling of digital systems across school networks.

Leadership, Governance & Professional Judgement

Resources for school leaders and governors on AI oversight, accountability, and professional judgement in educational decision-making.

Support for governors on strategic oversight, including digital risk and innovation governance.

Leadership perspectives, including risk and strategy around educational innovation.

Understanding how senior leaders demonstrate safe, effective practice, including digital use.

Evidence & Research

Accessible summaries of research and evidence on AI in education, including benefits, limitations, and risks relevant to schools.

EEF tech evidence

Supporting safe, ethical and appropriate approaches to AI within education settings.

Literacy Trust

Helping educators understand practical uses that align with professional judgement.

Ofqual

Testing approaches through small-scale pilots to learn what is genuinely helpful.

Responsible AI & Ethics

Independent ethical frameworks and principles supporting transparent, fair, and responsible AI use in education and public-sector contexts.

Independent research and ethical frameworks relevant to responsible AI considerations.

Research on fairness, bias, and societal impacts of AI framed for non-technical audiences.

UK advisory body shaping responsible AI use in public settings.

Practical Tools for Schools (IAIE)

IAIE-developed checklists, frameworks, and templates to help schools make safe, defensible decisions about AI use.

IAIE Checklist

Supporting safe, ethical and appropriate approaches to AI within education settings.

IAIE Questions

Helping educators understand practical uses that align with professional judgement.

FAQs

What is the Institute of AI in Education?

The Institute of AI in Education supports the responsible and evidence-informed use of AI across education.

Does IAIE sell AI tools?

No. IAIE does not sell AI tools; instead, we focus on providing guidance, conducting research, and promoting good practice.

Who does IAIE work with?

Schools, educators, researchers, funders, and public bodies.

Does IAIE's work align with UK government guidance?

Yes. Our work aligns with the UK Department for Education's guidance on AI, safeguarding, and data protection.

Is AI safe to use in schools?

AI can be used safely when appropriate safeguards, transparency, and professional judgement are in place.

How can my school get involved?

Schools and organisations can contact IAIE to explore pilot partnerships and collaboration.

Have a question for IAIE?

Start a thoughtful conversation about AI in education

If you are exploring responsible approaches to AI, we welcome conversations with schools, educators, and partners interested in careful, evidence-informed work.