As artificial intelligence continues to transform school life – saving teachers time, enriching learning experiences, and enabling tailored student support – it's vital to match that excitement with robust safeguards, clarity, and intentional governance. The latest Keeping Children Safe in Education (KCSIE) 2025 guidance and the Online Safety Act now explicitly require schools to do more than block harmful content: they must proactively manage AI-related risks.
1. Safeguarding Now Includes AI
- KCSIE 2025 highlights AI for the first time, urging schools to review and upgrade their filtering and monitoring systems. This means ensuring AI-generated outputs don't slip through the cracks or appear credible when they're not. Designated safeguarding leads (DSLs) should stay alert, update policies, train staff, and collaborate with edtech providers to keep systems compliant.
- The Online Safety Act imposes new legal duties: platforms (including bespoke educational tools) must deploy "proportionate measures" to keep students safe from harmful, inappropriate, or misleading AI outputs. This includes providing clear reporting routes so that problems can be flagged and acted on quickly.
Key Questions to Ask Providers:
- How is your AI tested against harmful outputs?
- Are safeguards built in or just added as an afterthought?
- Is there a clear, user-friendly reporting mechanism?
2. Four Quick Lenses for Evaluating AI Tools
When introducing AI in your school, evaluate tools using these practical criteria:
- Data Safety – Understand what personal data is collected, how it’s stored, and for how long.
- Age-Appropriateness – Make sure the tool uses suitable language and aligns with the curriculum.
- Human Oversight – Prioritise systems where teachers can review AI outputs before they reach students; a simplified sketch of such a workflow follows this list.
- Transparency – Select tools that explain how they work, their limitations, and the data they use.
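For technically minded readers, here is a minimal sketch of what the human-oversight lens can look like in practice: AI-generated responses sit in a review queue, and nothing reaches a student until a teacher explicitly approves it. This is an illustration only – every class and field name here is hypothetical, not any particular vendor's implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIOutput:
    """An AI-generated response held for review before release to a student."""
    student_id: str
    prompt: str
    response: str
    status: Status = Status.PENDING
    reviewer_note: Optional[str] = None


class ReviewQueue:
    """Holds AI outputs until a teacher explicitly approves or rejects them."""

    def __init__(self) -> None:
        self._items: list[AIOutput] = []

    def submit(self, item: AIOutput) -> None:
        # Outputs start as PENDING: withheld from students by default.
        self._items.append(item)

    def pending(self) -> list[AIOutput]:
        return [i for i in self._items if i.status is Status.PENDING]

    def approve(self, item: AIOutput, note: str = "") -> None:
        item.status = Status.APPROVED
        item.reviewer_note = note

    def reject(self, item: AIOutput, note: str) -> None:
        item.status = Status.REJECTED
        item.reviewer_note = note


if __name__ == "__main__":
    queue = ReviewQueue()
    queue.submit(AIOutput("pupil-042", "Explain photosynthesis", "Plants convert light energy..."))
    for item in queue.pending():
        # A teacher reads the output and decides whether it is released.
        queue.approve(item, note="Accurate and age-appropriate")
```

The important design choice is that withholding is the default: outputs begin as pending, and only an explicit teacher action releases them to a student.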
3. Keep Teachers at the Helm
AI can't replace teachers – it must support them. Real success comes from technology that complements pedagogical goals rather than overriding them. Teachers should be involved at every stage, from procurement through pilot testing to evaluation, ensuring tools remain responsive, relevant, and aligned with student needs.
Why RockettAI is Well Positioned to Help
At RockettAI, our mission is to support educators with AI that's not just powerful, but also safe, transparent, and tailored for the classroom. Here's how we align with the latest expectations and help schools stay ahead:
- Embedding Safety by Design
From robust filters to teacher approval workflows, our tools are engineered with safeguarding at their core.
- Clear Transparency & Data Privacy
We provide full clarity on data use, model behaviour, and limitations – so you always know what’s happening behind the scenes.
- Designed with Teachers, for Teachers
Educators are integral to our development process – from shaping prompts to vetting outputs – ensuring every tool supports learning rather than distracting from it.
- Strategic Guidance & Training
Beyond the tools, we offer training and guidance to help you evaluate AI with confidence, upskill staff, and engage stakeholders effectively.
In Summary
The AI revolution in education offers incredible potential – but it comes with real responsibilities. With KCSIE 2025 and the Online Safety Act now in place, schools must adopt AI tools that are not just innovative, but safe, transparent, and well-governed.
RockettAI stands alongside educators in navigating this landscape – empowering schools to harness AI confidently and ethically, while keeping students front and centre.