Most schools have an online safety policy. The question worth asking is when it was last meaningfully updated — not just reviewed but genuinely rewritten to reflect the digital world children are navigating right now.
In a recent podcast about how safeguarding is changing due to AI, Lucie Welch, Advisor at Services for Education and a former DSL with fifteen years of experience in primary schools, was direct about what she finds when she reviews school policies. Online safety documents are commonly still written around the risks of five years ago: reminders not to share passwords, warnings about talking to strangers and basic definitions of cyberbullying.
Those reminders aren’t obsolete, but they’re nowhere near sufficient for what schools are dealing with today. The gap between those policies and the risks children face has become significant and continues to widen.
Common Risks Students Experience That Need Guardrails
The internet safety concerns that existed when most online safety policies for schools were last written bear little resemblance to the digital landscape schools are facing now. A policy that makes no mention of AI-generated images, deepfakes, generative content or companion chatbots is now insufficient.
Consider what that gap looks like in practice. An ordinary photograph of a Year 9 pupil is shared without their knowledge. Using freely available AI tools, another student generates explicit imagery from it.
This is not a hypothetical. Schools and pupils are already encountering exactly this. The safeguarding response is complex, raising questions around evidence, disclosure, referral and the specific guidance that applies when an image has not been shared but fabricated. A DSL reaching for an online safety policy built around password hygiene will find precious little to guide them.
As Welch puts it: “We know how we respond if there is a disclosure of sexual abuse. We know how to respond if we recognise the signs of neglect. How are we responding to this AI-generated harm?”
That question deserves a written answer in every school, and right now many do not have one.
AI companion apps present a different but equally pressing challenge. These are chatbots designed to respond, learn from conversation and build ongoing relationships with users. Some pupils are forming real emotional attachments to them. Unlike regulated platforms, these tools operate without meaningful oversight of what they say or recommend, and there is a growing body of evidence that some have reinforced harmful behaviours, including self-harm and disordered eating. A pastoral lead who encounters a pupil distressed by an exchange with an AI “friend” needs a framework to work from. Most schools do not yet have one, and their online safety policy for AI is the place to address it.
How to Implement AI-Based Online Safety Policies
Getting policy right in this space won’t happen overnight; it requires an honest reckoning with where things currently stand and a genuine commitment to keeping pace. The place to start an AI safeguarding policy is not with a template but with a question: does this document reflect the risks children in this school are actually facing, right now?
Welch says, “I think because AI has crept in so fast, people aren’t catching up. So it’s looking at the risk assessment. It’s looking at making sure your policies have mention of deepfake, of generative images, of chatbots…and actually reviewing and adapting those policies regularly and not always waiting for statutory guidance.”
The two-to-three year policy rollover that suited a slower-moving landscape is no longer adequate. Online safety policy needs to be treated as a live document and updated when the world changes, not when the calendar prompts a review. There are four areas where schools can take action to update their online safety policies for AI.
Identify and Document Actual Risks
An online safety policy for AI that’s fit for purpose today needs to explicitly address AI-generated imagery and deepfakes, generative content tools, AI companion chatbots, and algorithm-driven radicalisation and misinformation. These are not edge cases. They are the risks presenting in schools right now, and a policy that does not name them cannot meaningfully guide a response when they arrive.
A DSL should be able to open the policy during an incident and find language that speaks directly to what they are dealing with. Ideally, that language should be clear and specific rather than requiring significant interpretation to apply.
Commission a Thorough AI Risk Assessment
A risk assessment is distinct from an online safety policy review. It asks a more specific set of questions:
- What AI tools are pupils and staff actually using, both in school and at home?
- Which platforms offer meaningful safeguards and which are open to the whole internet?
- Where are the gaps between what the school thinks is happening and what is actually happening?
- How can we adopt AI in schools without unnecessarily exposing students to risk?
Many schools simply do not have this picture, and without it, policy decisions are being made in the dark. The draft for the Keeping Children Safe in Education (KCSIE) 2026 guidance, expected to take effect in September 2026, makes AI risk assessment an explicit expectation. Schools that treat it as a serious exercise now, rather than a compliance hurdle later, will be better placed when that guidance becomes statutory.
Build a Response Protocol For AI-Generated Harm
Schools have clear, practised safeguarding procedures for responding to a disclosure of abuse or for recognising the signs of neglect. Most do not yet have an equivalent framework for AI-generated harm, and building one is an essential addition.
Consider the specifics: If explicit AI-generated imagery of a pupil comes to light, what happens next? Who is informed, in what order, and on what timeline? The standard guidance of “do not share the image; do not ask to see it” applies, but the context is different when the image was fabricated rather than taken.
A written protocol, agreed in advance and understood by the safeguarding and wider school leadership teams, is considerably more useful than improvising a response under pressure.
Strengthen Filtering and Monitoring and Verify It Frequently
Compared to the 2025 KCSIE policy, the KCSIE 2026 draft extends existing expectations, requiring governing bodies to review the effectiveness of filtering and monitoring at least annually, with documented checks that systems are functioning correctly across all internet-connected devices.
This matters because schools frequently discover during audits that systems they believed were blocking certain content were not. Having a system in place is not the same as knowing it works. That distinction is worth building into the review cycle explicitly.
Who Should Be Involved in Writing AI Policies
There is a tendency in schools to treat AI as an IT concern, or to pass it entirely to the DSL. Neither works well on its own, and the draft KCSIE 2026 is unambiguous on this point: responsibility cannot sit with a single role or department.
Developing a meaningful AI safeguarding policy will require different kinds of expertise.
- Designated Safeguarding Leads understand risk, behaviour and the indicators that something has gone wrong.
- IT staff understand the systems, the filtering process and the procurement decisions.
- Curriculum leaders shape how AI is taught and discussed.
- Pastoral staff are often the first to notice when a pupil is struggling with something connected to their online life.
- School leaders hold the strategy and the resource.
- Governors have statutory oversight responsibilities and a unique perspective.
As Welch observes: “It’s about shared systems, shared language, shared responsibility — embedded across all areas of the school. It can’t be confined to one department.”
A practical first step for schools still finding their feet on this issue is to convene a working group drawing from each of these areas. They can take stock of where policy, systems and practice currently stand, name the gaps plainly, and agree on a realistic path forward.
Extending that conversation to include parents and, where appropriate, pupils themselves tends to sharpen the picture considerably, because children are often well ahead of adults in knowing exactly what they are using and why. That additional perspective helps school leaders build a robust online safety policy for AI that actually works.
Keeping Up With Technology Changes Is Essential
While there may be an understandable instinct to wait until the KCSIE 2026 guidance is finalised, or until there is more clarity about where the technology is heading, schools can’t afford to wait while the risks grow each day. Pupils are encountering AI-generated content, deepfakes and unregulated companion tools today.
A policy written for 2022 is already outdated; technology and AI have evolved faster than most institutions could reasonably have anticipated. But the gap between where policy sits and where risk sits is something school leaders can begin to close without waiting for external instruction. A proactive approach can help schools feel more prepared to safeguard students even as AI evolves.
To learn more about safeguarding practices that capture low-level concerns and help prevent crises through robust, centralised documentation, explore CPOMS StudentSafe.