Decorating Learning or Improving It? Rethinking AI Use in International Schools
How international schools can move beyond AI policy debates toward shared understanding, safe experimentation, and responsible practice across the whole community.
As international schools return from winter break, one conversation is happening almost everywhere.
From staff meetings to well-being check-ins and leadership discussions, the same questions keep surfacing:
- Do we need an AI policy, or can existing integrity and responsible-use guidelines be adapted?
- How do we protect student data and ensure it is not used to train AI systems?
- What limits are needed for AI companions and chat tools to avoid dependency?
These are important questions. But after several months of living with AI in real classrooms, one insight is becoming clear across many international schools: the challenge is not writing a policy; it is aligning the community.

Policy Is a Starting Point, Not the Solution
AI policies and responsible-use guidelines are necessary. They help clarify expectations, reduce uncertainty, and signal that schools are taking AI seriously.
However, when policy conversations move faster than shared understanding, misalignment grows.
Teachers may interpret guidelines differently. Parents may hear one message at school and another in the media. Students may follow rules without understanding them, especially when AI expectations vary between teachers or subjects. In these moments, even well-written policies struggle to work as intended.
When these perspectives are not brought together, schools move in parallel rather than in partnership. Alignment, not enforcement, is what allows responsible use to take root.
From Control to Shared Understanding
In response to uncertainty, many schools initially rely on top-down rules to manage risk. While understandable, compliance-first approaches often struggle to reflect classroom reality. When AI is framed mainly as something to control, trust weakens, innovation slows, and confusion grows across teachers, students, and families.
Over time, schools are learning a key lesson: alignment must come before documentation. Those making the most progress begin not with new rules, but by revisiting their mission and values, clarifying what learning and integrity mean in an AI-rich world, and openly acknowledging how AI is already being used by students and staff. From this shared foundation, guidance becomes clearer, more practical, and more widely trusted.
Shared learning moments, such as Hours of AI, staff workshops, or school-wide discussions, play an important role in this shift. They move the conversation from “Is this allowed?” to “Why are we using this, and how does it support learning?” and help build a common language across age groups and roles.

These conversations also surface an important distinction. While AI can generate polished visuals or graphs, its greater potential lies in deeper work: supporting safe data use, refining lesson design, improving feedback, and helping educators work more efficiently and effectively. Responsible use, then, is not about banning surface-level applications, but about helping students and staff clearly distinguish between AI that decorates learning and AI that genuinely improves it.
Exploring AI Safely, Transparently, and Together
Beyond alignment, schools are learning that responsible AI integration requires intentional exploration. This work cannot be reduced to rules alone. Schools need safe spaces to try, reflect, adjust, and sometimes stop practices that do not serve learning.
In practice, safe exploration often looks like:
- piloting AI use in limited contexts before scaling
- making expectations explicit to students about when and why AI is used
- documenting what works, what does not, and why
- pausing practices that reduce thinking or student agency
- regularly reviewing decisions with staff, students, and parents
Transparency is important throughout this process. When schools clearly explain how and why AI is being used, expectations become clearer and trust increases. Students learn that AI is not a hidden shortcut, but a tool whose benefits and limits must be understood.
Equally important is holding both opportunities and concerns in view. AI can support access, language development, and efficiency, but it also raises questions about dependency, bias, data privacy, and cognitive effort. Naming these tensions openly helps communities make more thoughtful decisions.
This work cannot sit with teachers or leadership alone. Office staff, learning support teams, IT staff, parents, and students all interact with AI in different ways. Involving the whole school community ensures that responsible use is not theoretical, but lived through daily practice.
From Policy to Culture
AI integration is not ultimately about producing a document. It is about building a culture of trust, clarity, and responsible exploration.
When communities are aligned, policies support learning rather than constrain it. As schools continue this work, the guiding question remains:
How do we lead AI responsibly by bringing our whole community together?
About the Author
Mariano is an international educator and Head of Design with experience leading technology and AI integration in international school contexts. His work focuses on AI literacy, responsible innovation, and the design of learning experiences that keep pedagogy, well-being, and community alignment at the centre.