Protecting children online in the age of AI: Europe needs regulation, education and literacy
Bojan Kordalov, Director of Policy and Communications, European Centre of Excellence (ECE Brussels)
This is a guest post originally written for ITLogs.com by Bojan Kordalov, Senior Expert in Communications, Advocacy, and Digital Literacy. He has over 20 years of experience in visibility, public relations campaigns, and strategic consultancy. Bojan currently serves as Director of Policy and Communications at the European Centre of Excellence (ECE) in Brussels.
In recent weeks, the European public debate has been sharply focused on new proposals to restrict children’s access to digital platforms, including calls for bans on social media for minors, as part of broader efforts to protect youth online.
France, in particular, has brought forward new initiatives aiming to limit access to platforms for those under 15. Similar discussions are taking place across the EU, as member states and Brussels institutions consider how best to address growing concerns about children’s exposure to harmful content and the influence of algorithms.
As someone actively following these debates, and having recently been asked for expert opinions on the topic in both media and policy discussions, I strongly believe this is a crucial moment for Europe to reflect and act wisely.
Having worked in strategic communication, digital literacy and advocacy for over 20 years (from the early days of online forums and social media to the current rise of AI), I see clear parallels with past mistakes we should avoid.
“My core point has remained consistent over the past decades: bans alone will not be enough, and they often prove counterproductive.”
Europe must (and I strongly believe it will) develop a smart, comprehensive approach that protects children while upholding the very values that define the Union, thus creating a new, innovative European approach to the ethical use of AI and to full digital literacy from a very early age.
But what other lessons have or haven’t we learned from the short but intense history of social media usage?
Regulation alone is not enough, especially considering that in the case of social media, regulation came late and media literacy was never systematically embedded. Public understanding of how algorithms shape content became a burning issue during the COVID-19 pandemic, and as a result disinformation, privacy violations and polarisation became widespread challenges (ones we are still addressing, and increasingly so).
In this direction, I would stress again that we cannot afford to follow the same path with artificial intelligence. Today’s AI technologies (from generative AI to algorithmic feeds) already influence what children see, how they learn, and how they interact with the world. The risks of manipulation, bias, and deepfake content are very real and already part of everyday reality. But simply banning access will not solve the problem. It may push children into unregulated spaces and further undermine trust in democratic governance.
What Europe should do: five essential pillars to focus on
Smart, privacy-respecting regulation (instead of blanket bans)
The DSA and the AI Act provide a strong legal foundation. Bans alone can sound attractive and appear to be the solution, but they are not sustainable. They risk driving children towards VPNs and unsafe alternatives (as we saw in France with recent adult-content restrictions), eroding privacy (if intrusive age verification becomes the norm) and directly contributing to the fragmentation of the European digital space, undermining harmonisation.
Europe should instead implement:
Transparent, proportionate age verification only where strictly necessary and productive (a results-oriented approach);
Improved privacy-preserving standards that avoid creating a “closed digital world.”
A systematic approach to AI and media literacy
The best and most sustainable protection for young people, and for societies as a whole, is education and critical awareness. We must make AI and digital literacy a pillar of education, but in a practical way and through practical content. This should include teaching children how algorithms and AI shape content, promoting critical thinking and resilience against manipulation, and at the same time ensuring that parents, educators and youth workers are equipped to support this learning.
Without this, even the best regulation will fall short.
An inclusive governance and coordination process
The new European model must be one of inclusive, democratic governance. In that sense, AI policy affecting children must be developed in full coordination with civil society, digital and AI child-protection experts, youth representatives, the media, academia and the educational sector.
Official Brussels has often set a positive example of inclusiveness, and it should continue to lead in demonstrating that AI governance can be both ethical and participatory.
Strong accountability for tech companies
Tech companies must be continuously held accountable for how their platforms affect children. This should include regular audits of algorithms for bias and harmful outcomes, transparency of AI systems, and effective, child-friendly parental tools and user-empowerment features.
This is an area where further regulation will always be needed: accountability from the big tech companies must be required and enforced, never left to voluntary codes.
Empowering children/youth, their parents and educators
Minors and their families, teachers and youth workers must be empowered, regularly supported, and recognised for all they do to protect children and society at large in this internet and AI era. This should include practical resources on AI and online safety (updated regularly), guidance documents on supporting children’s digital literacy, and supportive networks to address emerging challenges.
This is a shared societal responsibility, but institutions should bear in mind that these materials must be drafted and prepared hand in hand with these stakeholders.
Europe today has a chance to set a global example of how to protect children online without sacrificing privacy, education or democratic values. The AI Act and the DSA provide the tools, but real success will depend on how we complement regulation with education, public debate, and inclusive governance. The European Commission’s current balanced approach deserves strong support from member states, civil society and candidate countries alike.
As someone who has witnessed the evolution of the internet, social media and AI firsthand, I am certain of the following: if we delay, the risks will outpace our ability to respond. If we act wisely and together, we can build a digital future that empowers (rather than endangers) our youth.
This is, first of all, about child protection. But it is also about ensuring that Europe remains a global beacon of responsible, human-centred digital governance.