AI: Will we repeat the lost time and fear, as we did with social media?
Bojan Kordalov, Director of Policy and Communications, European Centre of Excellence (ECE Brussels)
This is a guest post originally written for ITLogs.com by Bojan Kordalov, Senior Expert in Communications, Advocacy, and Digital Literacy. He has over 20 years of experience in visibility, public relations campaigns, and strategic consultancy. Bojan is currently serving as Director of Policy and Communications at the European Centre of Excellence (ECE) in Brussels.
Digital revolutions never come with an instruction manual. I clearly remember the moment when social media started taking over the world two decades ago. Those of us who embraced it early – mainly young people eager for global connectivity, rapid access to information, new communication opportunities, and the elimination of physical distances – were filled with excitement. The ability to publish something in real time, receive instant feedback, and engage in genuine two-way communication was revolutionary and transformative.
However, for most people the prevailing sentiment was fear and distrust, driven by concerns that this new reality would push them out of their comfort zones and require adaptation. They were right – but I still firmly believe that the benefits of social media outweigh the drawbacks, which can be mitigated through regulation, education, resilience, and multi-stakeholder cooperation.
And yet, let’s be honest: history has shown us that soon after social media became widespread, inadequate regulation, a lack of institutional capacity to enforce existing legislation in the digital sphere, and insufficient awareness of the risks led to situations where algorithms began to shape public debate, fuel polarisation, and erode trust in facts.
Today, the same dilemma is arising with artificial intelligence (AI). This raises a logical question: Will we once again wait for the consequences to overtake us before we act?
What lessons have we learned from social media?
Social media is one of the most revolutionary developments in human history, offering communication, educational, professional, and networking opportunities we never imagined possible. However, ignoring its potential and power, failing to build digital and media literacy skills at the individual and societal levels, delayed regulation, and institutional unpreparedness led to significant societal consequences, including misinformation, breaches of personal data, and repeated attempts to manipulate public opinion.
We now face a similar situation with AI – a technology that is once again transforming the way we work, communicate, and learn. The key difference this time is that we have the experience, and with it the chance to act wisely and proactively.
Further questions follow: Will we learn from our mistakes? Will we act swiftly, in a timely manner, and through a fully democratic process to regulate, educate, and prevent harm? Or will we, once again, wait for things to spiral out of control before responding? The truth is, if we delay action this time, it may be impossible to correct the course later.
What are the next steps for the EU as a global leader?
As a senior expert in communications, advocacy, and digital literacy working in Brussels with the European Centre of Excellence, I closely follow discussions in EU policy circles regarding AI. The European Union has already taken significant steps through the AI Act, setting standards for ethical and safe use.
However, regulation – no matter how important – is not enough on its own if users do not understand how AI works and how it shapes their decisions.
Without stronger digital and media literacy, there is a risk that AI could become a new tool for disinformation, manipulation, and loss of control over data. If social media algorithms have taught us anything, it is that a lack of awareness leads to chaos.
The key to a responsible digital future
Unlike social media, where regulation only emerged after significant problems had already developed, we have the opportunity to establish ethical frameworks for AI almost from the outset. But ethics in AI is not just a question of legislation – it is an individual and societal responsibility.
To discuss individual and societal responsibility, we must first return to a fundamental question: What is the relationship between citizens and institutions? Are people, organisations, and businesses satisfied with the public services provided by local, regional, and European institutions?
Here are some key aspects that require our attention:
Transparency – Users must be aware when they are interacting with AI, whether texts, images, or videos are AI-generated, and whether the sources of information are reliable, accurate, and credible.
Accountability – Companies, media, and public institutions must ensure the ethical use of AI, avoiding manipulation and misuse. This is particularly critical for tech companies developing AI-based software and platforms, as well as for the major social media platforms and their service providers.
Bias (or the lack thereof) – AI is trained on datasets that may be subjective or biased. If left unchecked, AI could replicate and reinforce existing societal inequalities. Additionally, who owns the information? What about copyright? And how protected are we from radical threats, such as deepfake videos?
Personal ethics – Every individual using AI – whether for work, communication, or content creation – has a responsibility to verify information and ensure they do not contribute to manipulation. Crucially, AI must not replace human decision-making, particularly when it comes to final decisions, as responsibility remains with humans, not with computers, algorithms, or robots – at least for now.
How can we avoid repeating the same cycle?
Education over fear – Instead of viewing AI as a threat, we should approach it as a tool that requires knowledge for proper use. Digital literacy must be embedded in education systems but also promoted through continuous learning for adults. Let’s remember that when we first encounter a new technological device, we usually read the manual or watch tutorials, and then start using it. Each of us should apply the same approach to AI.
Effective regulation – Well-written rules and legislation are not enough on their own – they must also be understandable, accessible, and applicable across all sectors. The starting point, however, must be high-quality regulation that safeguards individual freedoms and rights while reinforcing and protecting the democratic framework in which we live.
Active public debate – AI development must not take place behind closed doors. Civil society, the media, and experts must be actively involved in shaping policies. This is particularly crucial for policymakers and decision-makers across Europe, as ensuring AI serves the needs of people is only possible through an inclusive process where everyone has a stake and a voice. Otherwise, if regulations are crafted in isolation within government offices, AI may soon control people rather than serve their needs, which would be humanity's greatest failure.
Personal responsibility – Each of us must develop critical awareness, which means not only verifying sources but also understanding how algorithms function, evolve, and influence us daily. This is particularly important for parents, educators, and those working with young people, as the foundation of digital literacy is laid at home and in educational institutions.
Another crisis before we act?
Social media provided unparalleled opportunities, but because we failed to critically assess its impact early on, the world is now grappling with its unintended consequences. With AI, we have no excuse to repeat the same scenario.
The EU is laying regulatory foundations and moving in the right direction, but the real battle will be fought in the realm of knowledge, awareness, and preparedness. And here, we all have a role to play – as allies and supporters of a responsible digital future.
This time, instead of losing time to fear and reactive policies, we must ensure that AI works for us, not against us.