When my friend Elaine, an entrepreneur, woke to find a defamatory TikTok video accusing her of scamming clients, she reported it, sought legal advice and contacted the Data Protection Commission (DPC).
Weeks passed. The video stayed online. No one was held accountable – not the person, not the platform. The DPC’s remit, it seems, is data inaccuracy, not defamation.
Elaine’s story reveals a deeper truth about Ireland – a country that has grown rich on the profits of the very companies it is supposed to regulate.
Ireland is now the European hub for nearly every major technology giant: Meta, Google, TikTok, Apple, Microsoft and X. Their taxes dominate the exchequer. Last year, corporation tax receipts exceeded €28 billion – roughly one in every five euros collected by the State. This year receipts are set to surpass €30 billion.
That dependence has consequences. We call ourselves Europe’s digital capital, yet we’ve become its bottleneck for enforcement. The European Union’s other regulators complain of Ireland’s “extremely slow case handling”, lenient enforcement and endless deliberation.
On paper, the DPC has issued significant penalties: €310 million against LinkedIn and €1.2 billion against Meta. But these were not its preferred outcomes. In several cases, Ireland’s draft decisions contained no fine until EU peers in France, Germany, Austria and Spain objected.
The European Data Protection Board (EDPB) overruled our DPC and instructed it to impose penalties.
The pattern is clear: when Ireland acts, it is often because Europe pushes. In one case, Ireland pushed back, suing the EDPB to block an order requiring further investigation into Meta. It lost both the case and much of its credibility.
After scandals such as Cambridge Analytica, governments wanted to act. But the question became what to regulate, and Big Tech made sure the answer suited them. They embraced privacy laws, cookie banners and compliance forms and shifted the debate from how lies spread to how data is stored.
Privacy became the shield; identity remained the loophole. While regulators obsessed over paperwork, anonymous accounts and troll networks thrived. The danger was never surveillance; it was the collapse of accountability.
For 10 years we’ve been protecting data rather than protecting truth. GDPR made us feel secure but did little to stop defamation or industrial-scale misinformation. We built a bureaucracy instead of a backbone.
Tech companies claim that requiring users to verify identity would endanger dissidents and be impossible to enforce. It’s a clever argument – not entirely false, but also entirely self-serving. Anonymity fuels toxicity; toxicity drives engagement; and engagement drives profit.
Yet identity verification is neither new nor complex. Revolut, the online bank, has verified the identities of more than three million Irish users under money-laundering legislation. The process is mandatory and takes minutes, requiring photo ID and a short video or facial scan.
If a fintech company can verify millions of users to prevent fraud, then billion-dollar social networks can do the same to prevent abuse. This is not a radical idea but an existing principle: we already demand verification to protect our financial systems, but refuse to apply it to safeguard public discourse. The barrier isn’t technology – it’s our lack of will to legislate.
Ireland’s regulatory culture prizes procedure over principle. Whether on defamation reform, online safety or data protection, we prefer fairness and consultation – the language of neutrality that avoids confronting power.
That has served us economically. It has also served Big Tech perfectly. Now, with artificial intelligence racing ahead, we stand at the same cliff edge.
Social media’s power came from personalisation – recommendation engines shaping our attention. AI’s power comes from our personality – learning who we are and reflecting it back in human form, thereby shaping our emotions.
When tech companies derive revenue from emotional imitation, from engagement that feels personal, regulation can no longer remain procedural; it must become flesh – in other words, it must become law.
Europe’s AI Act, due to take full effect in 2026, is hailed as a breakthrough, but enforcement will again fall to national regulators – in practice, Ireland.
The law contains a crucial loophole: providers of “stand-alone high-risk AI systems” can assess their own risk. Their duties are mostly procedural – setting up risk-management systems and producing documentation. In effect, they mark their own homework. It’s GDPR all over again: ambitious in theory, timid in practice.
If the past decade was about privacy, the next must be about authenticity. Real AI regulation must rest on accountability, identity and independence.
Every major AI system should have a named, legally responsible individual – someone answerable for harm, bias or defamation, just as corporate and environmental law already requires.
Crucially, AI-generated voices, text and images must be clearly labelled and traceable. Counterfeit authorship should be treated as seriously as counterfeit currency.
We also need a single expert body capable of acting decisively. A portion of Big Tech’s tax receipts should fund a truly independent AI authority – separate from the Department of Finance and the DPC – with the power and resources to investigate, suspend and fine.
In sectors such as healthcare, education, media and justice, AI systems should face a full public-interest test – the digital equivalent of an environmental impact review. The greater a technology’s reach into human life, the higher the obligation to prove it serves the public good.
We missed the chance to shape social media before it shaped us. With AI there will be no second bite of the cherry. If we let profit logic outrun moral logic again, Ireland won’t just host the digital future – it will surrender it.
Ian Dodson is chief executive of AICertified (aicertified.net)