An Garda Síochána will soon be able to employ an “early warning” system driven by artificial intelligence (AI) to alert them to potential future criminality, such as the targeting of asylum seeker centres.
The Irish-led, EU-funded initiative, known as the Vigilant Project, will be able to scan social media for disinformation that may lead to criminality, such as riots or terrorism.
The project is funded with a €4 million grant from the EU’s Horizon Europe scheme and is led by researchers from Trinity College Dublin and UCD. It comprises academics and figures from industry and government research centres across Europe.
“It’s a platform of tools which is designed for European police authorities to detect, analyse and investigate disinformation that links to criminal activities,” said Dr Brendan Spillane, the project’s principal investigator and an assistant professor at UCD.

“The type of thing that we’re talking about is people who are promoting violent attacks on individuals or marginalised groups in society, most recently promoting attacks on migrant centres or on IPAS [International Protection Accommodation Services] centres in Ireland or migrant camps across Europe,” said Dr Spillane.
It could also be used to monitor for the promotion or sale of false medical cures, fundraising for terrorist organisations and the spreading of extremist doctrines, he said.
The tools will be able to analyse text and images as well as the online networks of those sharing disinformation.
Currently police forces are dealing with disinformation in a “reactive” manner, said Eva Power, Vigilant’s project manager and a Trinity College Dublin academic.
Vigilant will “help police become more proactive”, she said.
She cited the Dublin riots of November 2023, when millions of euro in damage was caused to the city centre following a knife attack on a group of schoolchildren. The rioters were motivated in part by large amounts of false information that appeared on social media in the aftermath of the attack.
“[Police forces] are completely on the back foot. A lot of that is down to their lack of access to tools. This is throughout Europe,” she said.
The Vigilant toolkit will only monitor information publicly available on social media. However, it will automate the process, making it much faster and less resource-intensive.
Dr Spillane said the technology did not monitor the entire internet. “Not only do we not do it, but the police don’t want or need us to do it,” he said.
Instead it monitors areas for limited periods to establish if there is cause for further action. He compared it to police forces temporarily assigning additional patrols to a trouble spot.
The platform is at a “fairly advanced” stage but there is still technical work to do, Dr Spillane said. They hope to deploy the initial version within 10 months.
The developers are working with four police forces in the EU who will be the first to receive a live version of the software when it launches. The Garda are not part of this group but are part of a “wider community of early adopters”, said Ms Power.
“They’ve been great. They’ve been into us here. Effectively, we want to build it to suit their needs.”
Asked about the potential for the technology to be used inappropriately or for anti-democratic ends, the developers said Vigilant is built from the ground up with ethical considerations in mind. For example, it will not employ facial recognition technology, Ms Power said.
“We think we’re the right type of organisation to be developing something like this. A lot of us are coming from universities, which are typically socially liberal, very conscientious, ethically minded organisations,” Dr Spillane said.
He said the project has so far only been engaging with agencies or countries that have “a large amount of respect for human rights.
“We don’t speak to the ones that might be sliding a little bit farther away. They haven’t made contact and we won’t be talking to them.”