
Sharper AI skills key to firms beating cybercriminals at their own game

As Irish businesses embrace generative AI to boost productivity, cybercriminals are weaponising the same tools

AI has amplified the scale and sophistication of cybercrime but, at the same time, companies are using AI to spot threats faster and respond more quickly

Artificial intelligence, particularly generative AI, has been leapt on by Irish businesses, but the technology is a double-edged sword when it comes to a company’s cybersecurity.

Large language models (LLMs) such as ChatGPT have enabled cybercriminals to put a new spin on old classics when it comes to targeting businesses.

“Generative AI, LLMs, and AI tools that create realistic audio, video and images have fundamentally altered the cyber threat landscape,” says Ivan Jennings, senior manager for solution architecture at Red Hat. “Over the past 12 to 18 months, its impact has been seen in two key areas: amplifying the scale and sophistication of existing threats and creating entirely new attack vectors.

Ivan Jennings, Red Hat senior manager for solution architecture

“Traditional phishing emails were often identifiable by poor grammar, awkward phrasing and generic content. Generative AI has eliminated these red flags. Threat actors can now use LLMs to create grammatically perfect, contextually relevant and stylistically convincing emails.”

With AI able to mimic the tone and style of even a usually trusted contact, it has become more difficult for humans to differentiate a fake message from a real one. There is also a risk that workers unwittingly hand over sensitive company data to outside parties simply by using AI tools.

“The risk of inadvertently leaking your data to an AI company has increased. Many employees will share your internal data with AI to get an immediate task done,” says Paul Browne, AI, cyber and digital product portfolio manager at Enterprise Ireland.

“They don’t realise that the AI company has the right to use this data in future training runs. A future version of the AI may be so good as to put your company out of business.”

Paul Browne, Enterprise Ireland AI, cyber and digital product manager

Browne also warns that the rush to use AI to its fullest extent can leave companies more vulnerable to hackers.

“By gathering and distilling company data, a company often gathers key information into one place and makes it more valuable if an attacker is able to steal it,” he says. “AI professionals and cybercriminals, while obviously very different groups, share one thing in common: the value they place on your company data.”

Jennings says there is a skills gap that needs to be addressed to meet the challenge of AI.

“The biggest challenges companies face are a complex mix of skills, governance and tooling, with governance and the skills to implement [AI] effectively emerging as the most significant and interconnected hurdles,” he says.

“Governance and standardisation are arguably the most critical and complex challenges for companies when integrating AI. There is a lack of universally adopted standards and best practices for integrating AI into cybersecurity.”

Puneet Kukreja, EY head of cyber in Ireland, holds a similar view when it comes to skills, and contends that there needs to be a general rethink among businesses about what skills should be prioritised.

Puneet Kukreja, EY head of cyber in Ireland

“Cybersecurity professionals are not typically trained to understand algorithmic decision making, while data scientists often lack context for security risk,” he says.

“The EU AI Act and NIS2 are raising the bar for accountability, especially in regulated sectors. The solution to this lies in building cross-functional governance that connects cyber, legal, data and operations in a unified model of responsibility.”

Although concerns abound, there are many reasons to be hopeful. AI is proving to be a useful tool for those fighting cybercriminals.

“Companies are using AI to spot threats faster, respond more quickly and ease the workload on security teams. It’s especially useful for real-time monitoring, detecting unusual activity and triaging and automating responses to incidents,” says Leonard McAuliffe, partner in the cybersecurity practice at PwC Ireland.

Leonard McAuliffe, PwC Ireland

“It can automate routine tasks like threat monitoring, alert triage and incident response. This frees up human teams to focus on strategy and decision making. Whether you’re a start-up or a global enterprise, AI helps scale your defences and stay ahead of fast-evolving cyber threats.”

Indeed, the very methods used by would-be hackers can be used by a company’s own IT security team or partners to improve staff awareness and behaviour.

“AI enables the security professional to craft realistic-looking phishing messages for this scenario. Based on experience, it is often more technically educated people that get caught out,” says Browne.

“AI enables time-poor cybersecurity professionals to carry out these tests and to respond to colleagues.”

The real challenge, according to Browne, is for businesses to know when to take the plunge and engage AI in their cyber defences.

“For most Irish companies, the challenge is finding the time to take the first step. Most business people we speak with have cybersecurity on their agenda already, but as a ‘next week’ item since everybody is so busy,” he says.

“The challenge is that as AI makes it more cost-effective for attackers to target smaller companies, you may not have until next week before a cyberattack.”

Kukreja says companies can’t simply wait around when it comes to using AI in their cyber strategy. The reality is that it’s an area that needs to be embraced now.

“AI will be a core enabler of cyber resilience across all industries. From accelerating threat detection to automating incident response, AI has the potential to scale capabilities that were once limited by human bandwidth. It can spot patterns in complex systems, react faster and support strategic decisions when time matters most.”

Emmet Ryan

Emmet Ryan writes a column with The Irish Times