Generative AI (GenAI) has moved from the sidelines to the centre of how we work, communicate, and make decisions. It’s powering everything from financial advice to healthcare recommendations and automating content creation. Yet, as AI becomes more embedded in our daily lives, we have to ask ourselves a crucial question: can we trust it?
This question was brought into sharp focus by a recent Deloitte survey of more than 2,500 people in Ireland, which offered some interesting insights. While over half of consumers (52%) trust AI-generated medical advice, that trust falls to 42% when a doctor uses AI to assist in delivering the same advice. Here’s where it gets interesting: the findings show that while consumers trust AI to perform specific tasks, that trust erodes when professionals use AI as part of their own decision-making processes. This perhaps points to a deeper unease.
What our survey tells us is that while many are ready to trust AI directly, they also become wary when professionals incorporate it into their professional work practices. It’s not AI itself that worries consumers – it’s the perception that human professionals might rely too heavily on it, potentially diminishing the value of their own expertise.
Generative AI refers to algorithms that can create new content in various modalities based on the data they’ve been trained on. Whether summarising news articles, recommending investments, or providing medical advice, GenAI is helping people make more informed decisions. In fact, Deloitte’s Irish survey shows that nearly three-quarters (73%) of respondents trust AI to generate summaries of news articles, while 67% trust it to create better work experiences for employees.
Building trust
The potential of AI is enormous, and companies across all sectors are investing heavily in its development. Deloitte’s survey provides a snapshot of both optimism and caution. While 73% of respondents believe AI can help businesses improve their products and services, just over half (57%) think that regulations would increase their confidence in AI use. While early adopters are already embracing the technology, the wider public remains more sceptical.
Of course, trust in AI cannot be assumed – it must be earned. This is where transparency and regulation come into play. The European Union’s AI Act is one of the most comprehensive efforts to ensure AI is used responsibly. By categorising AI systems based on their risk and requiring transparency about how they work, the Act aims to build a more robust framework for AI deployment. It’s a very necessary step to address concerns around privacy, data security, and bias.
A multifaceted approach
But regulation alone isn’t enough. Businesses must also take a proactive role in ensuring trustworthy AI adoption. In our work with clients, we find that people want to understand how AI can deliver meaningful value across their businesses, but they also want to feel in control of the process. Client organisations are now looking beyond the standard efficiency gains to more personalised solutions aimed at improving the end-user experience. This includes everything from generating customised scripts for promotional advertisements to providing real-time assistance to employees.
In practice, we are also seeing significant investments in organisations building their internal capabilities through data modernisation, digital transformation, new talent and the upskilling of existing staff.
‘This is about enhancing human expertise, not replacing it’
But ensuring AI is used in conjunction with, rather than in place of, human judgment remains crucial. Every tool, whether a hammer or an algorithm, can be used in a benign or malignant way. Building trust in AI will remain a human challenge. One of the biggest concerns we hear is the potential for AI to create misinformation. However, these algorithms are human-created.
Those using AI need to know that while it can assist in diagnosing a condition, recommending a treatment or improving time management, the final decision still rests with a qualified professional. Remember, this is about enhancing human expertise, not replacing it.
While the public is excited about AI’s potential, there are also valid concerns that need addressing around data privacy and security. Businesses must not only comply with regulations, but also invest in building AI systems that are fair, unbiased, and transparent in how they operate.
Human judgment
Much of the public discussion around AI focuses on visible applications – chatbots, automated content creation, and personalised recommendations. But behind the scenes, AI is transforming industries in ways that are less obvious, but equally dynamic. In healthcare, for instance, AI is being used to analyse vast data sets of genomic information, leading to more personalised treatment plans. In finance, AI systems are reshaping how banks assess creditworthiness, offering more tailored products based on a comprehensive analysis of consumer data. Another example is automated real-time price adjustment based on customer and competitor data.
Yet these applications come with new regulatory challenges, and the European Union’s AI Act categorises systems like these as high-risk, meaning they are subject to stricter compliance requirements. For businesses, this means carefully balancing innovation with regulatory standards.
The future of AI is exciting. When used correctly, AI can empower professionals to make better decisions, faster and more accurately. But its success will continue to hinge on one key factor: trust. To build and maintain this trust, businesses must focus on making AI understandable and transparent, including clear communication about how it is being used. Ensuring AI is used responsibly, with appropriate safeguards and human judgment at the centre of the process, is essential to unlocking its full potential.
Deloitte’s Emmanuel (Manny) Adeleke leads the delivery of AI and Data services across Ireland. Visit deloitte.ie to learn more