Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

In AI we trust – or do we?

Always having a ‘human in the loop’ is one suggestion for ensuring trustworthy artificial intelligence

While machine learning models and algorithms do not inherently have biases, they are trained on and learn from human-generated data in many instances

Artificial intelligence is devoid of emotion: it makes decisions based on the information available to it and on the biases of those who trained it. That creates the potential for unfairness.

It’s something the EU’s forthcoming AI Act is tackling head-on. The Act aims to ensure that AI developed and used in Europe aligns with the bloc’s rights and values, including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing.

AI systems with an unacceptable level of risk to people’s safety will be prohibited, such as those used for social scoring – the classification of people based on their social behaviour or personal characteristics.

Providers of foundation models – powerful machine learning algorithms trained on vast amounts of text data such as that hoovered off the internet – will have to assess and mitigate possible risks to health, safety, the environment, democracy and the rule of law.


Generative AI systems, such as ChatGPT, will have to comply with transparency requirements too: disclosing that content was AI-generated, helping to distinguish so-called deepfake images from real ones and ensuring safeguards against generating illegal content.

Detailed summaries of the copyrighted data used for their training will have to be made publicly available.

That’s all good, because the risks are very real.

“While machine learning models and algorithms do not inherently have biases, they are trained on and learn from human-generated data in many instances. This data can contain various biases that exist within humans and societies,” explains Emmanuel Adeleke, artificial intelligence and data partner at professional services firm Deloitte.

“Many investigative studies have revealed that AI systems can learn and disseminate these biases on a large scale. While many organisations are considering adopting AI it is imperative to ensure that relevant frameworks are in place so that AI systems do not learn and perpetuate the same biases as humans.”

Initiatives such as explainable AI, software which is programmed to describe its decision-making processes, will help, as will the fact that AI constantly takes in feedback, making minor corrections, to continually improve performance.
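
To make the idea concrete, here is a minimal sketch in Python of what “describing its decision-making” can mean in practice, using scikit-learn’s decision trees. The loan data and feature names below are invented purely for illustration; they are not drawn from any real system mentioned in this article.

```python
# A minimal sketch of explainable AI: a model whose decision rules
# can be printed and inspected. All data here is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical loan applications: [income (thousands), existing debt (thousands)]
X = [[30, 20], [80, 10], [45, 40], [90, 5], [25, 30], [70, 15]]
y = [0, 1, 0, 1, 0, 1]  # 0 = declined, 1 = approved

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a black-box model, the learned rules are human-readable,
# so any individual decision can be traced to the features behind it.
print(export_text(model, feature_names=["income", "existing_debt"]))
```

Printing the learned rules is the simplest form of the technique; richer XAI tools attach a per-decision explanation to more complex models, but the goal is the same.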

Such safeguards are important because AI isn’t out there in the future, it’s already here, from the recommender systems on your movie platform to the lane assist in your car.

As with any technological advance, it brings both opportunities and challenges, says Karl Flannery, chief executive and co-founder of Storm Technology.


“There will be unintended consequences. But I do believe AI can be a force for good if, for example, you need specialised knowledge or capacity, whether medical or tax accountancy, that you might not otherwise be able to gain access to or afford,” he says.

The rapid take-up of ChatGPT has been a “wake-up call” for legislators, he adds.

“This is a technology society has to adapt to. We have to put in place the regulations to have trustworthy AI in Europe,” he says.

There is fear now around both the power of AI and the pace at which it is having an impact. “Absolutely I can understand the fear,” says Una Fitzpatrick, director of Technology Ireland at IBEC. “At the end of the day humans have built AI, and humans have bias, so the risk is that they have built bias into the system.”

She too believes the key to assuaging this fear, and to mitigating risk, is to develop trustworthy AI. That includes ensuring there is always a “human in the loop” – both at the training and the testing stages of building an algorithm.

“The lexicon around AI is developing all the time and now we have XAI, or explainable AI, which is very linked to trustworthiness because if you can explain it, people and society are comfortable with it,” says Fitzpatrick.

But part of the problem is that even where AI is trained not to take protected characteristics, such as gender or ethnicity, into account in its decision-making, it might learn to use proxy characteristics instead, cautions Cal Muckley, professor of operational risk, banking and finance at the UCD College of Business.

For example, an AI-driven software programme selecting CVs for an engineering firm might take into consideration the fact that most current staff are male. And while it will not select on the basis of gender, it might skew towards certain sports, or all-male schools.
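
A rough sketch of the mechanism Muckley describes, with wholly invented data: gender never appears as a feature, but a column that happens to correlate with it in the historical hiring records does, and the model can learn to weight it.

```python
# Invented illustration of proxy bias: gender is excluded from the
# features, but "all_male_school" correlates with it in the historical
# hiring data, so the model can learn to favour it anyway.
from sklearn.linear_model import LogisticRegression

# Features per CV: [years_experience, attended_all_male_school]
X = [[5, 1], [6, 1], [4, 1], [7, 0], [6, 0], [4, 0]]
# Past hiring outcomes skewed towards the proxy group.
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)
print(dict(zip(["experience", "all_male_school"], model.coef_[0])))
# The proxy column typically ends up with a clearly positive weight,
# even though gender itself is nowhere in the data.
```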

Muckley specialises in loan decisions. In that field the risk is that the AI could make proxy decisions based on its “knowledge” that a particular postcode has a high proportion of people of a particular ethnicity, perhaps charging these people a higher interest rate.

The fact that models are trained using previous decisions can further distort the results. Equally, in the scenario above, a person charged a higher interest rate will be more likely to struggle to repay the loan, and those defaults then feed back into the training data, creating a vicious circle.
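
As a toy illustration of that feedback loop (every number below is an assumption, not real lending data): suppose defaults become more likely as the rate rises, and the model responds to the defaults it observes by raising the rate again.

```python
# Toy simulation of the feedback loop: a higher rate makes default
# more likely, and retraining on those defaults raises the rate again.
rate = 0.05  # assumed starting interest rate for the affected group
for year in range(5):
    default_prob = 0.02 + 0.5 * rate   # assumption: defaults rise with the rate
    rate += 0.1 * default_prob         # assumption: the model prices in observed risk
    print(f"year {year}: rate {rate:.3f}, default probability {default_prob:.3f}")
```

Each pass through the loop nudges the rate higher, which is the vicious circle in miniature: the model’s own output worsens the data it will next learn from.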

The EU’s AI Act is designed to provide guardrails but it’s not yet clear what they will consist of, Muckley points out.

But there are positives too, including the fact that machine learning makes decisions based on concrete data. “That opens the door to inclusivity, especially for people with ‘thin files’,” he points out, meaning people without a credit history.

There is just one problem. “At the heart of this are the right to privacy and the right to not be discriminated against. Traditionally privacy was there to protect you against being discriminated against but if the information is private, how do they get it?” asks Muckley.

“So there is now a tension between the two that hasn’t been fully figured out. The AI Act will have guardrails so that you observe the sensitive class data but GDPR makes it impossible to record that data and use it in that way. That’s a hot topic on the legal side.”

Sandra O'Connell

Sandra O'Connell is a contributor to The Irish Times