Meet the philosopher who thinks AI ought to have the same moral rights as humans

Better start sucking up to your microwave

We have to face the possibility that superintelligent computers may come to view us the way we view cats or dogs. Photograph: iStock

Scared of artificial intelligence? Greg Heffley, the comic hero of Diary of a Wimpy Kid, has some good advice: “I’ve actually been preparing for when the robots take over by sucking up to the appliances in my house.” Cut to Greg – in The Meltdown, from the bestselling teenage book series – standing in front of a microwave saying “You’re looking good today. Did you just get cleaned?”

A prudent measure given the pace of technological progress. Even if AI doesn’t turn against us in some future robot war, we have to face the possibility that our future status relative to superintelligent computers will be what cats or dogs are to humans now.

Whatever about the plans advanced AI may have for us, there is the nightmarish scenario that humans will be forced to accept that they are no longer morally special.

I say “nightmarish” because human life is already under threat from atrocities across the globe today. Imagine how much more slaughter could be licensed by the idea that a human being is no more valuable, ethically speaking, than an online application.


Dr Will Ratoff of Trinity College Dublin’s department of philosophy is alive to this concern. But from cold reasoning he has reached the conclusion that sufficiently advanced artificial people would be our moral equals. Disturbing as that may seem, “in philosophy, we have to follow the argument where it takes us”, he says.

If engineers were to build artificial general intelligences (AGIs) with human-equivalent cognitive powers, he asks, would it be okay to turn them off? “Or would it rather be morally equivalent to murdering a human being? I’m inclined to think it would be the latter.”

Ratoff, who is originally from England and who worked in the United States in AI ethics before coming to Ireland in 2022, discusses further as this week’s Unthinkable guest.

You claim AI that is cognitively equivalent to us would be our moral equals. What’s your justification?

“I think this follows from two plausible assumptions. The first is the functionalist theory of mind – the most popular and prominent theory of mental states in neuroscience, cognitive science, and philosophy of mind. According to the theory, we are conscious and possess minds in virtue of the functioning of our brains.

“But AI could be functional duplicates of us: rather than biological brains, they would have artificial neural networks that produce the same behavioural outputs in light of the same environmental inputs.

“The second assumption is the view that possessing the same cognitive powers as an adult human is sufficient for sharing our high moral standing. Why think this? Well, according to all the plausible theories of moral standing, we adult humans possess our high moral standing in virtue of our minds.

“Consider the following three-way contrast between a rock, a chicken and a human being. A rock has no moral significance. A chicken has some degree of moral significance: we cannot set it on fire just to watch it suffer. However, one can humanely slaughter a chicken for food. But one cannot ‘humanely slaughter’ a human and eat them for dinner. We have a hierarchy of moral standing.

“Intuitively, and according to philosophers, this difference is to be explained through appeal to the difference in cognitive powers. Rocks have no minds and thus no moral status ... We humans are cognitively sophisticated and can suffer in deeper ways than other beings. Thus, we have high moral standing.


“Now drawing this all together ... an AI [that was a functional duplicate of us] would possess the same high moral standing as a human and thus permanently deactivating it without its consent would be equivalent to murder.”


If you’re right, would we be morally obliged to give AI the freedom to do what it wants?

“An entailment of [my] argument is that AI [that are] cognitively equivalent to us ought to be granted equivalent freedom to a human. The consequences are pretty sweeping: human-equivalent AI ought – all else being equal, at least – to be free to do as they please, to hold the reins of their lives in their own hands, to replicate themselves, to vote in elections, to run for elected office, etc.

Dr Will Ratoff of Trinity College Dublin’s department of philosophy

“Of course, all else might not be equal: perhaps the risk to human wellbeing of AI being able to replicate themselves is so great that there ought to be a legal prohibition on AI doing that or being created with the capacity to do that. The question of the appropriate public policy to pursue here turns upon both the moral rights AI possess, if any, and considerations of the common good.”

If your view becomes commonly accepted, arguably it gives a further licence to those seeking to devalue human life. For that reason, shouldn’t we resist the argument for moral equivalence, even if theoretically there is a case for it?

“Perhaps some bad actors would regard the equivalence of human and AI moral standing as a reason to devalue human life. But I see this consideration as militating in the opposite direction: we ought to be treating any human-equivalent AI we create in the same way as we ought morally to be treating other humans – namely, with appropriate respect for their autonomy, etc. Of course, whether that is what happens remains to be seen.”


Dr Will Ratoff is speaking on Thursday, October 2nd at 7pm as part of a four-part, free public lecture series at Trinity College Dublin on AI: The Age of Disruption. The series starts on Thursday (September 26th).