Before we can come to terms with the rise of chatbots, we have to grapple with a difficult scientific concept: bullshit. It is what chatbots do supremely well.
The best guide to this crucial notion is the classic essay by the American moral philosopher Harry Frankfurt called, simply, On Bullshit. Frankfurt published it more than 20 years ago and I don’t imagine he was thinking about AI language generators, but it is starkly relevant to how they function.
Frankfurt makes a critical distinction between the liar and the bullshitter. It helps us to understand that whatever the problem with chatbots may be, it is not that they tell lies.
The liar is actually concerned with truth. If I write here that I am currently in Timbuktu, this is a lie only if I know damn well that I am not.
The liar cares enough about the facts to deny or obscure them. As Frankfurt puts it, “Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game.”
But the bullshitter is not playing the game at all. “The fact about himself that the bullshitter hides… is that the truth-values of his statements are of no central interest to him… It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.”
It is, writes Frankfurt, “just this lack of connection to a concern with truth – this indifference to how things really are – that I regard as of the essence of bullshit”.
This seems to me to cut to the heart of what a chatbot does. It is constitutionally indifferent to how things really are.
It has no convictions – even about the difference between a lie and a fact. It is concerned only with the reproduction of the linguistic patterns it finds in its datasets.
I asked ChatGPT to “write a column in the style of Fintan O’Toole”. It chose, without prompting, to write about Ireland and immigration.
It started reasonably well: “Ireland is a country of contradictions. On the one hand, we pride ourselves on being a welcoming and hospitable nation, renowned for our warmth and generosity of spirit. On the other hand, we are haunted by a history of exclusion and division, which has left deep scars on our collective psyche.”
It began to slip when it suggested that Irish people are neatly divided between those who want to close our borders completely and those who feel “a moral obligation to help those in need” – a simplistic division that shows little concern with “how things really are”.
And then it lapsed into pure bullshit: “In reality, the issue is more complex than either side would have us believe. There are no easy answers, no quick fixes. But one thing is clear: we cannot continue to ignore the plight of those who are caught up in this system, who are denied the basic human rights that we take for granted.”
It continued: “Ultimately, the debate around immigration is not just about policy or politics, it is about who we are as a nation. Are we the welcoming and hospitable people that we like to think we are, or are we something else entirely? The answer, as always, lies somewhere in between. It is up to us to find it.”
I am aware that some readers may well think I do indeed write this kind of stuff. But, while I may well write rubbish, I don’t write bullshit.
Why? Because the difference between what I would write on this subject and what ChatGPT concocts is that I have an opinion and the bot does not.
The bot can produce an “on the one hand” and an “on the other hand”. It has no capacity to make a judgment between these binary oppositions.
For me, there would be something at stake. I think and feel things and I am trying to persuade you that you should think – or at least think about – them too. So I have to make choices about what I am going to write, and those choices are the product, not of linguistic patterns, but of my values.
For ChatGPT, values do not exist. It produces words but it has no way of caring about their meaning in the world. And this is the essence of bullshit – it is language in which nothing is at stake.
Those passages generated by ChatGPT for this little experiment seem to conform precisely to Frankfurt’s description of the bullshitter: “she concocts it out of whole cloth; or, if she got it from someone else, she is repeating it quite mindlessly and without any regard for how things really are”.
What’s significant about this aptness, though, is that it reminds us that chatbots are not inventing bullshit. They may even help us to become more sensitised to it.
What’s the difference, after all, between what ChatGPT does and what a skilled college debater can train herself to do? Both learn the art of weightless language; both are indifferent to the truth value of their statements.
In her 2015 essay Even If You Beat Me, Sally Rooney wrote this about her facility as a world-class debater: “You think the concepts, and then the concepts express themselves. You hear yourself constructing syntactically elaborate sentences, one after another, but you don’t necessarily have the sensation that you are the person doing it.” Does that depersonalised linguistic flow not sound eerily like the workings of the machine?
But it’s not just college debating. If you have the misfortune to have to read or listen to Dáil debates, you will have the same sensation of party backbenchers mechanically vocalising scripts they have not written and words that carry no weight of conviction or emotion.
Bullshit is all around us. We expect to hear it every time we watch or listen to the news and are exposed to those who have acquired the glib facility to strip words of their truth value.
“The essence of bullshit,” writes Frankfurt, “is not that it is false but that it is phony.” Chatbots may well become, in a narrow sense, increasingly accurate, but they can never cease to be phony.
In this, though, they may actually help us to understand what phoniness is. It is expression denuded of values.
The world will be flooded with this kind of expression, but paradoxically this tide of fakery may also drown the human chancers. It will be like what happened with textiles in the industrial revolution: the easy availability of mass-produced, machine-made stuff will make the hand-weavers of bullshit redundant.
We are going to have to learn again how to distinguish utterances, not just by their content, but by their intent. Machines do not have intentions. People do – and it is by them that we will be known.