I was recently on the campus of the University of California, Los Angeles, returning to a room I now make a point of visiting when I’m in the city. It has become something of a strange pilgrimage. The room is 3420 in Boelter Hall, a very ordinary, very small computer lab, preserved as it looked decades ago.
Standing outside its locked glass door, it’s impossible not to fall into a state of deep, unnerved reflection and think about the profound impact this room has had on all of our lives. I wonder, if time travel existed, would we wind the clock back 54 years and shout “stop”?
It was here, on the evening of October 29th, 1969, that a group of students under the supervision of Leonard Kleinrock sent the first message to Stanford Research Institute over the Arpanet, the precursor to the internet. The intended message was “LOGIN”. However, the system crashed before those five letters could be sent, and so the first message sent over this network was “LO”. The machine that processed the message, IMP No. 1 – essentially a router – still stands in the room, a steel-encased thing that looks like a drawer-less filing cabinet. An IMP saying LO is a pretty poetic beginning to the internet.
Last week, Geoffrey Hinton, the artificial intelligence (AI) pioneer, quit his job at Google and added his voice to the growing number of critics all saying the same thing about AI: slow down, this thing is moving too fast. I wonder if he has thought about a time machine too. But beyond that fantasy, the fundamental, existential question of how far we go with digital technology has been put to us for a long time: do we retain our sense of humanness and authentic selfhood, or do we forsake autonomy to machines?
In a recent interview with the Guardian, the computer scientist Jaron Lanier said of the rapid evolution of AI: “From my perspective the danger isn’t that a new alien entity will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”
This seems a likely outcome for many people. In this current phase of AI’s game – in which the generation of artificial imagery, sound and text causes a sense of doubt, wonder or suspicion – we encounter sensational images and think, “Is that real?” We experience this now as a slow kind of whiplash, but it will quicken. It is the inevitable trajectory of a post-authentic culture.
I would add to Lanier’s prediction that the issue is not AI becoming more human-like; it is humans becoming increasingly digital entities. There’s that choice again: to be human or to be digital? Is AI the cause for concern? Or are we, in fact, the wannabe AI we need to be worried about?
We can already see the kind of insanity that living too digitally creates. There is the unhinged narcissism of those in active social media addiction, who manufacture digital selves that then refract back to influence their “real” selves. There are the radicalisation pipelines of rabbit-holing conspiracy theorists and culture war soldiers, who battle in halls of digital funhouse mirrors, catastrophising in a sort of deluded psychosis divorced from reality.
Lanier’s prediction brings me back to Kleinrock. In 2013, he spoke at the Web Summit in Dublin and touched on the Fermi paradox: if there’s such a high likelihood that extraterrestrial life exists, why hasn’t it got in touch with us? One possible explanation is that by the time a civilisation becomes technologically advanced enough to contact distant civilisations, the technology it has created destroys it. But perhaps there is a less nihilistic “answer” to the Fermi paradox, one proposed in a 2022 paper by Michael Wong of the Carnegie Institution for Science’s Earth and Planets Laboratory and Stuart Bartlett of the California Institute of Technology’s Division of Geological and Planetary Sciences. That is, broadly: as civilisations approach the technological advances needed to contact other civilisations intergalactically, they stall their growth in order to reprioritise and save themselves from the toll such “innovation” takes on their planets and societies – from “asymptotic burnout”, or ultimate crisis.
Contemporary digital innovations are often answers in search of questions. It will be the digital drudge work that AI takes over first – likely causing mass white-collar unemployment – before human creativity is erased by ChatGPT writing all of our film scripts and DALL-E making all of our visual art. If anything, because AI can replicate the most middle-of-the-road stuff, there’s a chance we may enter an extremely interesting phase in human art and creativity, one that strives to differentiate itself from the homogeneity of AI-generated work. Perhaps people will become increasingly interested in and stimulated by what is human-made (hard), not machine-generated (easy).
But the prospect of being overly nudged, or even overtaken, by algorithms and artificial intelligence demands a personal reality check. The choice we have now, on an individual level, is either to continue to entangle ourselves in the digital mesh we’re caught in or to interrupt the spectacle, step outside and live authentically, in reality.