Opinion

We are lecturers in Trinity College Dublin. We see it as our responsibility to resist AI

Even if all the known issues were magically resolved, we would still not want our students to use GenAI

The rise of GenAI underscores that we must fight for the university as a place of debate, of critique and of contestation. Photograph: Nick Bradshaw/ The Irish Times

What does generative artificial intelligence (GenAI) mean for higher education? In a recent opinion piece, Trinity College Dublin’s vice-provost Professor Orla Sheils expressed the view that, “if used wisely, AI has the potential to democratise third-level education in ways that once felt impossible”.

Meanwhile, UCD associate professor of English Adam Kelly wrote in these pages that large language models (LLMs) “are the most effective tool ever created to curtail the traditional work of universities, the cultivation of critical individual minds”.

As lecturers in Trinity College Dublin, we have carefully followed the debate about the role of GenAI in higher education. We have learned about how GenAI works and its educational, social, environmental and economic impacts. What we have learned leads us to deliberately avoid GenAI in our own work and to emphasise authentic human thinking and experience in our teaching.

We are not alone. While the dominant narrative may be that GenAI is an indispensable technology that will revolutionise education, health and our economy, we are part of a growing community of deeply concerned teachers who see it as their responsibility to resist this narrative of inevitability.

AI comes in different forms. The broader category encompasses a wide range of “machine learning” technologies, many of which have valuable applications (like predicting protein folds or tracking Amazonian deforestation). Our focus here is on GenAI chatbots (like ChatGPT, Gemini and Claude), which are powered by LLMs and popular for their ability to rapidly generate a limitless variety of text.

This impressive ability rests on an underlying algorithm that predicts the most likely next word in a sequence, based on the model’s training data and the user’s prompt. This strength is also a shortcoming: chatbots are text-prediction machines that have no understanding of the text they produce, nor any conception of truth.
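
To make this concrete, here is a minimal illustrative sketch of next-word prediction in Python. The vocabulary, probabilities and prompt below are invented for illustration; a real LLM scores tens of thousands of candidate tokens with a neural network trained on vast text corpora. The principle, however, is the same: each word is chosen because it is statistically plausible given what came before, not because it is true.

```python
# Illustrative sketch only: a toy next-word predictor.
# The word table below is invented; a real LLM learns such
# statistics from enormous training corpora via a neural network.
import random

# Hypothetical probabilities: given the previous word, how likely
# is each candidate next word?
NEXT_WORD = {
    "the":     {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":     {"sat": 0.6, "ran": 0.4},
    "dog":     {"barked": 0.7, "slept": 0.3},
    "moon":    {"landing": 0.8, "rose": 0.2},
    "landing": {"was": 1.0},
    "was":     {"faked": 0.5, "televised": 0.5},
}

def generate(start: str, max_words: int = 5) -> str:
    """Extend a prompt by repeatedly sampling a plausible next word.

    Note what is absent: any check that the output is true.
    The model only knows which continuations are statistically likely.
    """
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break  # no known continuation: stop generating
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the moon landing was faked" - plausible, not true
```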

As a result, they frequently generate plausible-sounding but incorrect responses, termed “hallucinations” (or “lying”). No wonder there is growing recognition that GenAI chatbots have been overhyped and oversold.

Beyond its unreliability, GenAI creates considerable environmental and ethical harms and impedes our students’ ability to think for themselves.

A lecturer’s job is to foster fundamental thinking skills. Photograph: Getty Images

In Ireland, data centres have gobbled up all new renewable electricity generated over the past six years, slowing down decarbonisation and keeping us dependent on polluting energy sources. While GenAI companies claim the carbon footprint of a single chatbot question is negligible, this obscures the technology’s cumulative ecological impact.

Not only do tech companies carefully select the numbers they report, but there is also little transparency about the energy demand associated with the permanent background operation of GenAI (as in Google search), or about how overall demand will grow in a future where GenAI is even more deeply integrated into our lives.

It is not a coincidence that GenAI companies are seeking reinstatement of decommissioned fossil fuel and nuclear power plants and that Google and Microsoft are scaling back their climate commitments. Other ecological and health impacts relate to electronic waste, land use, noise pollution and water use, with the clean water demand of data centres exacerbating water shortages in communities worldwide.

Then there are ethical concerns. These include the uncompensated scraping of all of humanity’s creations – its science, its literature, its every self-expression – into training data sets, an act many consider theft; the bias inherent in training data and the power of tech companies to bias outputs; the exploitation of content labellers, typically based in the Global South, who are paid little for the distressing work of reviewing training data to remove disturbing material; emerging stories of chatbot-induced psychosis and parasocial relationships that are harming vulnerable users; and the leveraging of GenAI to cut costs and squeeze workers, threatening the future job security of our graduates.

But even if these issues were magically resolved, we would still not want our students to use GenAI. Pursuing higher education is not about learning what to think, but how to think.

A lecturer’s job is to foster fundamental thinking skills – such as reading and analysis, problem formulation, information synthesis, evidence-based reasoning, and critical evaluation and reflection. Engaging with the challenging process of producing original and authentic written work is one of the ways our students build these skills and learn to link independent thinking with communicating effectively, truthfully and knowledgeably. In essence, our focus is on the process, while GenAI provides a shortcut to the product.

By using GenAI to shortcut the learning process, students undermine the very thinking skills that make them both human and intelligent. As writer Ted Chiang put it, writing is strength training for the brain: “Using ChatGPT to write your essays is like bringing a forklift into the weight room.”

In our experience, chatbots also flatten and homogenise diverse student voices and perspectives, replacing them with a superficial, generic simulation of humanity.

While GenAI proponents suggest the technology will democratise access, we question this vision. In a future of AI teachers, doctors and lawyers, only a small section of society will retain access to well-trained human experts. This is not democratisation; it is the very opposite.

The task of the university teacher today is to advocate for real thinking and real education, and for the conditions that make this possible. Rather than contemplating the possible uses of GenAI tutors and therapist bots, we should invest in providing students with human learning and psychological supports, and in addressing the pressures that leave them time-poor and outcomes-focused and make shortcuts to grade maximisation irresistible.

As part of this process, we need to reappraise what a university education is for – not merely training for a future job, but training in the kind of thinking which alone can provide the basis for the wellbeing of a human and humane democracy.

The rise of GenAI underscores that we must fight for the university as a place of debate, of critique and of contestation. We should therefore protect and promote the independent thinking skills that enable such a debate now, and in the future.

The authors are lecturers in Trinity College Dublin: Clare Kelly (School of Psychology and Department of Psychiatry, School of Medicine), Katja Bruisch (Department of History), and Caitríona Leahy (Department of German). Harun Šiljak (School of Engineering) and Norah Campbell (Trinity Business School) also contributed to this article.