Walk into any university library and a bright white website glows on the screens of many students' computers. The words "what can I help with?" appear before them. One student might ask it to summarise a literature review they need to discuss in an upcoming tutorial; another might ask it to double-check their calculations before a maths lecture. The more audacious might even ask it to generate their essay.
Since its launch in 2022, ChatGPT, an advanced generative artificial intelligence (GenAI) chatbot that can engage in human-like conversations and generate content, has reshaped conversations surrounding academic integrity, plagiarism and what it means to be a student at university in the first place. Universities across the country have released statements and guidelines on the use of GenAI, but the reality of how most students engage with these tools is complicated. While some students embrace them as a sidekick to their studies, others stay far away.
These days, students can essentially “shop around” to find the best GenAI chatbot for their academic needs. Since ChatGPT’s launch, other GenAI chatbots have popped up, such as Gemini and DeepSeek, each carrying their own strengths and limitations.
Academics have expressed varying opinions on the appropriate use of GenAI chatbots, or whether they should be used at university at all. In a recent article published in The Irish Times, Trinity lecturers Clare Kelly, Katja Bruisch and Caitríona Leahy argued they felt a “responsibility” to resist AI. Others, such as Alan Smeaton, a professor of computing at DCU, have said a more nuanced approach should be taken, describing AI as a tool students need to learn to use, rather than having it banned completely.
But how do students feel? Those entering the fourth year of their studies this month have had access to GenAI chatbots since their first year.
Ruth McGee (21) from Dundalk, Co Louth, started an arts degree in Maynooth University last year, in history with media. One of her first lectures was on plagiarism, which included “quite a big section on AI”, she says.
“I think specifically because of the degree I do, [lecturers] are very against” AI, McGee says. “So much of my two subjects is about developing your writing and reading skills.”
However, “people definitely use it”, she says of AI. “Stuff like this has existed in some form forever. Before there were the dodgy essay-writing services you could pay, or people used to occasionally get classmates to do their work or whatever, but it’s not the same.”
McGee says she “hates admitting” that she once used ChatGPT to assist her with an assignment. “I had a particular essay topic, and I could not for the life of me come up with an idea for it, so I used it to generate ideas and that was kind of it.”

However, upon fact-checking the ideas ChatGPT provided, she found that some of the information turned out to be false.
Ironically, in an assignment where she did not use a GenAI chatbot to assist her, McGee says, she was suspected of using one.
She says she was told that “the last couple of paragraphs looked suspicious, because of how they were written”. But McGee insists no GenAI chatbot was used, “it was just badly written ... I had dropped in a new source at the end of it”.
She says she didn’t contest it, saying it “was the last [assignment] of the summer, and I had a really busy summer”. She continues: “I had passed the module, so I was like, I think I’ll just leave it.”
Like many students, she has mixed feelings about the extent to which GenAI should be used to assist with assignments. “I can understand if you’re in college and you’re commuting, you’re working a part-time job ... sitting down to write an essay and spending hours doing it can be really difficult and stressful and exhausting.”
On the other hand, she feels that “if you pick that degree, you should put in the work for it”.
“If you [use] ChatGPT, say, for an essay or an assignment, or even part of it, and you get away with it, and you get a better grade than someone who sat and worked for it, I don’t think that’s fair.”
Aoibhínn Clancy (22) from Dublin, who has just finished her degree in history and political science at Trinity College Dublin, says she has watched the prevalence of GenAI increase among her peers during her time at university.
When she started her degree in 2021, “AI wasn’t really as big of a thing as it might be for people starting college today”. However, during her third and fourth years, she says, she saw an increase in the number of students using it to help them with their academic work.
Throughout her degree, Clancy says, she avoided AI and never signed up to ChatGPT. In fact, she says she was “quite scared” that she would “get in trouble with the college for using AI-created or generated work”.
“I morally find it hard to bring myself to actually use [GenAI]”, Clancy continues. “If you’re in college, you worked really hard to get there, you should want to do the course that you’re in. I think by offloading that work to AI, it’s kind of inherently going against what college is, kind of about self-learning.”
She feels that avoiding GenAI chatbots “probably did put me at a bit of a disadvantage, because I know others probably were using them for ideas and inspiration”. However, she thinks that AI would not have been much use to her anyway, particularly for her dissertation, which focused on Irish legislation relating to the crime of rape during the 1970s, and relied heavily on archival sources she found herself, rather than existing literature. “I don’t think a chatbot would’ve been much help”, she says.
Students’ conversations surrounding GenAI are not limited to the field of academic study. The debate has also entered the realm of extra-curricular activities. Since finishing her degree, Clancy has taken up the role of editor-in-chief at one of Trinity’s student newspapers, Trinity News. She has decided to take an editorial stance against the use of GenAI by students within the paper.
“We are working on updating the Trinity News style guide to state that AI-generated articles are not allowed in the paper, or not encouraged to be submitted,” she says.
“Obviously it’s harder to implement a stance that people can’t use it to generate ideas, because there is no way to police that fully, but the most we can do is not use AI-generated pieces or AI-generated art.”
Software aimed at detecting AI-generated content exists but is not foolproof.
Another Trinity student, Neasa Nic Corcráin (22) from Wexford, says she has “gotten into disagreements” while participating in group projects over the use of GenAI.
The environmental science and engineering student says “it’s kind of obvious when people use it for calculations and stuff, or coming up with conclusions, because they’re very base level, and they don’t really have any critical analysis going into it.”
Nic Corcráin started college the same year that ChatGPT was launched. “In the first semester of college, everyone was not using it, in the second semester, it was crazy the change-up, everyone was just using it for everything.”
Nic Corcráin says she is particularly concerned about AI’s impact on the environment. In her course, she learns “so much about climate change, and the impact that data centres are having on the climate, because it uses so much water”.
She also argues that the use of AI can mean that “people aren’t critically thinking” any more. She feels that students who use GenAI consistently “can’t see another viewpoint”.
She worries about the effect GenAI might have on students’ reasoning skills. “It’s very, like, black and white now, which is scary, because obviously the world is not black and white, and morals and politics and everything is very grey all the time.”
While Nic Corcráin says she doesn’t use GenAI to write assignments or research, she has used it to organise a bibliography and make a study timetable.
For others, AI represents an opportunity, rather than a threat. Eoghan Collins (19) from Galway, who is in his third year studying electrical engineering at the University of Galway, has been using AI to develop an app that he hopes will become what he terms a “study companion” for students, turning PDFs, notes and exam papers into structured study guides and revision exercises.
"A lot of things I was doing manually weren't really helping me to learn," says Collins, referring to tasks such as "making flashcards, planning what's on the exam".
His app, StudySmith, “looks at all the stuff in your course, it structures stuff so it can help you across the semester, rather than answering your questions there and then”.

Collins’s app aims to use “AI to direct your learning, so you don’t have to waste any time, you don’t worry about missing things. It does all the heavy lifting for us.”
He says those who have tested the app think it’s “pretty cool”. “It’s a fun, different way” of learning, he adds.
When it comes to using GenAI within his degree, Collins says he tries to limit his usage to fact-checking, allowing it to act as "a second pair of eyes to look over something", which he says is a "fairly common use" of the technology by students.
He says that “a lot of students” would use it to answer questions and help with assignments, but notes that educators have become “a bit more cautious” as a result, with some deciding to switch homework assessments to in-class assessments to curb the use of GenAI chatbots.
However, this practice is “lecturer-specific”, Collins says. “I even had an assessment last semester where we were told to use ChatGPT to do something, just to see what kind of job it did.”
Dr Mary-Claire Kennedy is chairperson of the National Academic Integrity Network (NAIN), established in 2019 by state agency Quality and Qualifications Ireland (QQI), which is responsible for promoting the quality, integrity and reputation of Ireland’s further and higher education system. Kennedy says “there have always been threats to academic integrity right back to the very start of education, and there always will be, and we can’t even foresee what is going to be the next threat or the next challenge”.
She says that QQI has produced guidelines for educators that have been “really instrumental for universities and other training providers in terms of their own policies”.
However, she feels that further work can be done within institutions to make students more aware of guidelines on GenAI.
“Even if your institution has a guideline [on GenAI], approximately a third of learners are aware of that. So while I think institutions are doing their very utmost to have policies and good practice guidelines, maybe how they’re being incorporated into practice needs a bit of work.”
Kennedy says “it takes some time to develop these guidelines and then to translate them into practice is the next stage. So I think it’ll be interesting to see that emerge over the next year.”
Educators are “aware that there are practices on the ground”, she says, and “rather than that traditional thing of policing and stopping, educational practice and our attitude and perspectives toward higher education need to shift slightly to move with this technology, as opposed to trying to block it out.”
She adds that “there will also then always be the case for highly secure assessment, where you know there is no possibility to utilise AI, where we’re trying to be sure and be completely confident in somebody’s knowledge or skills. So there is a balancing act to be done here.”
Regarding how different disciplines could navigate the complex issues at stake, Kennedy says that “good practice principles are the way to go, and then hand it over to the experts in the field, people in practice within an academic discipline, the educators, getting feedback from the students and navigating the course as the academic fields begin to understand what it means for their own disciplines.”