By Izzy Fraser, Second Year, English Literature
Artificial Intelligence. It’s suddenly appeared on humanity’s radar over the past few years, whether for you that’s through ChatGPT, Alexa, or the jarring bitmoji of My AI that sits smugly above your pinned friends on Snapchat. While you may roll your eyes because it’s such a hot topic of discussion lately, you’d better get used to the discourse: this tech advancement isn’t going anywhere. In fact, it’s only going to get more prominent, more powerful, and (in my not very niche opinion) more dangerous. But the inevitability of AI growing in scale isn’t a reason to get comfortable with it; rather, it’s a reason to actively reject it.
According to the Oxford English Dictionary, AI is ‘the capacity of computers or other machines to exhibit or simulate intelligent behaviour’. It is important to highlight that AI exhibits or simulates intelligent, human behaviour. It ‘adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that algorithms can acquire skills’. So, in very basic terms, AI recognises patterns and uses them to produce outcomes. In this sense, the more data we give AI, the more patterns it can find, and the more ‘intelligence’ it will exhibit. AI is present in almost all areas of life, and it undoubtedly has its positives: it can help detect crime, it has enabled significant medical advancements and (most handy for the everyday tech user) it has made life immeasurably easier.
One group that AI has made life easier for is students. ChatGPT is the most obvious example: a recent poll I carried out on social media revealed that 58% of the University of Bristol undergraduates who responded have used the software. Of these students, the majority said they used it for assignment inspiration or to ‘summarise complex reading material’. However, a portion of students claimed to have copied directly from, or slightly reworded, the material ChatGPT produced. Not only is this unfair, but language model-based chatbots work in the same way as other AI: they notice patterns, so answers are simply based on patterns of words from other data, and therefore often lack depth or are simply incorrect. This was noted by one student, who said: ‘I once went a bit too far with it and then did really badly on my essay. I just find that it’s a slippery slope’.
Secondly, I honestly think ChatGPT contradicts the purpose of education. It removes the need to think for yourself, and it takes away the creativity and nuance required for an individual to form their own argument. I’m not denying that ChatGPT makes uni work quicker and easier; however, these are just a few of the many reasons why using AI is not as effective, fair, or rewarding as completing work independently of it.
In conversation with a senior lecturer, I asked for his thoughts on students using AI and whether, as a marker, he could spot AI-generated work. The English Literature lecturer held that ‘AI generated work sticks out like a sore thumb’ and noted the ‘random hallucinated facts and quotations’ that often come with sections produced by the software. It is evident that, when copied verbatim, AI-generated work can generally be noticed by lecturers.
However, one tool that cannot currently detect work copied from ChatGPT is Turnitin, the plagiarism-checking software used by UoB. There have been suggestions that AI itself could be used to check for AI-generated work; however, while this may work to spot this type of plagiarism, the wider consequence would be overwhelmingly negative. I return to the point that any (and all) information we give AI makes it able to mirror the intelligence of humans more accurately. You may ask: what is the problem with this? The simple answer is that we have no idea what will happen. I don’t (just) mean killer robots like in the movies; I mean that there is every possibility that AI, with enough data on human experiences, thoughts and behaviour, could replicate human sentience, which, in my opinion, would be the beginning of the end.
AI is not inherently bad. It has its uses; however, these need to be carried out in a controlled environment with strict rules and regulations. What is not necessary is for AI to be at the disposal of the public. One necessary step that must be taken while we still have control over this advancement, and to prevent AI from becoming increasingly powerful, is for it to be banned in universities. This would not only make assessments fairer, but would also restore the individual’s creativity in education. On a wider scale, it would put a stop to the immeasurable amounts of information that students feed into AI, which is slowly but surely enabling the software to develop ‘intelligence’ beyond our understanding or control. We must make this change before it is too late.
Featured Image: Milan Perera
How do you feel about AI's emergence into university life? Should it be banned on an academic level?