
Why we should be talking about AI in higher education

Could the higher education system benefit from AI, and what are the chances of it being implemented in the near future?

By Mark Ross, Second Year, French and Politics

Artificial Intelligence is everywhere: phones are talking back to us, cars are driving themselves and machines are even writing articles (not this one, I promise). However, despite lecturers only appearing in pixelated form nowadays, the education sector is still firmly in the hands of humans. Could the higher education system benefit from AI, and what are the chances of it being implemented in the near future?

First, a much-needed definition. Artificial Intelligence is the umbrella term for a computer’s ability to ‘mimic the perception, learning, problem-solving and decision-making’ of the human mind. It is the processing of data into useful, human-friendly outcomes. For example, a TV making personalised Netflix recommendations is considered AI! It is a complex subject and difficult to summarise concisely; for more information, maybe just ask Siri.

To understand its potential, let’s look at how AI is currently used. Standard, mundane tasks are easy pickings for even basic algorithms. Matching Uber drivers to passengers, for instance, is AI processing data to perform a function previously reserved for human operators.

But AI is also tackling increasingly complex tasks. Algorithms predict what we are going to buy and target ads accordingly. They can ‘outperform’ doctors in diagnosing breast cancer and can even create works of art. The list is endless.

Returning to education, this is enough to put a smile on any overworked lecturer’s face. The potential is clear. Basic marking and information processing could be automated within seconds; no more waiting for the results of an assignment you submitted months ago.

This could apply to all subjects, even arts and humanities. Several American states are already using ‘Natural Language Processing’ systems to score student essays by spotting similarities between new submissions and previously marked, successful ones.
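
For the curious, here is a rough sketch in Python of what similarity-based marking might look like. Everything in it is invented for illustration: the essays, the marks and the choice of TF-IDF with cosine similarity are assumptions, not a description of the systems those states actually run.

```python
# A minimal sketch of similarity-based essay scoring, assuming a small set of
# previously marked essays is available. Uses scikit-learn's TF-IDF vectoriser
# and cosine similarity; real NLP marking systems are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training data: essays already marked by human examiners.
marked_essays = [
    "The French Revolution transformed European politics...",
    "Industrialisation reshaped the British class system...",
]
marked_scores = [72, 58]  # percentage marks awarded by the humans

new_essay = "The Revolution of 1789 changed politics across Europe..."

# Represent every essay as a TF-IDF vector over the same vocabulary.
vectoriser = TfidfVectorizer(stop_words="english")
vectors = vectoriser.fit_transform(marked_essays + [new_essay])

# Compare the new essay against each previously marked essay.
similarities = cosine_similarity(vectors[-1:], vectors[:-1])[0]

# Predict a mark as a similarity-weighted average of the human-awarded marks.
predicted = sum(s * m for s, m in zip(similarities, marked_scores)) / similarities.sum()
print(f"Predicted mark: {predicted:.0f}%")
```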

Standard, mundane tasks are easy pickings for even basic algorithms

Equally, AI could enhance the quality of marking at universities: Ofqual, the examinations regulator, is currently exploring the use of AI to spot ‘outlying’ exam grades and so highlight human errors in the marking process.
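
To illustrate the idea of flagging outliers (and only the idea: Ofqual has not published an algorithm, and the double-marked data below is invented), a toy version might flag scripts where two markers disagree unusually strongly.

```python
# A minimal sketch of flagging 'outlying' marks for human review, assuming each
# script has been double-marked. It uses a simple z-score on the disagreement
# between the two markers; this is a plausible simplification, not Ofqual's method.
from statistics import mean, stdev

# Hypothetical data: (candidate ID, first marker's mark, second marker's mark)
scripts = [
    ("A1", 64, 66), ("A2", 71, 70), ("A3", 58, 38),
    ("A4", 80, 79), ("A5", 55, 55), ("A6", 67, 68),
]

differences = [m1 - m2 for _, m1, m2 in scripts]
mu, sigma = mean(differences), stdev(differences)

# Flag any script whose marker disagreement is unusually large for this batch.
for (candidate, m1, m2), d in zip(scripts, differences):
    if sigma and abs(d - mu) / sigma > 1.5:
        print(f"Review {candidate}: markers gave {m1} and {m2}")
```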

In a system where ‘half’ of UK academics are stressed and ‘40 per cent’ think of quitting, using AI to lighten lecturers’ workload could avoid an imminent and serious shortfall of teachers.

AI could also personalise learning. In an (almost) post-COVID world, blended learning is likely to continue, distancing students from their lecturers. Algorithms such as MIP Politecnico di Milano Graduate School of Business’s ‘FLEXA’ can help to combat this. The programme finds gaps in students’ knowledge and builds each student a personalised learning timetable that targets their weaknesses.
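
As a purely illustrative sketch (FLEXA’s real algorithm is not public, and the topics, scores and study budget below are made up), personalisation of this kind could be as simple as giving more study time to the topics where a diagnostic quiz shows the biggest gaps.

```python
# A minimal sketch of gap-based timetabling, assuming diagnostic quiz scores
# per topic (0-100) and a fixed weekly study budget. Weaker topics get more time.
quiz_scores = {"Microeconomics": 45, "Statistics": 70, "Accounting": 85, "Strategy": 60}
weekly_hours = 10

# Weight each topic by its knowledge 'gap' (distance from full marks).
gaps = {topic: 100 - score for topic, score in quiz_scores.items()}
total_gap = sum(gaps.values())

timetable = {
    topic: round(weekly_hours * gap / total_gap, 1)
    for topic, gap in gaps.items()
}

for topic, hours in sorted(timetable.items(), key=lambda kv: -kv[1]):
    print(f"{topic}: {hours} h/week")
```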

With only 34 per cent of UK students feeling motivated to work during lockdown, this tool could perhaps focus students, and persuade fewer of them to swap Blackboard Collaborate for TikTok during this last year.

Using AI to lighten lecturers’ workload could avoid an imminent and serious shortfall of teachers

Clearly, AI could make the lives of students and lecturers easier. Universities may be peeking into Pandora's box of possibilities: Bristol, for example, already uses several plagiarism-detecting and CV-enhancing AI programmes. But only a toe has been dipped in the AI water. Given its potential, why is the use of AI so limited in the education sector?

Firstly, these technologies come with a hefty price tag. The University’s ‘Turnitin’ software, for example, costs just over £2 per student per year. Multiply this by the twenty thousand or so undergraduates at the University of Bristol and the expense becomes clear: upwards of £40,000 a year for a single piece of software. In the context of COVID-induced redundancies, finding money for technological luxuries will be difficult.

Secondly, the British public is suspicious, if not fearful, of AI. A recent survey concluded that 67 per cent of UK adults are concerned about their jobs being replaced by machines. Additionally, popular culture’s fascination with dystopian and doomsday narratives – from Terminator to Transformers – perpetuates distrust. Against this background, integrating AI into our lives becomes an even more challenging task.


Can we really trust machines to score exams based on historic successful ones, with all the nuance and subjective judgement that marking requires? If we can, are we permitting algorithms to decide who succeeds in the future and who does not? To answer ‘yes’, the British public will need more time to build its trust in AI.

What does this mean for the future? In the short term, a post-COVID economy is arguably infertile ground for an industry requiring substantial investment and public support. But given its existing use and glaring benefits for today’s generation of students, the issue of AI in education is definitely a question of when, not if.

Featured Image: Epigram / Julia Riopelle (CANVA)


What do you think about having your education increasingly put into the hands of artificial intelligence systems?
