
Banking on AI: the promises and pitfalls of a tech revolution

How are banks benefitting from the AI boom?

By Cali Stott, Masters, Mechanical and Electrical Engineering

AI’s seemingly unbounded capabilities are being implemented in just about every sector. Within business and finance, AI is not new; in fact, machine learning has been used for decades to pick stocks and place short-term bets on which assets will rise or fall. More recently, AI has brought about huge changes in customer service, fraud prevention, and back-office administration, where its ability to interpret and leverage huge amounts of data allows businesses to gain the upper hand.

Building on these earlier applications, the explosive growth of large language models (LLMs) in recent years is poised to bring about a new era in finance. This new wave of generative AI has surpassed traditional machine learning by not just analysing data but also generating human-like insights and predictions, and even drafting sophisticated financial reports. The leap in capability has the potential to transform financial industries (generative AI could boost global banking revenues by up to $340 billion annually, according to McKinsey), and the huge sums being poured into its development are enabling more advanced applications such as real-time risk assessment and automated investment strategies.

However, as with all new technologies, there are limitations. A potentially detrimental issue is the generation of false or illogical information, which is not uncommon for generative AI. Neural networks “think” by determining the most likely output for a given input, and “most likely” does not necessarily mean “correct”. This is a huge risk for banks, whose status and reputation are built on the trust of their clients. One wrong move could not only cause significant monetary loss but also drive away high-profile clients and spark damaging headlines. When errors are made by people, they can be explained and managed, but errors made by AI, especially given its novelty, will fuel doubt and backlash, putting billions at risk for any large bank that has committed to the technology.
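
To make the gap between “most likely” and “correct” concrete, here is a minimal sketch. The prompt, candidate answers, and probabilities are all invented for illustration; a real LLM works token by token over billions of parameters, but the decoding principle of choosing the highest-probability continuation is the same.

    # Toy sketch: a "model" as a table of invented next-phrase probabilities.
    # None of these figures are real; they only illustrate the decoding principle.
    toy_next_phrase_probs = {
        "The bank's Q3 net profit was": {
            "$2.1 billion": 0.46,   # fluent and common-looking, but wrong in this scenario
            "$1.7 billion": 0.31,   # the "true" figure in this made-up example
            "not disclosed": 0.23,
        }
    }

    def greedy_complete(prompt: str) -> str:
        """Return the single most probable continuation, as greedy decoding does."""
        candidates = toy_next_phrase_probs[prompt]
        return max(candidates, key=candidates.get)

    prompt = "The bank's Q3 net profit was"
    print(prompt, greedy_complete(prompt))
    # Prints the $2.1 billion answer: confidently stated, statistically likely, still wrong.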

 generative AI could boost global banking revenues by up to $340 billion annually

Another potential issue is that of bias and fairness. LLMs are trained on digital human knowledge: information put online over many decades from millions of sources. It would be naïve to think that this data carries no inherent bias, given that diversity and equality have only recently become a priority. AI does not have a “moral compass”, and whilst developments have been made to mitigate potential discrimination, there is always a risk. If an AI is trained on historical loan approval data that reflects biased human decision making, it could perpetuate or even worsen those biases, for example by unfairly denying loans to certain groups such as ethnic minorities. Once again, this would be hugely detrimental to a bank’s reputation and could lead to great financial losses.
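
As a minimal sketch of that mechanism, the snippet below trains a simple classifier on synthetic loan data (NumPy and scikit-learn assumed installed; all figures invented). Nothing in the code says “discriminate”, yet because the historical labels encode a biased process, the learned model reproduces the gap for otherwise identical applicants.

    # Minimal sketch with synthetic data: a model trained on biased historical
    # approvals inherits that bias. Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5_000

    income = rng.normal(50, 15, n)    # income in thousands (invented)
    group = rng.integers(0, 2, n)     # sensitive attribute: group 0 or group 1

    # Historical labels: the same income threshold on paper, but 40% of qualified
    # group-1 applicants were rejected by the biased human process being imitated.
    approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.4))).astype(int)

    model = LogisticRegression().fit(np.column_stack([income, group]), approved)

    # Two applicants with identical income, different group membership.
    same_income = np.array([[50, 0], [50, 1]])
    print(model.predict_proba(same_income)[:, 1])
    # The predicted approval probability is noticeably lower for group 1,
    # purely because the training labels were biased.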

Another challenge for banks is incorporating AI and finding the right talent. The exponential rate of technological growth makes it hard to decide how and when to adopt AI, given its seemingly infinite range of applications. It is crucial to hire knowledgeable experts who are well equipped to integrate AI into the business effectively, but with these roles being so new, how do you know who to trust, and what compensation is appropriate? Centralisation, having a single team responsible for AI projects, is essential for larger corporations to stop efforts fragmenting across divisions and obstructing scaling, but it requires hiring a whole new team, which is a significant shift for large organisations. Additionally, if AI is to be used to support employees, the cost and time of training them must be considered as well.

[AI's] incorporation must be done with a phased, structured, and risk-aware approach

Training and using AI, especially deep learning systems, requires huge computational power and therefore consumes vast amounts of energy. For instance, training a single LLM can emit as much carbon as five cars over their lifetimes. The powerful hardware, such as GPUs, needed to support AI applications drives up energy demand in data centres, and banks are increasingly outsourcing their computing to the cloud. Without monitoring how renewable those external energy sources are, banks risk considerably increasing their carbon footprint and contradicting their public ESG pledges, so the sustainability of AI must be considered when implementing it in a financial business.
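
The scale involved can be sketched with some rough arithmetic; every figure below is an assumption chosen only to show how such an estimate is built, not a measurement of any real training run.

    # Back-of-envelope carbon estimate for a hypothetical training run.
    # All inputs are illustrative assumptions, not measurements.
    gpus = 1_000                  # accelerators used (assumption)
    power_per_gpu_kw = 0.7        # average draw per GPU incl. overhead, kW (assumption)
    training_days = 30            # wall-clock duration (assumption)
    grid_kg_co2e_per_kwh = 0.4    # grid carbon intensity; varies widely by provider (assumption)

    energy_kwh = gpus * power_per_gpu_kw * training_days * 24
    emissions_tonnes = energy_kwh * grid_kg_co2e_per_kwh / 1_000

    print(f"Energy used: {energy_kwh:,.0f} kWh")           # ~504,000 kWh
    print(f"Emissions:   {emissions_tonnes:,.0f} t CO2e")   # ~202 tonnes with this grid mix
    # Running on a well-monitored, renewable-backed grid changes the second figure
    # dramatically, which is why the provenance of data-centre electricity matters.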

Despite the challenges, however, it is clear that AI use is skyrocketing within financial services, with 43% of respondents in NVIDIA’s latest State of AI in Financial Services Survey claiming to already use generative AI in their organisation and 97% of respondents indicating that their company will increase spending on AI infrastructure this year. There is no doubt that AI has the power to transform financial industries for the better, but its incorporation must be done with a phased, structured, and risk-aware approach.

Featured image: Wikimedia Commons/Kaptan Ravi Thakkar
