
Whose morals matter? Programming the ethical compass

As AI gets integrated into our lives, Thiara de Alwis has been thinking about which moral code AI follows, and who gets to decide.

By Thiara de Alwis, Computer Science, Third Year

Who gets to decide what AI thinks is moral? Is there a better way of programming in ethics than a 'first come' approach?

For the first time, AI is no longer confined to research labs and sci-fi movies: it’s being woven into our everyday lives, fully accessible to anyone with a screen and a question.

We know from scandals such as Tay (the Microsoft chatbot on Twitter that turned racist after interacting with users of the site) that AI is built to reflect the information we give it, and not always in a positive way. With every decision made and question asked, it studies not only our intelligence but our biases, prejudices, and morals. The policies and laws we put in place today will shape what kind of AI we live with tomorrow, and determine whose interests, values and worldviews define our future.

A recent Nature study on AI regulation found that different countries prioritise different values, reflecting their unique cultures. For example, US-based groups are focused on “advancing foundational AI and diverse applications”, while the EU AI strategy prioritises “social mobility, sustainability, standardization, and democratic governance of AI.”

If every country designs its AI tools based on its own moral standards, what happens when these systems go global? 

As AI models and tools spread worldwide, the ethical framework of their country of origin inevitably influences global practices - and we run the risk of a kind of digital cultural imperialism. The beliefs of one country begin to shape the lives of people in others, simply because their tools spread the fastest or the widest. 

It isn’t just nationalities either - AI development is dominated by white, male voices. Encouragingly, these spaces have begun to diversify in recent years, but the homogeneity of AI’s programmers still limits the diversity of the models and datasets themselves. If the creators of such tools are limited to one particular worldview, how can we expect their creations to account for the needs, beliefs and values of others?

Fortunately, there seems to be some common ground across regulators. A study by the AI Ethics Robotics Society found that transparency, justice and non-maleficence were the three most commonly emphasised principles in AI regulation. More interesting, though, is how these principles have developed over time. The study found that the top-cited principles were fairness and reliability in 2014, accountability in 2016, and transparency in 2018 - a shift that appears to reflect the public’s growing concern over unethical AI use. This offers one option: ground AI in the principles that resonate most with the public, basing regulations on the views of the majority. After all, it makes sense for the AI systems we use every day to reflect the preferences of the people using them.

On the other hand, we could look further back, to the classical ethical theories that have long shaped debates about responsibility and morality. Grounding AI regulation in this older tradition could give it a more reliable foundation.

Take the classic trolley problem: “you can save five people in danger of being hit by a trolley by diverting the trolley to kill just one person. Do you do it?”

Classically, this problem has been explored through different ethical perspectives. Deontology, a framework focused on following moral rules, would argue that by diverting the trolley you are actively committing an immoral act, and so you should not. Utilitarianism, which seeks to maximise overall good, would instead prioritise the well-being of the majority, compelling you to divert the trolley and cause the least harm. Now imagine that the trolley is a self-driving car, or a predictive policing program, or even a prioritisation algorithm in a hospital deciding which patients should be treated first. Which decision should the AI make? And who is responsible for that decision - the developers, the legislators or the AI model itself?
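To make that contrast concrete, here is a minimal, purely illustrative sketch of how the two frameworks could be written as decision rules for a simplified trolley-style choice. It is not taken from any real self-driving or hospital system; the Outcome class and function names are hypothetical, and one strict reading of deontology is assumed.

```python
# Illustrative sketch only: two ethical frameworks encoded as decision rules
# for a simplified trolley-style choice. Names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    harmed_if_divert: int  # people harmed if the system actively intervenes
    harmed_if_stay: int    # people harmed if the system does nothing

def utilitarian_choice(o: Outcome) -> str:
    # Utilitarianism: pick whichever action minimises total harm,
    # regardless of whether the harm is caused actively or passively.
    return "divert" if o.harmed_if_divert < o.harmed_if_stay else "stay"

def deontological_choice(o: Outcome) -> str:
    # A strict deontological rule: never actively cause harm,
    # even if doing nothing leads to a worse overall outcome.
    return "stay" if o.harmed_if_divert > 0 else "divert"

classic_trolley = Outcome(harmed_if_divert=1, harmed_if_stay=5)
print(utilitarian_choice(classic_trolley))    # -> "divert"
print(deontological_choice(classic_trolley))  # -> "stay"
```

Two short functions, two defensible moral rules, two opposite answers - which is precisely the dilemma regulators and developers face.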

As AI intertwines itself with key issues of economics, politics and culture - factors unique to each country - it becomes impossible to build a one-size-fits-all framework. It becomes apparent that our approach towards AI ethics must be as fluid and evolving as the technology itself.


Featured images: Corin Hadley/ Procreate
