By Oscar Zimmerman, Third Year, Physics with Scientific Computing
28 countries and the EU have signed the ‘Bletchley Declaration’, warning against the ‘catastrophic’ dangers of frontier artificial intelligence systems. Signatories include the US and China in what has been called an ‘incredible’ moment by Rishi Sunak.
More than 75 years ago, Alan Turing and his team of cryptanalysts at Bletchley Park made history, not only shortening the Second World War by years but, as an encore, laying the groundwork for the very first computers. Turing is widely deemed the father of computer science, so it is fitting that Bletchley was once again the setting for the next chapter of computing: the historic Global AI Safety Summit.
The summit, convened by Prime Minister Rishi Sunak, gathered together 28 countries (including China and the US) and the EU on 1 and 2 November. Its purpose was to ‘focus on how to best manage the risks from the most recent advances in AI’, as set out in the government’s introduction to the AI Safety Summit on GOV.UK.
Notable guests included US Vice President Kamala Harris, European Commission President Ursula von der Leyen and the controversial billionaire philanthropist Elon Musk.
Harris spoke about addressing ‘the full spectrum’ of AI, emphasising the US’s stake in all aspects of the evolving technology.
Von der Leyen said on the matter: ‘We are entering a completely different era. We are now at the dawn of an era where machines can act intelligently. My wish for the next five years is that we learn from the past, and act fast!’ The EU is currently in its final stages of passing its own AI Act, and is also considering establishing a European AI Office.
Meanwhile, China was represented by the Chinese vice-minister of Science and Technology, Wu Zhaohui, and gave the statement: ‘We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.’
The resulting ‘Bletchley Declaration’ was signed by the representatives of all the countries present, and is a clear and historic starting point for global AI control. The document, released the day after the summit concluded, states:
‘Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.’
The declaration focuses on ‘Frontier AI’: the most sophisticated systems, and the ones experts are most concerned about.
Despite these promising proclamations, no substantive policies came out of the summit, and many trade unions and civil society organisations have branded it a ‘missed opportunity’. Lending weight to these criticisms were some notable absences: US President Joe Biden, French President Emmanuel Macron and German Chancellor Olaf Scholz. Many argue that without these vital world leaders, the already insubstantial agreements and proclamations are all the more toothless.
Many have called the summit Sunak’s attempt to position the UK as a global leader in the AI discussion. Yet while the talks put the UK at the forefront on the global stage, the British government has been restrained when it comes to AI policy at home. Sunak’s position at the end of the summit was not to rush to regulate: ‘How can we write laws that make sense for something we don’t yet fully understand?’ So while President Biden announced an executive order requiring AI companies to assess national security risks when releasing their software, the government at home is purposefully sitting on its hands policy-wise, making no changes to existing laws and regulations for the time being.
Much of the UK’s leading AI research is conducted at its top universities, including right here in Bristol. The government has announced the launch of the AI Safety Institute, which will evaluate and test new models; regarding academic research, it states: ‘The Institute will establish partnerships with leading academics and civil society organisations in the UK and beyond…to leverage the expertise of the UK’s world-leading researchers.’
Nonetheless, the main result of the Global AI Safety Summit is surely a symbolic one: a first step towards setting global policy and restrictions. A second summit has already been scheduled for six months’ time in South Korea, with another in France a year from now.
Featured image: Flickr / UK Government
Do you think the UK can take the lead on AI?