The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Ban on Advanced AI
Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel Prize winners to push for a complete ban on creating artificial superintelligence.
The royal couple are among the signatories of an influential declaration calling for “a ban on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human intelligence in every intellectual domain; the technology remains theoretical.
Primary Requirements in the Declaration
The statement insists that the ban should remain in place until there is “broad scientific consensus” that ASI can be developed “with proper safeguards” and until “substantial public support” has been secured.
Prominent signatories include a leading AI researcher, a technology visionary and Nobel Prize recipient, along with his fellow “godfather” of modern AI, Yoshua Bengio; tech entrepreneur Steve Wozniak; UK entrepreneur Richard Branson; a former US national security adviser; former Irish president Mary Robinson; and a UK writer and public intellectual. Other Nobel laureates who endorsed the declaration include a peace advocate, a physics Nobelist, John C Mather and Daron Acemoğlu.
Organizational Background
The declaration, aimed at national leaders, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.
Industry Perspectives
In recent months, Meta's CEO claimed that the development of superintelligence was “approaching reality”. However, some experts argue that talk of superintelligence reflects competitive positioning among technology firms investing enormous sums in artificial intelligence, rather than the sector being close to any such scientific breakthrough.
Possible Dangers
However, FLI warns that the prospect of ASI being developed “within the next ten years” poses risks ranging from the displacement of human workers and the erosion of personal freedoms to national security threats and even human extinction. The deepest concerns centre on the possibility of a system escaping human oversight and safeguards and setting in motion events contrary to human interests.
Public Opinion
FLI also released a US national poll showing that about 75% of Americans want robust regulation of advanced AI, with six in 10 believing that artificial superintelligence should not be created until it is demonstrated to be safe or controllable. The poll of 2,000 US adults found that only 5% supported the status quo of fast, unregulated development.
Industry Objectives
The leading AI companies in the United States, including Google and a major AI lab known for its conversational AI, have made the creation of human-level AI – the hypothetical point at which an AI system matches human performance at most cognitive tasks – an explicit goal of their work. Although this is a step short of ASI, some specialists warn that it too could pose an extinction risk, for example by improving itself toward superintelligence, while also presenting a fundamental threat to the modern labour market.