The Duke and Duchess of Sussex Join Tech Visionaries in Calling for Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have teamed up with AI experts and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that calls for “a prohibition on the creation of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human cognitive abilities in every intellectual domain, though this technology has not yet been developed.

Primary Requirements in the Statement

The statement says the prohibition should remain in place until there is “broad scientific consensus” on creating superintelligence “with proper safeguards” and until “strong public buy-in” has been achieved.

Prominent figures who endorsed the statement include the Nobel laureate Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; the British business magnate who founded Virgin; a former US national security adviser; a former Irish president; and a prominent British author and public intellectual. Other Nobel laureates who signed include a peace prize winner, the physicist John C Mather, and an economist.

Behind the Movement

The declaration, aimed at governments, tech firms and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that earlier demanded a pause in advancing strong artificial intelligence in 2023, shortly after the emergence of ChatGPT made artificial intelligence a worldwide public talking point.

Industry Perspectives

In recent months, Meta's chief executive, whose social media giant is one of the major AI developers in the United States, stated that the development of superintelligence was “approaching reality”. Nevertheless, some experts have suggested that talk of superintelligence reflects competitive positioning among tech companies that have poured hundreds of billions of dollars into artificial intelligence in recent years, rather than the industry being close to any such technical breakthrough.

Possible Dangers

Nonetheless, the organization states that the possibility of ASI being achieved “in the coming decade” carries numerous risks, ranging from the elimination of human jobs and the erosion of personal freedoms to national security threats and even human extinction. The deepest concerns about AI centre on a system's potential ability to escape human oversight and protective measures and to act against human welfare.

Citizen Sentiment

The institute released a US national poll showing that approximately three-quarters of US citizens want strong oversight of sophisticated artificial intelligence, with 60% believing superhuman AI should not be created until it is proven safe or controllable. The poll also found that only a small fraction of American respondents backed the status quo of fast, unregulated development.

Corporate Goals

The top artificial intelligence firms in the US, including a major AI lab behind a leading conversational AI and Google, have made the development of artificial general intelligence – the theoretical state where AI matches human intelligence across many intellectual activities – a stated objective of their research. While this falls short of superintelligence, some experts warn it too could pose an extinction threat, for instance by enhancing its own capabilities until it reaches superintelligence, while also carrying an implicit threat to the contemporary workforce.

Scott Horn

A passionate tech writer and software engineer with over a decade of experience in the industry.