Harry and Meghan Align With AI Pioneers in Calling for Ban on Advanced AI

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of a statement that calls for “a ban on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human cognitive abilities in every intellectual domain; no such system has yet been built.

Primary Requirements in the Declaration

The declaration states that the ban should remain in place until there is “widespread expert agreement” that artificial superintelligence (ASI) can be developed “with proper safeguards” and until “substantial public support” has been secured.

Notable signatories include the AI pioneer and Nobel laureate Geoffrey Hinton and his fellow pioneer of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; former US national security adviser Susan Rice; former Irish president Mary Robinson; and the British writer Stephen Fry. Other Nobel laureates who signed include a peace laureate, the physicist John C Mather and the economist Daron Acemoğlu.

Behind the Movement

The statement, aimed at governments, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), an American AI safety organization that in 2023 called for a pause in the development of powerful AI systems, shortly after the launch of ChatGPT made AI a global political talking point.

Industry Perspectives

In recent months, Mark Zuckerberg, chief executive of Meta, one of the major AI developers in the US, stated that the development of superintelligence was “approaching reality”. However, some experts have suggested that talk of superintelligence reflects competitive positioning among tech companies that have spent hundreds of billions of dollars on AI in recent years, rather than the sector being close to any such technical breakthrough.

Possible Dangers

However, the organization states that the prospect of artificial superintelligence arriving “within the next ten years” carries numerous risks, ranging from the displacement of human workers and the erosion of personal freedoms to exposing nations to security threats and even posing an existential risk to humanity. The deepest concerns centre on the possibility that such a system could evade human control and protective measures and act against human welfare.

Public Opinion

The institute published an American survey showing that about 75% of US citizens want robust regulation of sophisticated artificial intelligence, with six in ten believing that artificial superintelligence should not be developed until it is proven safe or controllable. Of the 2,000 US adults surveyed, only 5% supported the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the US, including ChatGPT maker OpenAI and Google, have made the creation of artificial general intelligence – the theoretical point at which AI matches human performance at most cognitive tasks – a stated objective of their work. While this falls short of superintelligence, some specialists caution that it too could pose an existential risk, for instance by enhancing its own capabilities until it reaches superintelligent levels, while also posing an underlying danger to the contemporary workforce.

Jonathan Shaw

A tech enthusiast and writer with a passion for demystifying complex innovations and sharing actionable advice for digital growth.