
Summary

- Attendees agree need for pre-release model testing
- Govts including China signed declaration on Wednesday
- They agreed to work together to tackle AI safety
- Sunak to meet Elon Musk after summit concludes

BLETCHLEY PARK, England, Nov 2 (Reuters) – Leading AI developers agreed to work with governments to test new frontier models before they are released, to help manage the risks of the rapidly developing technology, in a potentially landmark achievement at the UK’s artificial intelligence summit.

Some tech and political leaders have warned that AI poses huge risks if left uncontrolled, ranging from the erosion of consumer privacy to danger to humans and the possibility of a global catastrophe. These concerns have sparked a race by governments and institutions to design safeguards and regulation.

At an inaugural AI Safety Summit at Bletchley Park, home of Britain’s World War Two code-breakers, political leaders from the United States, European Union and China agreed on Wednesday to share a common approach to identifying risks and ways to mitigate them.

On Thursday, British Prime Minister Rishi Sunak said the United States, EU and other “like-minded” countries had also agreed with a select group of companies working at AI’s cutting edge on the principle that models should be rigorously assessed before and after they are deployed.

Yoshua Bengio, often described as a "godfather of AI", will help deliver a "State of the Science" report to build a shared understanding of the capabilities and risks ahead.

“Until now the only people testing the safety of new AI models have been the very companies developing it,” Sunak said in a statement. “We shouldn’t rely on them to mark their own homework, as many of them agree.”

THE WAY FORWARD

The summit has brought together around 100 politicians, academics and tech executives to plot a way forward for a technology that could transform the way companies, societies and economies operate, with some hoping to establish an independent body to provide global oversight.

In a first for Western efforts to manage AI’s safe development, a Chinese vice minister joined other political leaders on Wednesday at the summit, focused on highly capable general-purpose models called “frontier AI”.

Wu Zhaohui, China’s vice minister of science and technology, signed a “Bletchley Declaration” on Wednesday but China was not present on Thursday and did not put its name to the agreement on testing.

Sunak had been criticised by some lawmakers in his own party for inviting China, after many Western governments reduced their technological cooperation with Beijing, but Sunak said any effort on AI safety had to include its leading players.

He also said it showed the role Britain could play in bringing together the three big economic blocs of the United States, China and the European Union.

“It speaks to our ability to convene people, to bring them together,” Sunak said at a press conference. “It wasn’t an easy decision to invite China, and lots of people criticised me for it, but I think it was the right long-term decision.”

Representatives of Microsoft-backed OpenAI, Anthropic, Google DeepMind, Microsoft (MSFT.O), Meta (META.O) and xAI attended sessions at the summit on Thursday, alongside leaders including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General António Guterres.

The EU’s von der Leyen said complex algorithms could never be exhaustively tested, so “above all else, we must make sure that developers act swiftly when problems occur, both before and after their models are put on the market”.

The final words on AI from the two days will be a conversation between Sunak and billionaire entrepreneur Elon Musk, due to be broadcast later on Thursday on Musk’s X, the platform previously known as Twitter.

According to two sources at the summit, Musk told fellow attendees on Wednesday that governments should not rush to roll out AI legislation.

Instead, he suggested companies using the technology were better placed to uncover problems, and they could share their findings with lawmakers responsible for writing new laws.

“I don’t know what necessarily the fair rules are, but you’ve got to start with insight before you do oversight,” Musk told reporters on Wednesday.

Reporting by Paul Sandle and Martin Coulter; Additional reporting by William James and Jan Strupczewski; Editing by Emelia Sithole-Matarise and Susan Fenton
