
Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation (APEC) CEO Summit in San Francisco, California, U.S. November 16, 2023. REUTERS/Carlos Barria/File Photo

LONDON, Nov 21 (Reuters) – As the European Union edges closer to passing a wide-ranging set of laws governing artificial intelligence, lawmakers and experts say the surprise ousting of OpenAI CEO Sam Altman underscores the need for strict rules.

Altman, co-founder of the startup that last year kicked off the generative AI boom, was abruptly fired by OpenAI’s board last week, sending shockwaves through the tech world and prompting employees to threaten mass resignation from the company.

Across the Atlantic, the European Commission, the European Parliament and the EU Council have been hashing out the fine print of the AI Act, a sweeping set of laws that would require some companies to complete extensive risk assessments and make data available to regulators.

In recent weeks, talks have hit stumbling blocks over the extent to which companies should be allowed to self-regulate.

Brando Benifei, one of two European Parliament lawmakers leading negotiations on the laws, told Reuters: “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft (MSFT.O) shows us that we cannot rely on voluntary agreements brokered by visionary leaders.

“Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society.”

On Monday, Reuters reported that France, Germany and Italy had reached an agreement on how AI should be regulated, a move expected to accelerate negotiations at the European level.

The three governments support “mandatory self-regulation through codes of conduct” for those using generative AI models, but some experts said this would not be enough.

Alexandra van Huffelen, Dutch minister for digitalisation, told Reuters the OpenAI saga underscored the need for strict rules.

She said: “The lack of transparency and the dependence on a few influential companies in my opinion clearly underlines the necessity of regulation.”

Meanwhile, Gary Marcus, an AI expert at New York University, wrote on social media platform X: “We can’t really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted.

“Please don’t gut the EU AI Act; we need it now more than ever.”

Reporting by Martin Coulter and Supantha Mukherjee; Editing by Susan Fenton



Supantha leads the European Technology and Telecoms coverage, with a special focus on emerging technologies such as AI and 5G. He has been a journalist for about 18 years. He joined Reuters in 2006 and has covered a variety of beats ranging from the financial sector to technology. He is based in Stockholm, Sweden.
