WASHINGTON, Sept 13 (Reuters) – American technology leaders including Tesla (TSLA.O) CEO Elon Musk, Meta Platforms (META.O) CEO Mark Zuckerberg and Alphabet (GOOGL.O) CEO Sundar Pichai met with lawmakers on Capitol Hill on Wednesday for a closed-door forum focused on regulating artificial intelligence.
Lawmakers are grappling with how to mitigate the dangers of the emerging technology, which has experienced a boom in investment and consumer popularity since the release of OpenAI’s ChatGPT chatbot.
“It’s important for us to have a referee,” Musk told reporters, adding that a regulator was needed “to ensure that companies take actions that are safe and in the general interest of the public.”
New Jersey Senator Cory Booker praised the discussion, saying all the participants agreed “the government has a regulatory role” but crafting legislation would be a challenge.
Lawmakers want safeguards against potentially dangerous deepfakes such as bogus videos, as well as against election interference and attacks on critical infrastructure.
“Today, we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass,” U.S. Senate Majority Leader Chuck Schumer, a Democrat, said in opening remarks. “Congress must play a role, because without Congress we will neither maximize AI’s benefits, nor minimize its risks.”
Other attendees included Nvidia (NVDA.O) CEO Jensen Huang, Microsoft (MSFT.O) CEO Satya Nadella, IBM (IBM.N) CEO Arvind Krishna, former Microsoft CEO Bill Gates, and AFL-CIO labor federation President Liz Shuler.
Schumer, who discussed AI with Musk in April, said attendees would talk “about why Congress must act, what questions to ask, and how to build a consensus for safe innovation.”
In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society.
This week, Congress is holding three separate hearings on AI. Microsoft President Brad Smith told a Senate Judiciary subcommittee on Tuesday that Congress should “require safety brakes for AI that controls or manages critical infrastructure.”
Republican Senator Josh Hawley questioned Wednesday’s closed-door session, saying Congress has failed to pass any meaningful tech legislation. “I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money,” Hawley said.
Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and generate images whose artificial origins are virtually undetectable.
Adobe (ADBE.O), IBM, Nvidia and five other companies on Tuesday said they had signed President Joe Biden’s voluntary AI commitments, which call for steps such as watermarking AI-generated content.
The commitments, which were announced in July, are aimed at ensuring AI’s power is not used for destructive purposes. Google, OpenAI and Microsoft signed on in July. The White House has also been working on an AI executive order.
Reporting by David Shepardson; additional reporting by Mike Stone; editing by Jonathan Oatis and Rosalba O’Brien