OpenAI’s AGI Czar Quits, Saying the Company Isn’t Ready For What It’s Building

“The world is also not ready.”

Future Shock

OpenAI’s researcher in charge of making sure the company (and the world) is prepared for the advent of artificial general intelligence (AGI) has resigned — and is warning that nobody is ready for what’s coming next.

In a post on his personal Substack, the firm’s newly resigned AGI readiness czar Miles Brundage said quitting his “dream job” after six years has been difficult. He says he’s doing so because he feels a great responsibility regarding the purportedly human-level artificial intelligence he believes OpenAI is ushering into existence.

“I decided,” Brundage wrote, “that I want to impact and influence AI’s development from outside the industry rather than inside.”

When it comes to being prepared to handle the still-theoretical tech, the researcher was unequivocal.

“In short, neither OpenAI nor any other frontier lab is ready,” he wrote, “and the world is also not ready.”

Levels and Levels

After that bold declaration, Brundage went on to say that he’s shared his outlook with OpenAI’s leadership. He added, for what it’s worth, that he thinks “AGI is an overloaded phrase that implies more of a binary way of thinking than actually makes sense.”

Instead of there being some before-and-after AGI framework, the researcher said that there are, to quote many a hallucinogen enthusiast, levels to this shit.

Indeed, Brundage said he was instrumental in the creation of OpenAI’s five-step scale of AI/AGI levels that was leaked to Bloomberg over the summer. On that scale, which ends with AI that can “do the work of an organization,” OpenAI believes the world is currently on the cusp of level two, which would be characterized by AI capable of human-level reasoning.

All the same, Brundage insists that both OpenAI and the world at large remain unprepared for the next-generation AI systems being built.

Notably, Brundage still believes that while AGI can benefit all of humanity, it won’t automatically do so. Instead, the humans in charge of making it — and regulating it — have to go about doing so deliberately. That caveat suggests that he may not think OpenAI is being sufficiently deliberate in how it approaches AGI stewardship.

Brundage says that with his exit, OpenAI is reassigning members of its AGI readiness team to other groups within the organization. That dissolution comes less than six months after the company kiboshed its AI safety team — which doesn’t exactly bode well as this latest big-name resignation shakes up its leadership.

More on OpenAI: AI Researcher Slams OpenAI, Warns It Will Become the “Most Orwellian Company of All Time”
