OpenAI Confused by Why People Are So Impressed With ChatGPT

“Like, honestly, we don’t understand. We don’t know.”

Overwhelmed

Impressed by OpenAI’s viral chatbot, ChatGPT? Cool — but the folks over at OpenAI aren’t really sure why.

“It’s been overwhelming, honestly,” Jan Leike, leader of OpenAI’s alignment team, told the MIT Technology Review. “I would love to understand better what’s driving all of this — what’s driving the virality.”

“Like, honestly, we don’t understand,” he added. “We don’t know.”

Leike isn’t the only OpenAI-er who feels this way. Even company CEO Sam Altman has publicly disparaged ChatGPT in the press, calling it a “horrible product.”

Going Mainstream

Several other OpenAI figures — company cofounder John Schulman, policy researcher Sandhini Agarwal, and AI research scientist Liam Fedus — joined the chorus.

“I expected it to be intuitive for people, and I expected it to gain a following,” Schulman told MIT, “but I didn’t expect it to reach this level of mainstream popularity.”

“We were definitely surprised how well it was received,” mused Fedus, with Agarwal adding that “we work on these models so much, we forget how surprising they can be for the outside world sometimes.”

Hunk o’ Junk

Agarwal’s quip seems to hit the nail on the head. Though ChatGPT was only released a few months ago, the technology behind it has actually been around for some time now.

The large language model (LLM) it’s based on, GPT-3.5, and its predecessors have been publicly available for a while.

But the folks at OpenAI clearly weren’t able to predict the chaos that ensued following the public release of ChatGPT. After all, these language models are notoriously unpredictable, forcing the company to roll with the punches.

It’s “very difficult to really anticipate what the real safety problems are going to be with these systems once you’ve deployed them,” Leike told MIT. “So we are putting a lot of emphasis on monitoring what people are using the system for, seeing what happens, and then reacting to that.”

“This is not to say that we shouldn’t proactively mitigate safety problems when we do anticipate them,” he added. “But yeah, it is very hard to foresee everything that will actually happen when a system hits the real world.”

READ MORE: The inside story of how ChatGPT was built from the people who made it [MIT Technology Review]

More on OpenAI bashing ChatGPT: The CEO of OpenAI Says ChatGPT Is a “Horrible Product”
