Scientists Gave AI an “Inner Monologue” and Something Fascinating Happened

This model may “close the gap between language model and human-like reasoning capabilities,” researchers hope.

Therefore AI Am

If you give an AI an inner monologue, it apparently starts teaching itself to be smarter.

In a not-yet-peer-reviewed paper, researchers from Stanford and a group calling itself “Notbad AI” teamed up to create an AI model that pauses to “think” before spitting out answers, shows its work, and asks users to tell it which response is most accurate.

The team behind the Quiet Self-Taught Reasoner, or Quiet-STaR for short, wanted their model to not only be able to teach itself to reason — which they achieved in 2022 with the original Self-Taught Reasoner algorithm — but also to do so “quietly” before providing answers to prompts, thus operating like a human’s inner monologue that, ideally, runs before we speak.
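To make the "inner monologue" idea concrete, here is a minimal illustrative sketch, not the authors' actual code: a wrapper that asks a model to generate a hidden rationale first, then conditions the visible answer on that rationale. The `generate` function is a hypothetical stand-in for any language-model call, faked here so the sketch runs on its own.

```python
def generate(prompt: str) -> str:
    # Placeholder for a real language-model call. Faked here so the
    # sketch is self-contained and runnable.
    if prompt.startswith("Think step by step:"):
        return "7 apples minus 3 apples leaves 4 apples."
    return "4"

def answer_with_inner_monologue(question: str) -> tuple[str, str]:
    """Generate a hidden rationale first, then an answer conditioned on it."""
    # The "quiet" step: produce reasoning that the user never sees.
    thought = generate(f"Think step by step: {question}")
    # The visible step: answer with the hidden rationale in context.
    answer = generate(f"{question}\nReasoning: {thought}\nAnswer:")
    return answer, thought  # only `answer` would be shown to the user

reply, hidden = answer_with_inner_monologue(
    "If I have 7 apples and eat 3, how many remain?"
)
print(reply)
```

The actual Quiet-STaR method works at the token level inside the model's training loop rather than through prompting, but the division of labor is the same: reason privately, then speak.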

“Excitingly,” as Stanford’s Eric Zelikman enthused in an X-formerly-Twitter thread about the new model he helped produce, “self-teaching reasoning on diverse web text automatically improves other reasoning!”

If You Build It

To create this contemplative AI, the research team built Quiet-STaR on Mistral 7B, an open-source large language model (LLM) with seven billion parameters that, according to the Hugging Face AI community, is said to outperform the latest version of Meta’s Llama model.

Quiet-STaR was programmed, essentially, to show its work by giving reasoning for its outputs, and users of the model were then able to select which response was most accurate. As the paper notes, this approach made the model accurate 47.2 percent of the time, which isn’t particularly impressive but is an improvement over the 36.3 percent it achieved without the additional reasoning training.

The model still performed abysmally on math, getting only 10.9 percent of the questions right, but the pre-trained Quiet-STaR got just 5.9 percent right, meaning it nearly doubled its math prowess during training.

None of these results are blowing us away. But they’re intriguing because, to date, chatbots like OpenAI’s ChatGPT and Google’s Gemini have been terrible at common-sense reasoning. Quiet-STaR, the researchers propose in their paper, could lead to leaps that “close the gap between language model and human-like reasoning capabilities.”

Could that sort of thing be what OpenAI’s sitting on with its mysterious and shockingly similar-sounding Q* (pronounced “Q-star”) model? Only time will tell.

More on AI advances: State Department Report Warns of AI Apocalypse, Suggests Limiting Compute Power Allowed for Training
