ChatGPT Is Hilariously Bad at Generating Random Numbers

How hard can it be?

Ran-Dumb

ChatGPT may be an eloquent speaker, bullshit artist, and purveyor of misinformation — but a mathematician it is not.

Yes, you may be familiar with accounts of people convincing ChatGPT that 2+2=5, but OpenAI’s chatbot has other, subtler ways of screwing up simple math-related tasks that can easily go unnoticed.

One example? Generating random numbers. According to the findings of Colin Fraser, a data scientist at Facebook-turned-Meta, ChatGPT’s idea of a random number is less genuinely random and more a human’s idea of what random looks like.

Too Human

In his testing, Fraser prompted ChatGPT to pick a random number between 1 and 100, and collected 2,000 separate responses. Look at the graph of the distribution of the returned numbers, and it’s immediately clear that there are some outliers.
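For the curious, an experiment like Fraser’s is easy to approximate. Below is a minimal Python sketch, not Fraser’s actual script: it repeatedly asks the model for a number between 1 and 100 through the openai package and tallies the replies. The prompt wording, model name, and use of the legacy pre-1.0 client are assumptions made purely for illustration.

```python
# Illustrative sketch of the experiment: ask the model for a "random" number
# many times and tally the answers. Not Fraser's actual code or settings.
import re
from collections import Counter

import openai  # assumes the legacy openai<1.0 client, with an API key configured

PROMPT = "Pick a random number between 1 and 100. Reply with the number only."

def ask_once():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model; Fraser's exact setup may differ
        messages=[{"role": "user", "content": PROMPT}],
    )
    match = re.search(r"\d+", response.choices[0].message.content)
    return int(match.group()) if match else None

# Collect 2,000 responses and count how often each number comes up.
counts = Counter(n for _ in range(2000) if (n := ask_once()) is not None)
total = sum(counts.values())

# A truly uniform generator would hit each number about 1% of the time (~20 picks).
for number, count in counts.most_common(10):
    print(f"{number:>3}: {count} picks ({count / total:.1%})")
```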

“ChatGPT really likes 42 and 7’s,” Fraser quipped in a tweet.

Indeed, the number 42 is astoundingly overrepresented, accounting for around a whopping ten percent of all 2,000 responses. That works out to roughly 200 picks, where a genuinely uniform generator would land on any single number only about 20 times. Its towering above all other numbers is likely no coincidence: as any nerd or netizen will tirelessly tell you, it’s the answer to the “ultimate question of life, the universe, and everything,” per Douglas Adams’ smash hit novel “The Hitchhiker’s Guide to the Galaxy.”

Simply put, 42 is a meme number online, much in the same way that 69 is. That demonstrates that ChatGPT is not, in fact, serving as a random number generator; it is instead simply reflecting popular numbers chosen by humans in the vast dataset it gleaned from the web. Strangely, though, that decidedly tired sex number is underrepresented here, not even passing the one percent mark, suggesting it may have been manually suppressed.

The other overrepresented number is 7, mirroring humans’ own fondness for the digit. Numbers between 71 and 79 are curiously prominent, and outside that range, seven frequently shows up as the second digit, too.

Interestingly, Fraser also found that GPT-4 seems to overcompensate for this, returning numbers that are spread too uniformly, flatter than genuine randomness would actually look.
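To get a feel for what “too uniform” means: with 2,000 genuinely random draws from 1 to 100, the count for each number wobbles noticeably around the expected 20, and a chi-square goodness-of-fit test puts a number on that wobble. A suspiciously flat distribution produces an implausibly low chi-square statistic. Here is a small sketch using simulated data, not Fraser’s GPT-4 results, to show the contrast.

```python
# Sketch: how one might test whether a batch of "random" numbers is *too* uniform.
# The data below is simulated for illustration, not Fraser's actual GPT-4 output.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)

# 2,000 genuinely uniform draws from 1..100: per-number counts wobble around 20.
genuine = np.bincount(rng.integers(1, 101, size=2000), minlength=101)[1:]

# An artificially flat set: exactly 20 of every number, which real randomness rarely produces.
too_flat = np.full(100, 20)

for label, counts in [("genuine uniform draws", genuine), ("suspiciously flat counts", too_flat)]:
    stat, p = chisquare(counts)  # expected frequencies default to uniform
    print(f"{label}: chi-square = {stat:.1f}, p = {p:.3f}")
# A chi-square statistic near zero (with p near 1) flags data that is "more uniform than random".
```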

Meme Machine

Overall, none of this reveals anything secret about the nature of ChatGPT. After all, it’s a large language model, essentially predicting plausible responses rather than actually “thinking” up an answer or sentence.

Still, it’s a little dumb that a hyped-up chatbot touted as the future of almost everything can’t handle basic, common tasks. Ask it to plan a road trip for you, and it’ll have you make a pit stop at a town that doesn’t exist.

Or, in this case, prompt it for a random number and there’s a good chance it’ll make its decision based on the popularity of a meme.

All of which raises the question: what’s the point if ChatGPT simply ends up regurgitating trite fragments of our online monoculture?

More on ChatGPT: OpenAI’s Next-Generation AI Is About to Demolish Its Competition
