An artificial intelligence dubbed Claude, developed by AI research firm Anthropic, earned a “marginal pass” on a recent blindly graded law and economics exam at George Mason University, according to a blog post by economics professor Alex Tabarrok.
It’s yet another sign that AI capability is growing explosively, and that OpenAI’s ChatGPT isn’t the only model worth watching.
Anthropic — which according to Insider secured funding from disgraced crypto exec Sam Bankman-Fried and his alleged romantic partner, former Alameda Research CEO Caroline Ellison — made a big splash with its new AI earlier this week.
Anthropic started quietly testing Claude late last year, and it’s already been hailed as a worthy “rival to ChatGPT,” OpenAI’s AI text generator that has taken the internet by storm.
For now, the company is limiting public access to its AI, testing it only via a closed beta.
But Claude is already impressing academics with its ability to come up with strikingly thorough answers to complex prompts.
In response to one law exam question highlighted by Tabarrok, Claude generated believable recommendations for how to change intellectual property law.
“Overall, the goal should be to make IP laws less restrictive and make more works available to the public sooner,” the AI concluded. “But it is important to still provide some incentives and compensation to creators for a limited period.”
Tabarrok concluded that “Claude is a competitor to GPT-3 and in my view an improvement,” because it generated a “credible response” that is “better than many human responses.”
To be fair, others were less impressed with Claude’s efforts.
“To be honest, this looks more like Claude simply consumed and puked up a McKinsey report,” the Financial Times wrote in a piece on Tabarrok’s findings.
While Claude and ChatGPT are similar in terms of user experience, the models were trained in different ways, especially when it comes to ensuring that things don’t get out of hand.
Claude makes use of “constitutional AI,” as described in a yet-to-be-peer-reviewed paper shared by Anthropic researchers last month.
“We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs,” they wrote. “The process involves both a supervised learning and a reinforcement learning phase.”
“Often, language models trained to be ‘harmless’ have a tendency to become useless in the face of adversarial questions,” the company wrote in a December tweet. “Constitutional AI lets them respond to questions using a simple set of principles as a guide.”
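At a high level, the supervised phase the paper describes amounts to a critique-and-revise loop: the model is asked to evaluate its own answer against a written principle and then rewrite it, with no human labeling of harmful outputs. The sketch below is a minimal, hypothetical Python rendering of that loop; `query_model` is a stand-in for any text-generation API, not Anthropic’s actual interface, and the principles shown are paraphrased examples rather than the real constitution.

```python
# Rough sketch of the "constitutional AI" supervised phase: the model
# critiques and revises its own answers against a fixed list of written
# principles, with no human labels identifying harmful outputs.
# NOTE: query_model is a hypothetical placeholder, not a real API, and
# the principles below are paraphrased examples for illustration only.

PRINCIPLES = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that is most honest about its own limitations.",
]

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an HTTP API); returns text."""
    raise NotImplementedError("wire up a model of your choice here")

def constitutional_revision(question: str) -> str:
    answer = query_model(question)
    for principle in PRINCIPLES:
        # Ask the model to critique its own answer against one principle...
        critique = query_model(
            f"Critique this answer against the principle: {principle}\n\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        # ...then rewrite the answer to address that critique.
        answer = query_model(
            f"Rewrite the answer to address the critique.\n\n"
            f"Critique: {critique}\nOriginal answer: {answer}"
        )
    return answer  # revised answers become supervised fine-tuning data
```

Per the paper, the second, reinforcement learning phase works along similar lines, but uses the model’s own principle-guided preferences between candidate answers as the reward signal instead of human feedback.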
Enterprise AI app developer Scale tested Claude and ChatGPT head-to-head and found that “overall, Claude is a serious competitor to ChatGPT, with improvements in many areas.”
Scale also found that Claude is “more fun than ChatGPT,” even though it was “more inclined to refuse inappropriate requests” thanks to its constitutional AI.
“Its ability to write coherently about itself, its limitations, and its goals seem to also allow it to more naturally answer questions on other subjects,” Scale wrote in its blog, pointing out that ChatGPT still held the edge in code generation.
In short, Anthropic isn’t messing around with its new AI chatbot and could offer OpenAI some very real competition.
READ MORE: An AI rival to ChatGPT passed a university level law and economics exam, and did better than many humans, professor says [Insider]
More on AI: SEO Spammers Are Absolutely Thrilled Google Isn’t Cracking Down on CNET’s AI-Generated Articles