
A new AI startup formed from the cast-offs of some of the industry’s biggest, most ethically dubious players insists it’s building the tech with humanity’s best interests in mind.
Called Humans& — that’s its actual name, not a typo — the outfit’s founders say their vision is to build AI that empowers people instead of replacing their jobs, the New York Times reports, such as collaboration software that works like an “AI version of an instant messaging app.”
This nonpareil vision, according to the reporting, has helped Humans& raise $480 million in seed funding despite launching just three months ago. The round values the company at $4.48 billion, and its investors include Nvidia, Jeff Bezos, and Google Ventures: moral paragons of human empowerment all.
One of the founders is Andi Peng, formerly a research scientist at Anthropic, the company behind the chatbot Claude, whose CEO Dario Amodei casually predicted that the tech he was helping develop would replace over half of all entry-level white collar jobs. At some point, Peng stopped seeing eye to eye philosophically with her former employer.
“Anthropic is training its model to work autonomously. It loved to highlight how its models churned for eight hours, 24 hours, 50 hours by itself to complete a task,” Peng told the NYT. “That was never my motivation. I think of machines and humans as complementary.”
Two other founders, Eric Zelikman and Yuchen He, come from Elon Musk’s xAI, where they helped build Grok, née “MechaHitler,” a chatbot which so regularly thrusts itself into controversy that it somehow barely registered as newsworthy earlier this month when it generated thousands of nonconsensual — and likely illegal — AI nudes of women and children. Now manumitted from the shackles of Musk leadership, Zelikman is imagining an AI beyond chatbots like Grok.
“AI has enormous potential to allow people to do more together,” Zelikman told the NYT. “The current paradigm — questioning and answering — is not going to get us there.”
Generally, tech companies don’t say they’re building AI to disenfranchise the working human, dressing up what possibly lies ahead with euphemisms like “boosting productivity.” Still, they do acknowledge the tech’s disruptive effects, if only as a qualifier for how awesome AI will be in the long run. Along with Anthropic’s Amodei, OpenAI CEO Sam Altman predicts that AI will wipe out entire categories of labor. Microsoft chief Satya Nadella expresses anxiety that AI could make his entire company obsolete even as he brags about 30 percent of the company’s code being written with AI. Meanwhile, these companies’ customers openly boast about replacing employees with their AI models.
The barely disguised soullessness with which AI leaders operate has, perhaps, provided a window for entrepreneurs to peddle a more humane message, whether in good faith or not, naively or cynically.
“There are some pretty clear principles that designers are beginning to take that favor human-centered approaches,” Ben Shneiderman, a computer science professor at the University of Maryland and an advocate of “human centric” AI, told the NYT. “It goes back to the fundamental principle of ensuring human control.”