Around 30 activists gathered near the entrance to OpenAI’s San Francisco office earlier this week, Bloomberg reports, calling for an AI boycott after the company announced it was working with the US military.
Last month, the Sam Altman-led company quietly removed a ban on “military and warfare” from its usage policies, a change first spotted by The Intercept.
Days later, OpenAI confirmed it was working with the US Defense Department on open-source cybersecurity software.
Holly Elmore, who helped organize this week’s OpenAI protest, told Bloomberg that the problem goes well beyond the company’s newfound willingness to work with the military.
“Even when there are very sensible limits set by the companies, they can just change them whenever they want,” she said.
OpenAI maintains that, despite its evident flexibility around its own rules, it still bans the use of its AI to build weapons or harm people.
During a Bloomberg talk at the World Economic Forum in Davos, Switzerland last month, OpenAI VP of global affairs Anna Makanju argued that its collaboration with the military was “very much aligned with what we want to see in the world.”
“We are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on,” an OpenAI spokesperson told The Register at the time.
OpenAI’s quiet policy reversal hasn’t sat well with organizers of this week’s demonstration.
Elmore leads US operations for a community of volunteers called PauseAI, which is calling for a ban on the “development of the largest general-purpose AI systems” due to their potential to become an “existential threat.”
And PauseAI isn’t alone in that. Even top AI executives have voiced concerns that AI could become a considerable threat to humanity, and recent polls have found that a majority of voters believe AI could accidentally cause a catastrophic event.
“You don’t have to be a genius to understand that building powerful machines you can’t control is maybe a bad idea,” Elmore told Bloomberg. “Maybe we shouldn’t just leave it up to the market to protect us from this.”
Altman, however, believes the key is to proactively develop the technology in a safe and responsible way, instead of opposing the concept of AI entirely.
“There’s some things in there that are easy to imagine where things really go wrong,” he said during the World Governments Summit in Dubai this week. “And I’m not that interested in the killer robots walking on the street direction of things going wrong.”
“I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong,” he added.
To Altman, who has clearly had enough of people calling for a pause on AI, it’s a very simple matter.
“You can grind to help secure our collective future or you can write Substacks about why we are going [to] fail,” he tweeted over the weekend.
More on OpenAI: Sam Altman Seeking Trillions of Dollars for New AI Venture