“It has destroyed me completely.”
Gut-Wrenching
Four former content moderators have filed a petition with the Kenyan government asking for an investigation into the content moderation company Sama, on the grounds that workers were subjected to exploitative and harmful conditions during an eight-month contract with OpenAI between 2021 and 2022, The Guardian reports.
In the filing, per the Guardian, the former ChatGPT moderators say they weren’t properly warned about the brutality of the content they would be tasked with reviewing, and that they were then paid between $1.46 and $3.74 an hour to review “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest” during the training of OpenAI products.
Then, when Sama’s contract with OpenAI ended suddenly, the workers say they were given no support to deal with the lasting psychological effects of their moderation labor.
“It has really damaged my mental health,” 27-year-old Mophat Okinyi, one of the four petitioners, told the Guardian of his experience doing moderation work for the OpenAI-contracted firm Sama. After reading graphic texts about rape, Okinyi told the newspaper, he became paranoid and withdrawn; his pregnant wife ultimately left him, saying he was a changed man.
“I lost my family,” Okinyi told the Guardian.
Invisible Human Infrastructure
Twenty-eight-year-old petitioner Alex Kairu told the Guardian a similar story: as a result of his ChatGPT moderation work, his relationship with his wife suffered, and he was forced to move back in with his parents.
“It has destroyed me completely,” Kairu told the newspaper.
Specifically, the former ChatGPT moderators are asking for written government regulation regarding the outsourcing of “harmful and dangerous technology work” to Kenya, according to the Guardian. They’re also asking that the government rewrite existing legislation to “include the exposure to harmful content as an occupation hazard,” and, per the newspaper, are calling for an investigation into the country’s ministry of labor for failing to protect Kenya’s youth against abuse by outsourcing firms.
For its part, Sama — which has also worked with big tech players including Facebook, Google, and Microsoft — has generally denied allegations of wrongdoing, arguing that it offered workers living wages and wellness support. It also maintains that after the contract with OpenAI suddenly ended, employees were offered roles in other moderation projects.
But this isn’t Sama’s first time in hot water, and if nothing else, let this be a reminder that the burgeoning AI industry is built on a vast infrastructure of human labor, much of it distressing and exploitative. Contractors need to do better, and so, most importantly, do the billion-dollar companies like OpenAI that hire them.
More on Sama and OpenAI: OpenAI Apparently Paid People in the Developing World $2/Hour to Look at the Most Disturbing Content Imaginable