MAKING CHATGPT SOCIALLY ACCEPTABLE WAS SOUL-DESTROYING WORK FOR HUMAN TRAINERS

Beneath ChatGPT’s ability to converse easily on an Internet’s worth of subjects lies the psychological wreckage of scores of Kenyan trainers: employed by a contract labor firm, squads of low-wage workers in the East African country spent thousands of hours teaching the chatbot not to talk about subjects such as bestiality, child rape, and torture.

ChatGPT can assemble information on virtually any topic from the vast trove of web text it was trained on. What it returns to the user, however, depends on the instructions, or “prompts,” it’s given. A poorly chosen or ill-phrased prompt could send early versions of the chatbot off into racist, misogynist, sexually perverse, or similarly offensive rants.

The chatbot is ready for polite company because those Kenyan workers spent months goading it into delivering the most vile and offensive responses it was capable of. ChatGPT’s creators were then able to build filters that screen such indiscretions out of the chatbot’s responses.
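How does red-teaming like this become a deployed safeguard? Below is a minimal sketch, not OpenAI’s internal pipeline: it calls OpenAI’s publicly documented Moderation endpoint, a trained classifier of the kind that labeled examples like these are used to produce. The `is_safe` and `guarded_reply` helpers are our own illustrative names, and the link between this specific labeling effort and that endpoint is an inference, not something the reporting confirms.

```python
# A minimal sketch (not OpenAI's internal pipeline): the public Moderation
# endpoint is a trained classifier of the kind such labeled data produces.
# Requires the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def is_safe(text: str) -> bool:
    """Return True if the moderation classifier does not flag the text."""
    result = client.moderations.create(input=text).results[0]
    return not result.flagged

def guarded_reply(prompt: str, draft_reply: str) -> str:
    """Screen both the user's prompt and the model's draft answer,
    refusing whenever either side trips the filter."""
    if not (is_safe(prompt) and is_safe(draft_reply)):
        return "Sorry, I can't help with that."
    return draft_reply
```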

The Kenyan contractors spent months wading through descriptions of rape, child sexual abuse, self-harm, bestiality, and various other forms of mayhem and violence, The Wall Street Journal reported.

The workers led ChatGPT through four successive refinements, each time using a different or more sophisticated method to look into its dark corners and blot them out.

“My experience in those four months was the worst experience I’ve ever had in working in a company,” Alex Kairu, one of the Kenyan workers, said in a WSJ interview.

One worker reported reviewing hundreds of passages a day describing people stabbing themselves or killing themselves using “unspeakable methods.” He began having nightmares and became socially isolated. When he sees a fork now, he told the WSJ, he sees a weapon.

Another worker was part of a team contracted to review 15,000 posts a month related to parents raping their children, children having forced sex with animals, sexual slavery, and similar horrors. He said the work destroyed his family and left him traumatized and suffering from anxiety and depression.

For their work, the reviewers were paid from $1.46 to $3.72 an hour. The wage was determined by “an internationally recognized methodology for determining a living wage,” the contract labor firm that employed them said.

OpenAI, ChatGPT’s creator, contracted to pay $12.50 an hour for the work, but that amount also had to cover pay for managers and for the psychological counselors the workers had access to.

One group of Kenyan contractors is suing Meta, Facebook’s parent company, after their work cleaning Meta’s AI required them to watch videos of beheadings, rapes, and suicides, among other acts.

In June, a Kenyan judge ruled that Meta is legally responsible for the treatment of its contract workers, even though the workers are employed by an intermediary contract labor firm. 

The ruling is expected to transform working conditions for such workers. The workers also have voted to unionize.

Cleaning an AI “is something that needs to get done,” Mark Sears, founder of CloudFactory, a firm that supplies laborers to clean chatbots, told the WSJ. “It’s just so unbelievably ugly.”

TRENDPOST: Developers are working hard to design self-cleaning AIs. Once proven, those algorithms will be hot commodities. Until then, humans’ mental health will be sacrificed on the altar of progress.
