OpenAI used Kenyan workers paid less than $2 an hour to make ChatGPT less toxic [EN]

Billy Perrigo, writing in TIME, describes the vast content-filtering and labeling work carried out on OpenAI's behalf by Kenyan workers paid less than $2 an hour, hired through Sama, a San Francisco-based outsourcing firm.

OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

The workers interviewed by TIME described their job as "traumatizing", to the point that Sama ended its work for OpenAI eight months early.

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

According to the workers interviewed, another factor also played a part in the company's decision.

Sama employees say they were given another reason for the cancellation of the contracts by their managers. On Feb. 14, TIME published a story titled Inside Facebook’s African Sweatshop. The investigation detailed how Sama employed content moderators for Facebook, whose jobs involved viewing images and videos of executions, rape and child abuse for as little as $1.50 per hour. Four Sama employees said they were told the investigation prompted the company’s decision to end its work for OpenAI. (Facebook says it requires its outsourcing partners to “provide industry-leading pay, benefits and support.”)
