According to a TIME investigation, OpenAI, the company behind ChatGPT, paid Kenyan workers less than $2 per hour to sift through tens of thousands of lines of text to help make its chatbot safer and more user-friendly.
OpenAI confirmed that Kenyan workers assisted in the development of a tool that tags problematic content. Its outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda, and India to label data for Silicon Valley clients such as Google, Meta, and Microsoft. Sama bills itself as an “ethical AI” company that has helped more than 50,000 people escape poverty.
The data labelers hired by Sama on behalf of OpenAI were paid between $1.32 and $2 per hour, depending on seniority and performance.
Despite their critical role in the development of ChatGPT, the workers faced difficult working conditions and low pay. According to TIME, one Kenyan worker who was in charge of reading and labeling text for OpenAI “suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child.”
While labeling and filtering toxic data from ChatGPT’s training dataset, workers were required to read graphic descriptions of disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.
An OpenAI spokesperson confirmed in a statement that Sama employees in Kenya helped develop a tool to detect toxic content, which was eventually integrated into ChatGPT. According to the statement, this work contributed to efforts to remove toxic data from training datasets used by tools such as ChatGPT.
The sources for this piece include an article in TIME.