Inside the Harsh Reality Powering Generative AI’s Rise

Behind the sleek interfaces of generative AI systems lies a hidden workforce enduring low wages and distressing tasks. This unseen army of human data labellers spends endless hours annotating images, text, and video to teach AI how to see, read, and respond like humans. Their invisible labor powers technologies like ChatGPT, self-driving cars, and automated content moderation systems, yet most remain underpaid and unprotected.
The Hidden Workforce Behind AI
For generative AI to function smoothly, people must first prepare and label massive datasets. In Kenya, 30-year-old Ephantus Kanyugi describes the grim side of this work. “You spend your whole day looking at dead bodies and crime scenes,” he said, recalling how he and colleagues had to outline wounds on crime scene photos to train AI systems that analyze medical or forensic data.
Kanyugi, vice president of Kenya’s 800-member Data Labelers Association (DLA), has been part of the industry since 2018. He and his peers plan to release a code of conduct this month to demand fair pay, better working conditions, and mental health support from major labeling platforms.
Ghost Workers of the Digital Age
Millions of workers worldwide quietly feed data to AI systems without legal protections or recognition. “We’re like ghosts. No one knows we exist,” said Venezuelan worker Oskarina Fuentes, who lives in Colombia. She juggles five data-labelling platforms, earning as little as five cents per task and rarely more than 25 cents.
Her labor, like that of countless others, enables AI to recognize pedestrians, generate realistic text, and detect harmful content. According to Grand View Research, the global data-labelling industry was worth $3.8 billion in 2024 and could surpass $17 billion by 2030.
A System Built on Human Suffering
The work, however, often exacts a steep human toll. Many workers face psychological distress from reviewing violent or disturbing content. Physical ailments like eye strain and back pain are common due to long working hours and lack of ergonomic support.
Kanyugi calls the conditions "modern slavery." He says many laborers put in up to 20-hour days only to earn a few dollars, and sometimes receive no payment at all.
Antonio Casilli, a sociologist at France’s Institut Polytechnique, found that most “click workers” are between 18 and 30 years old, often highly educated but trapped in low-wage countries. Despite their expertise, they earn a fraction of what AI engineers or developers make in the West.
Giants Behind the Curtain
Much of this labor occurs through layers of subcontracting. US-based company Scale AI is one of the biggest players, with clients including OpenAI, Microsoft, Meta, and even the US Department of Defense. Meta recently invested $14 billion in Scale AI and hired its co-founder Alexandr Wang to lead its AI division.
Scale’s subsidiary Outlier hires experts in specialized fields—like biology or coding—for $30 to $50 per hour. But another arm, Remotasks, pays as little as one cent per job, even when tasks take hours. Workers accuse these platforms of exploiting loopholes and hiding behind opaque policies.
Legal Battles and Mental Health Struggles
Scale AI is now facing several lawsuits in the United States. Workers allege unpaid wages, misclassification as contractors, and exposure to traumatic material. Some said they had to chat with AI bots about topics such as “how to murder someone” or “how to poison a person.”
While Scale claims to provide mental health resources and fair pay, workers argue these measures are insufficient. The company insists all employees are warned before engaging with sensitive content and can opt out anytime.
The Fight for Recognition
The DLA in Kenya is preparing legal action against Remotasks for cutting off hundreds of workers without paying them. Similar disputes have surfaced in Latin America and Southeast Asia. Many workers sign non-disclosure agreements that prevent them from identifying their employers or sharing grievances publicly.
Meanwhile, in the United States, 250 data annotators working for Google’s subcontractor GlobalLogic were fired after demanding better pay and benefits. Former employee Andrew Lauzon said the company “just wants docile annotators.” He had joined the Alphabet Workers Union to campaign for sick leave and affordable healthcare.
Growing Calls for Accountability
Advocates say it’s time for Big Tech to face scrutiny. A recent UNI Global Union study argued that “Silicon Valley cannot build the future on disposable labor.” Union leaders are urging governments to enforce labor standards for AI supply chains.
However, even Europe’s new AI regulations fail to mention “click workers.” French MEP Leila Chaibi noted that while EU laws require monitoring of human rights compliance, many member states are slow to enforce them.
The Road Ahead
Despite the hardships, demand for human annotators is only growing. AI systems still rely on people to refine outputs, verify facts, and judge context. “If you’re a carpenter or plumber, there’s a union and a minimum wage,” said 54-year-old Spanish worker Nacho Barros. “This job deserves the same recognition.”
The irony is striking: generative AI, celebrated for mimicking human intelligence, still depends on vast human effort. Yet the people teaching machines to think are left struggling for visibility, fair pay, and dignity.
The invisible workforce behind AI may remain hidden — but without them, the future of intelligent technology would simply not exist.