Miles Standish
British Wire Exclusive
The global race to build smarter AI relies on a hidden workforce: millions of workers in Kenya, the Philippines, and India who label data, moderate content, and train algorithms for pennies. These workers toil in exploitative conditions, subjected to psychological trauma, low wages, and job insecurity. The scandal is not merely an ethical lapse: it is the structural failure of a system that externalises costs onto vulnerable populations while tech giants reap billions.
At the heart of this labour chain is a phenomenon known as “ghost work.” Companies like OpenAI, Google, and Meta subcontract data annotation and content moderation to third-party vendors such as Sama, Appen, and Lionbridge. These vendors hire workers in low-income countries, often through digital labour platforms, paying piece-rate wages that equate to less than $2 per hour. A single labelling task – drawing boxes around objects in images or transcribing audio – pays a few cents. Workers must complete thousands of tasks daily to earn a meagre living.
The psychological toll is severe. Content moderators who review graphic violence, child abuse, and hate speech for AI training often develop PTSD. In Nairobi, former Sama employees reported that they were given minimal therapy and pressured to meet quotas. One worker told The British Wire: "You survive by blocking it out. But then you see it at night." Sama, which withdrew from content moderation work after a scandal over the conditions of the Kenyan workers it employed on behalf of OpenAI, insists it provides mental health support. But workers say the support is insufficient, and the pay fails to compensate for the harm.
Beyond trauma, the system traps workers in a cycle of precarious employment. Most are classified as independent contractors, denied benefits, sick leave, and overtime. Platforms like Amazon Mechanical Turk and Appen treat them as "flexible" labour, but this flexibility is a one-way street: workers can be deactivated without notice. When OpenAI's contract with Sama to filter violent content for ChatGPT ended, workers were laid off without warning and received no severance. The tech giant declined to comment on how it vets the labour standards of its contractors.
The structural failure lies in the lack of regulation. The UK's Online Safety Bill and the EU's AI Act mainly address algorithmic harms, not labour abuses in data supply chains. Tech companies argue that outsourcing creates jobs in developing economies. But this is a smokescreen. The jobs are low-skilled, dead-end, and often harmful. In the Philippines, workers label explicit images for AI moderation, earning $1.50 per hour while facing routine harassment from management. The country's Department of Labor and Employment has investigated, but struggles to enforce standards because much of the work is remote.
The exploitation also carries a hidden cost to AI quality. Overworked, underpaid annotators are prone to errors, bias, and burnout. Studies show that poorly paid annotators may speed through tasks, introducing inaccuracies that perpetuate biases in AI models. The very systems we trust for medical diagnosis, hiring, and criminal justice are built on corrupted data. Yet the industry has no incentive to improve conditions, because the labour supply is endless. A 2023 report by the International Labour Organization noted that the global data annotation market is expected to reach $13.7 billion by 2030, but workers will see little of that.
What can be done? In the absence of regulation, some companies have adopted "fair work" standards: a handful of vendors, Hive among them, have introduced minimum wages and benefits. But these are exceptions. Most vendors compete on price, driving down wages. The British Wire obtained internal documents from a major vendor showing that it rejects workers who ask for raises, replacing them with new recruits from cheaper regions.
The ultimate arbiters are consumers. A recent poll found that 72% of Britons believe AI companies should audit their labour chains. But few act on this. The tech giants maintain plausible deniability by contracting through vendors. When pressed, a Google spokesperson said, "We require our suppliers to comply with our Code of Conduct." Yet the code lacks enforcement mechanisms, and workers rarely complain for fear of losing income.
As AI becomes ubiquitous, the ghost workers remain invisible. Britain imports their labour via digital platforms, but ignores their plight. The real scandal is not that exploitation exists – it is that we have designed an industry whose profits depend on it.