Content Moderation and Labour Rights
Every day, Njoroge logs into his work computer and prepares himself for another eight-hour shift. His job is to sift through hundreds of posts on a popular social media platform, including videos of graphic violence, hate speech, sexual abuse, and suicide attempts, and to decide, within seconds, whether each post should be removed or allowed to remain online. It is emotionally draining work, but someone has to do it.
Behind the seamless experience users enjoy online lies a hidden workforce of people like Njoroge: thousands of human moderators, often in the Global South, tasked with scrubbing the internet clean. While algorithms help filter blatant violations, it is people, not machines, who bear the emotional and psychological weight of keeping digital platforms safe. Yet these workers remain largely invisible, underpaid, and unprotected, raising urgent questions about labour rights in the digital age.
Against this background, this article examines the legal landscape of labour protections for content moderators, highlights the challenges these workers encounter, and identifies best practices to address those challenges effectively.
CONTENT MODERATION
Content moderation is the process of reviewing user-generated content on online platforms. Such content may include text, images, videos, and live streams. Many major digital platforms rely heavily on content moderation to maintain safe, positive, and legally compliant user environments, including social media applications (e.g. X, formerly known as Twitter; Facebook; TikTok; and YouTube), gaming platforms (e.g. Discord and Twitch), public forums (e.g. Reddit), and e-commerce sites (e.g. Amazon and eBay).
Moderation may be classified according to timing:
- Pre-moderation: This is where content is reviewed and approved by a moderator before it is made public. While this provides higher control over user-generated content, it can slow down content delivery.
- Post-moderation: This is where content is published first and then reviewed by moderators after it has gone live (e.g. TikTok and Instagram). If the content is found to violate community guidelines or platform policies, it is removed or otherwise managed post-publication. This allows for faster content sharing and real-time engagement, but risks harmful content appearing before removal.
- Reactive moderation: This is where moderators respond to content that has been published and reported by users (e.g. Facebook). This is a cost-efficient method, but it can delay the removal of harmful content.
- Distributed moderation: Users are given the power to review, vote on, or report content to determine whether it should remain on the platform or be removed (e.g. Quora and Reddit). Although this approach empowers users, it can lead to inconsistent moderation decisions due to subjectivity.
There are three methods of moderation:
- Human content moderation (or manual moderation): This involves real people who monitor, review and evaluate user-generated content submitted to a platform.
- Automated content moderation: This involves the use of algorithmic tools and pre-programmed rules to automatically review user content based on keywords, patterns, and content attributes (a simplified sketch of this approach follows this list).
- AI-powered content moderation: This is a more sophisticated form of automated moderation that uses artificial intelligence (AI) to review user-generated content on digital platforms without direct human intervention.
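To make the rule-based approach concrete, below is a minimal, hypothetical sketch in Python. The keywords, patterns, and decision labels are invented for illustration only and do not reflect any platform's actual rules; real systems rely on far larger, continuously updated policy rule sets.

```python
import re

# Hypothetical rule set, for illustration only; no real platform's
# policies are reflected here.
BLOCKED_KEYWORDS = {"spam-link.example", "buy followers"}
ESCALATION_PATTERNS = [re.compile(r"(?i)\bfree\s+crypto\b")]

def auto_moderate(post_text: str) -> str:
    """Return 'remove', 'flag_for_human', or 'allow' for a post."""
    lowered = post_text.lower()
    # Posts containing a blocked keyword are removed outright.
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return "remove"
    # Pattern matches are escalated to a human moderator rather than
    # removed automatically, because context matters.
    if any(pattern.search(post_text) for pattern in ESCALATION_PATTERNS):
        return "flag_for_human"
    return "allow"

print(auto_moderate("Visit spam-link.example today"))  # remove
print(auto_moderate("Get FREE crypto now!"))           # flag_for_human
print(auto_moderate("Happy birthday!"))                # allow
```

Even this toy example shows why human moderators remain indispensable: automated rules only catch what can be expressed as keywords or patterns, and anything ambiguous must be escalated to the very workers this article is concerned with.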
Typically, content moderators are tasked with:
- Reviewing user-generated content: Moderators screen content posted by users to check for compliance with the platform’s community guidelines, terms of service, and applicable laws and regulations. Any item that does not meet a platform’s pre-set guidelines is marked and taken down.
- Enforcing platform policies: Moderators identify and remove content that violates rules, such as hate speech, harassment, explicit material, and misinformation.
- Handling user reports and appeals: Moderators respond to complaints raised by users about inappropriate content, providing explanations or reversing decisions where justified.
LABOUR RIGHTS
Labour rights are fundamental human rights that ensure fair and safe working conditions. Their importance lies in creating a dignified, just, and productive work environment. According to the International Labour Organization (ILO), the fundamental principles and rights at work are:
- Freedom of association and the effective recognition of the right to collective bargaining
- The elimination of all forms of forced or compulsory labour
- The effective abolition of child labour
- The elimination of discrimination in respect of employment and occupation
- A safe and healthy working environment
In Kenya, the key frameworks that govern labour rights are:
- Constitution of Kenya, 2010: Article 41 of the Constitution guarantees the right to fair labour practices, including the right to fair remuneration, the right to reasonable working conditions, the right to form, join and participate in trade unions, and the right to go on strike.
- Employment Act, 2007: The Employment Act outlines the minimum conditions of employment, such as fair remuneration, working hours, annual leave, sick leave, maternity and paternity leave, housing, water, food, and medical attention.
- ILO Conventions: Kenya has ratified various ILO Conventions, such as the Forced Labour Convention, 1930 (No. 29), the Right to Organise and Collective Bargaining Convention, 1949 (No. 98), the Equal Remuneration Convention, 1951 (No. 100), the Abolition of Forced Labour Convention, 1957 (No. 105), and the Worst Forms of Child Labour Convention, 1999 (No. 182).
- United Nations Guiding Principles on Business and Human Rights: This global framework sets out best practices for states and businesses to ensure that human rights are respected throughout business activities. The UN Human Rights Council unanimously endorsed the Guiding Principles in June 2011, and Kenya has since taken steps to implement them through its National Action Plan of November 2022.
LABOUR RIGHTS VIOLATIONS FACED BY CONTENT MODERATORS
In today’s digital world, many companies are turning to outsourcing and online platforms to hire workers, especially for content moderation. Most content moderators are not direct employees of the platforms but are hired through third-party outsourcing firms based in countries such as the Philippines, India, or Kenya. While this allows platform companies such as Meta, TikTok, and YouTube to cut costs and limit their liability, the arrangement has significant consequences for the labour rights of content moderators.
By classifying content moderators as independent contractors rather than employees, platform companies may evade core employment obligations (e.g. providing a minimum wage, paid leave, or health insurance), particularly because the work is performed in countries with different and, often, weaker legal systems.
Consequently, content moderators are exposed to precarious working conditions: they typically receive lower wages, few or no benefits, minimal job security, and reduced access to mental health support, despite the psychologically taxing nature of their work. Classifying moderators as non-employees also makes it more difficult for them to unionise and advocate for better working conditions, leaving many without the protections typically available to employees.
In 2025, Equidem documented the following labour rights violations experienced by content moderators:
- Unreasonable targets leading to high stress and extended working hours
- Threats of termination for failure to meet targets
- Extended hours and overwork
- Forced unpaid overtime
- Team performance targets leading to workplace bullying
- Physical impacts of exposure to violent content as a feature of work processes
- Denial of leave and sick leave
- Ongoing exposure to violent and sexually explicit content as a feature of work processes
- Stress and anxiety from decision fatigue as a feature of work processes
- Verbal abuse and workplace bullying
- Exposure to pornographic and violent sexual content resulting in sexual trauma as a feature of work processes
- Sexual harassment
- Failure to provide occupational health services, including psychological support
- Unpaid overtime
- Task-based low wages
- Retaliation for organising a union
These violations have led to various lawsuits both nationally and internationally, including a landmark case in which former Facebook content moderators in Kenya sued Meta after being diagnosed with severe post-traumatic stress disorder caused by exposure to horrific content, such as extreme sexual deviancy, bestiality, child abuse, torture, dismemberment, and murder, while moderating the platform.
CORPORATE RESPONSIBILITY AND BEST PRACTICES
Platform companies have a fundamental responsibility to uphold and protect the rights of content moderators. To fulfil this duty, they should implement measures that create a safer and more supportive work environment for moderators, including:
- Providing comprehensive mental health support to help moderators cope with the psychological effects of viewing harmful or disturbing content.
- Ensuring fair labour practices by offering fair compensation, clear job descriptions that outline the nature of the work, and the freedom to join trade unions.
- Setting clear guidelines on the types of content moderators are expected to review.
- Establishing open and secure communication channels that allow moderators to raise concerns and report issues without fear of reprisal.
Law firms can play a vital role in supporting corporations to uphold and protect the rights of content moderators. Here at Kioi & Co. Advocates, our expertise spans multiple areas, including labour law, contract law, and corporate governance. We can assist corporations in the following ways:
- Drafting or reviewing employment contracts, contractor agreements, and internal corporate policies to ensure compliance with labour laws.
- Providing legal advice on various labour laws that affect the rights of content moderators as well as regulatory requirements related to workplace safety, mental health support, wages, and benefits.
- Managing litigation and dispute resolution involving content moderators.
Please feel free to contact us at info@kioi.co.ke or book a consultation with any of our Associates on this or any other related legal matter.
