Scale AI is facing its third lawsuit over alleged labor practices in just over a month, this time from workers who claim they suffered psychological trauma from viewing disturbing content without adequate safeguards.
Scale, valued at $13.8 billion last year, relies on workers classified as contractors for tasks such as evaluating the responses of AI models.
Earlier this month, former workers filed suit alleging that they were effectively paid less than minimum wage and were misclassified as contractors. A complaint alleging similar issues was also filed in December 2024.
The latest complaint, a class action filed in the Northern District of California on January 17, focuses on the psychological harm allegedly suffered by six people who worked on Scale's Outlier platform.
The plaintiffs allege that they were forced to write disturbing texts about violence and abuse, including child abuse, without receiving adequate psychological support, and that they suffered retaliation when they sought mental health counseling. They claim they were misled about the nature of the work when they were hired, and that the work has left them with mental health problems, including PTSD. They are seeking the creation of a medical monitoring program and new safety standards, along with unspecified damages and attorneys' fees.
One of the plaintiffs, Steve McKinney, is the lead plaintiff in a separate December 2024 complaint against Scale. The same law firm, Clarkson Law Firm of Malibu, California, is representing the plaintiffs in both complaints.
Clarkson Law Firm previously filed a class action lawsuit against OpenAI and Microsoft for allegedly using stolen data, but the lawsuit was dismissed by a district judge who criticized its length and content. Commenting on the latest complaint, Scale AI spokesperson Joe Osborne criticized Clarkson Law Firm and said Scale intends to “vigorously defend itself.”
“Clarkson Law Firm has previously and unsuccessfully pursued innovative technology companies with legal claims that were summarily dismissed in court. A federal court judge found that one of their earlier complaints was ‘unnecessarily long’ and contained ‘largely irrelevant, distracting, or redundant information,’” Osborne told TechCrunch.
Osborne said Scale complies with all laws and regulations and has “numerous safeguards in place to protect contributors, including the ability to opt out at any time, advance notice of sensitive content, and access to health and wellness programs.” He added that Scale does not take on projects that may include child sexual abuse material.
In response, Glenn Danas, a partner at Clarkson Law Firm, told TechCrunch that Scale AI “forces workers to view gruesome and violent content to train its AI models” and has failed to ensure a safe workplace.
“We must hold big tech companies like Scale AI accountable, or workers will continue to be exploited to train this unregulated technology for profit,” Danas said.