A Forbes report raises questions about how TikTok’s moderation team handles child sexual abuse material — allegedly granting broad, insecure access to illegal photos and videos.
Employees of a third-party moderation outfit called Teleperformance, which works with TikTok, among others, claim it has asked them to review a disturbing spreadsheet called DRR or Daily Required Reading on TikTok moderation standards. The spreadsheet allegedly contained content that violated TikTok’s guidelines, including “hundreds of images” of children being nude or being abused. The employees say hundreds of people at TikTok and Teleperformance have access to content from both inside and outside the office — opening the door to a wider leak.
Teleperformance denied to Forbes that it showed employees sexually exploitative content, and TikTok said its training materials “have strict access controls and do not include visual examples of CSAM,” although it did not confirm that all third-party vendors met that standard.
The employees tell a different story, and as Forbes explains, it’s a legally fraught one. Content moderators routinely have to deal with CSAM posted on many social media platforms. But depictions of child abuse are illegal in the US and must be handled carefully. Companies are expected to report the content to the National Center for Missing and Exploited Children (NCMEC), then preserve it for 90 days while minimizing the number of people who see it.
The allegations here go far beyond that limit. They indicate that Teleperformance showed employees graphic photos and videos as examples of what to tag on TikTok, while playing fast and loose with access to that content. One employee says she contacted the FBI to ask whether the practice constituted criminal distribution of CSAM, though it’s unclear whether an investigation has been opened.
The full Forbes report is well worth reading. It outlines a situation where moderators were unable to keep up with TikTok’s explosive growth and were told to watch depictions of crimes against children for reasons they felt didn’t add up. Even by the complicated standards of debates over children’s online safety, it’s a strange — and if accurate, horrifying — situation.