TikTok will begin automating content moderation on the app, replacing human review for certain categories to speed up the process.
The short-video app said it would use automated systems to detect posts containing sex, violence, nudity, graphic content or illegal activities, as well as posts that violate its minor safety policy. Once the system detects such videos, they will be taken down instantly, and the creator can then appeal to a human moderator.
The change is also expected to reduce harm to human moderators, who are currently required to watch large volumes of distressing videos. With automation in place, they can focus on clips that are tricky to review, for example because of context.
Read: TikTok Reportedly Testing a Job Recruitment Feature
Last year, a number of Facebook moderators sounded the alarm over the impact of watching distressing content on that platform, with some said to have developed PTSD and trauma as a result.
So far, automated moderation has been tested in a number of countries, including Brazil and Pakistan. Although the process is not perfect, TikTok is counting on the automation to make moderation more transparent.
The company has been dogged by accusations of bias and racism in the past, and the automated process might not be 100 percent accurate. However, the company believes that with human moderators handling appeals, it will be able to address such issues.
Automated moderation is being rolled out in the US first.
Email your news TIPS to Editor@kahawatungu.com or WhatsApp +254707482874. You can also find us on Telegram through www.t.me/kahawatungu
