As a media behemoth with over 1.9 billion registered monthly users, YouTube is a staple of the internet. No video-sharing site has come close to matching its scale, and its dominance is unlikely to shift anytime soon. A platform this large is obligated to ensure the content it publishes is safe and inclusive for all users, especially as its audience continues to grow.
YouTube's recent efforts to manage content that spreads falsehoods or conspiracies undermining well-established facts have fallen short, when such efforts are made at all. As Bloomberg reported, under CEO Susan Wojcicki, YouTube has prioritized user engagement to drive profits, letting videos with false information proliferate and even jeopardize user safety.
Explicit content must be labeled as such by the uploader, meaning a publisher who fails to do so releases their video to hundreds of millions of people, many of them children. YouTube, for its part, is slow to notice content that spreads false information, if it notices at all.
Perhaps a more human-centered review process would be more effective than one led by machine-learning algorithms that have proven inadequate. If the platform's content problems are to be solved, YouTube must do a better job of demonstrating that it takes them seriously.