It’s time we had a conversation about censorship.
Recently, a mass exodus of major advertisers from YouTube threw the platform's ecosystem into disarray. As noted by YouTubers and mainstream media outlets alike, the precipitating event seems to have been a small number of government and corporate ads appearing alongside racist hate videos on a very small number of channels. The issue was brought to the attention of governments and corporations in a high-profile manner, and from there, industry brass decided to pull all advertising off the YouTube platform, citing a desire not to be associated with harmful content.
As various media outlets have reported, it's an odd narrative to follow given that this problem has existed for many, many years; until mid-2016, it rarely made the news. Furthermore, despite the historical efforts by media companies (especially Google) to stamp out racist and other extremist content, the issue remains difficult to address owing to the sheer volume of data being uploaded at any given time.
In YouTube's case, at least 300 hours of video are uploaded each minute (though some put that number as high as 400 hrs/min). Even at the lowest estimate, that's still 18,000 hours of video per hour, 432,000 hours per day, or 12.96 million hours in a 30-day month. These numbers are definitely not in Google's favour, and despite valiant efforts to screen user-generated content, Internet media companies as a rule face a never-ending, uphill battle in managing such enormous volumes.
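To make the scale concrete, the arithmetic works out as follows (a quick sketch in Python, using the conservative 300 hrs/min figure cited above):

```python
# Back-of-the-envelope scale of YouTube uploads, using the
# conservative estimate of 300 hours of video uploaded per minute.
HOURS_PER_MINUTE = 300

per_hour = HOURS_PER_MINUTE * 60   # 18,000 hours of video per hour
per_day = per_hour * 24            # 432,000 hours per day
per_month = per_day * 30           # 12,960,000 hours per 30-day month

print(f"Per hour:  {per_hour:,} hours of video")
print(f"Per day:   {per_day:,} hours of video")
print(f"Per month: {per_month:,} hours of video")
```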
As with the ongoing situation at Facebook (and its implications for that network's 1.2 billion daily users), the logistics make a purely human solution to harmful content impossible. There is no practical way for Google, or any ultra-high-volume media company for that matter, to retain enough human staff to individually review each piece of user-generated content that comes in the door. As a result, industry standard practice is to use software algorithms as gatekeepers and to automate most policy enforcement and content management.
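For illustration only (this is not YouTube's or Facebook's actual pipeline, and the risk scores, thresholds, and routing actions below are hypothetical assumptions), an algorithmic gatekeeper of this kind usually reduces to scoring each upload automatically and routing only a narrow, borderline band of content to human reviewers:

```python
# Hypothetical sketch of an automated content gatekeeper.
# The score source, thresholds, and actions are illustrative
# assumptions, not any platform's real policy pipeline.
from dataclasses import dataclass

@dataclass
class Upload:
    video_id: str
    risk_score: float  # 0.0 (benign) to 1.0 (clearly violating), from some classifier

BLOCK_THRESHOLD = 0.9   # auto-reject: confident policy violation
REVIEW_THRESHOLD = 0.6  # borderline: queue for a human reviewer

def gatekeep(upload: Upload) -> str:
    """Route an upload based on its machine-assigned risk score."""
    if upload.risk_score >= BLOCK_THRESHOLD:
        return "blocked"
    if upload.risk_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "published"

for video in [Upload("a1", 0.95), Upload("b2", 0.70), Upload("c3", 0.10)]:
    print(video.video_id, "->", gatekeep(video))
```

Everything outside that narrow middle band is handled without a human ever seeing it, which is the only way the volumes described above stay manageable at all.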