Google admits it can’t cope with filtering 300 hours of YouTube uploads every minute
By Graham Templeton, Jan. 29, 2015, 12:28 pm

Back in the day, it was possible to think of online video regulations as frivolous things. “Oh no,” you might have thought to yourself, “Somebody saw a boob, or heard a swear. Big whoop.” Indeed, when it came to moderation of content on YouTube, it was the comments that always seemed like the most pressing problem. These days, though, nobody can afford to ignore the power that well-made video has in modern Western culture — especially if you want to change that culture dramatically. From aggressively white supremacist organizations to primitive murder cults at the edges of the Arabian Desert, there is basically nobody left in the world who doesn’t understand the utility of online video — and for the world’s biggest provider of online video, that’s presenting some very real problems.
Google recently appeared before a European parliamentary meeting on counter-terrorism, where the company’s Public Policy Manager Verity Harding said that it’s simply impossible to screen all the video being uploaded to YouTube’s servers. She said that YouTube receives about 300 hours of content per minute — put differently, users upload more than two years of video to YouTube every hour. To pre-screen all this content, Google argues, would be like trying to pre-screen phone calls before connecting them — a deliberately loaded comparison that is actually a bit incoherent, since of course phone calls aren’t submitted in their entirety before they begin, as videos are before they go live. The point remains, though: Google cannot handle the sheer success of its own system, and the conventional wisdom these days is that if Google can’t handle something, nobody can.

Interestingly, the discussion did not center around demonizing Google for facilitating terror, but around how governments and other bodies can help Google deal with this problem. Security agencies, ISPs, service providers, and data consumers themselves will all be needed if we want to effectively filter certain types of content off of even one video website, albeit the largest. If agencies or corporations have any data that could be useful in predicting or preventing terror-related video uploads, then they need to start developing a way of sharing that info with Google, to make its job at least a little bit possible.

Right now, YouTube relies on the same mechanism for content filtering as Gmail: user flagging. The problem is that bad emails by definition come to people and bother them, incentivizing them to take part in the flagging and create a healthy ecosystem. Horrifying videos, however, are often sought out by specifically the people who want to see them, and they’re certainly not going to help you out with flagging. User flagging does work to get this sort of content off of YouTube, but not nearly as fast as many believe necessary.

This is a bigger problem for YouTube than it might at first seem to be, especially to people who remember cyclical historical outrage over saucy online videos. On some level the only reason YouTube’s past failures in content moderation have gone relatively unnoticed is that few people really cared all that much. Today we’re not facing sex, foul language, or copyright violation, but terrorist snuff films and recruiting material for groups actively shooting at Western military forces. It’s one thing for Google to weather the wrath of uptight mothers-of-six, and quite another to have to deal with justified public backlash and plenty of unwanted attention from the US security establishment. When some in the increasingly-dystopian UK think they might be able to criminalize even looking at such videos, even a behemoth entity like Google has to take stock of the risks they’re running.

Of course, there is always the danger that measures implemented to control this sort of content will be abused in the future to limit some other form of content we like more. However, I think the war on child porn in the US shows that cooperation toward a goal can be taken to a necessary extreme and no further. The danger is always there, though, that precedents will be applied far beyond what is necessary, as with the dizzying expansion of what was once a war on child porn in the UK. If it stops outright terrorist propaganda, or violent racism, most people won’t mind the censorship — but where does racism stop and extreme opinion on immigration begin? Where precisely is the border between religious freedom and misogyny? Differing opinion on the answers to these questions is why it’s always tough to attack these problems in a multi-disciplinary way; it takes something as heinous as child porn or global terrorism to align all the many conflicting interests.
In classic internet fashion, a win by Google here would simply lead terror videos to be hosted on smaller and more exotic video services, or crypto-European domains; such videos won’t have the reach of a viral YouTube hit, but they will serve their purpose. This isn’t about scouring the whole internet for terrorist content, but about helping a private corporation enforce some standards for itself. Back in the day, that help would have taken the form of punitive damages to stop Google from doing more business than it can usefully oversee. Today, with online innovation being so widely supported, the strategy is to try and pitch in.