YouTube removing Elliot Rodger's videos shows how we're delegating "free speech vs unsuitable content" decisions to corporations

On Saturday in California, Elliot Rodger shot and killed six people before killing himself. It was a horrible tragedy, and as with many shootings of this nature, videos later emerged in which he explained why he was going to do what he did.

What makes this unusual is that rather than post his video to a TV station, like the Virginia Tech murderer did, or email it to certain people (as Norwegian mass murderer Anders Behring Breivik did with his ‘manifesto’), he posted it to YouTube.

The BBC has since reported that YouTube has removed the videos, citing unspecified policy violations.

In other words, YouTube (or Google) has decided for us that the content in the videos is so objectionable it shouldn’t be viewed on its platform. Whilst the video will no doubt pop up again on other websites, the one place it won’t appear is on YouTube, the world’s largest video-sharing platform.

Facebook has also played a role in this tragic story. The same BBC piece reports that the company has removed a group glorifying Rodger – like YouTube, citing a violation of its guidelines as the reason for the removal.

And if you’ve been paying attention, you’ll know this is a long way from the first time that big technology firms have had to make decisions on content – and sometimes these decisions have proved controversial, as in the case of the “Women Who Eat On Tubes” group.

What I find odd to think about is how we’ve delegated the role of content arbiter to these mega-corporations. And no one seems to have noticed.

On the surface of it, removing videos posted by a spree killer seems like a no-brainer. Who wants to see that? But should we, as adults, really have someone else decide who can see certain controversial material? Why are we happy to delegate that decision to a corporation? (Besides, isn’t it important that we learn from history – surely we all agree that distressing footage of the horrors of war should not be censored?)

In the case of the controversial internet porn filters, we’ve already seen internet service providers acting like a nanny who knows best – in this case apparently deciding what an adult should be allowed to view, even after he has asked for the filter to be removed.

In all of these cases, the companies involved have been able to hide behind company policy: YouTube has “Community Guidelines”, Facebook has “Community Standards” and so on – but I wonder whether things are more fundamental than that, and whether this tells us something about the evolving relationship between technology and free expression.

Whilst we can pretend that the internet is a completely open platform – where, if you’re not able to publish what you want on one platform, you can simply find another – the reality is rather different. We live our lives online now, but we also live them on Facebook, on Twitter and on YouTube (and a handful of other major service providers). If you’re not able to participate in Facebook, and you’re of my generation, then you risk being left out of the lives of your friends as they forget you exist. No one is going to go specifically to read my blog when everyone else’s status updates are posted direct on to the news feed.

Whilst glorifying a murderer might seem pretty clearly beyond the pale, what about groups supporting either side of the Israeli/Palestinian conflict that might feel compelled to use intemperate language? What if there was a robust discussion of religion, or sexuality, or politics? As we know, it is going to be impossible for everyone to agree where to draw the metaphorical line.

These hegemonic internet powers are now the arbiters of what is allowed to be discussed – which is surely a free speech issue, as they are the ones drawing the line for a large proportion of online expression.

Whilst with traditional broadcast TV there has always been an editorial process deciding whether certain unpleasant images can be shown to the public, the difficulty now is that not only are the sources of content changing (anyone can upload a video), but the platforms on which we consume it have changed too.

YouTube, Facebook et al are not only the BBC of old deciding that something is too horrible for the 6 o’clock news; the companies are equally playing the role of a dystopian British Telecom, interrupting your phone call to tell you that you can’t discuss that particular topic with your friends.

Is it right that corporations hold the power to make decisions like this? And is it surprising that we seem to have unthinkingly delegated this responsibility to them? There are obviously no easy answers at this intersection of free expression and protecting sensibilities – but it is a question that requires greater thought than simply accepting “community guidelines” as the final word on why a piece of content has been removed or censored.

James O’Malley
