Splicetoday

Digital
Jun 16, 2017, 10:22AM

YouTube's Failed Experiment

Swatting, bad algorithms, and more. 


One of my favorite YouTubers is a genderqueer hedonist who calls himself ContraPoints, and his videos on heavy topics strike a magical balance between entertainment and information. The bodacious costumes and outright silliness give way to thoroughly researched and thoughtful commentary. A few weeks ago, I was surprised to discover that YouTube had removed his video entitled “Does the Left Hate Free Speech? (Part 1).” There was nothing wrong with it. Nothing obscene, no depictions or encouragement of violence. The video, later uploaded to Vidme, is an analysis of the ideals, and occasional hypocrisies, of Dave Rubin and Christopher Hitchens, among others.

What’s especially surprising is that the video was flagged as “violating YouTube’s policy on spam, deceptive practices, and scams.” As ContraPoints explains in a follow-up video: “Nothing with a heartbeat seems to be involved at any stage of the flagging and removal process, it’s just computers all the way through. So anyone could get any video removed for any reason if they get enough people to go along with a flagging campaign.” There’s no way of knowing who flags a video, and YouTube gives no official number for how many flags it takes before a video is subject to review and removal. ContraPoints and the other YouTubers I spoke with all describe the same problem, a lack of human involvement on YouTube’s end, and that absence has let unscrupulous users abuse the system. Even ContraPoints’ attempt at an appeal was just as impersonal: the responses were company form letters.
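
To see how exploitable that is, here’s a minimal sketch in Python of the kind of purely automated threshold rule ContraPoints describes. YouTube discloses neither its flagging threshold nor its process, so every name and number below is an assumption for illustration only:

    # Hypothetical sketch (not YouTube's actual code) of a purely automated,
    # threshold-based takedown rule. FLAG_THRESHOLD and all names here are
    # invented; the real threshold, if one exists, is undisclosed.

    FLAG_THRESHOLD = 100  # assumed value

    def should_remove(video_id, flag_counts):
        """Remove a video once its flag count crosses a fixed threshold.

        With no human in the loop, any coordinated campaign of
        FLAG_THRESHOLD accounts can take down any video, regardless
        of what it actually contains.
        """
        return flag_counts.get(video_id, 0) >= FLAG_THRESHOLD

    # A flagging mob exploits the rule:
    flags = {"does-the-left-hate-free-speech-pt1": 100}
    if should_remove("does-the-left-hate-free-speech-pt1", flags):
        print("Removed: policy on spam, deceptive practices, and scams")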

Another controversy is how YouTube monetizes videos, one that stems from advertisers’ concern about their ads showing up alongside hateful or extreme content. As advertisers exerted their power, YouTube’s algorithms became stricter about which videos could carry ads. As The New York Times explained, “To rein in its sprawling video empire—400 hours of video are uploaded every minute—YouTube uses machine learning systems that can’t always discern context, or distinguish commentary or humor from hate speech.” The result is an atmosphere where an academic conversation about hate speech gets treated like actual hate speech.
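
It’s easy to demonstrate why context-blindness fails. The toy scorer below is not YouTube’s model; it’s a deliberately naive bag-of-words sketch, with placeholder tokens standing in for a real lexicon, and it cannot tell a slur used in earnest from a slur being quoted in a lecture:

    # Toy bag-of-words scorer, not YouTube's model, showing why context-blind
    # matching treats commentary about hate speech like hate speech itself.
    # The blocklist uses placeholder tokens; a real one would list actual terms.

    BLOCKLIST = {"<slur-1>", "<slur-2>"}

    def naive_hate_score(transcript):
        """Return the fraction of tokens found in the blocklist.

        The scorer sees only which words occur, not whether they are
        used, quoted, or analyzed, so a lecture about slurs scores
        like slurs.
        """
        tokens = transcript.lower().split()
        if not tokens:
            return 0.0
        hits = sum(1 for token in tokens if token in BLOCKLIST)
        return hits / len(tokens)

    # Both the earnest use and the academic mention score above zero,
    # so both get flagged or demonetized:
    print(naive_hate_score("<slur-1> get out"))            # abusive use
    print(naive_hate_score("<slur-1> is a slur because"))  # academic mention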

One of the things I’ve learned is that there are bullying channels devoted to finding ways to hassle other YouTubers. Someone with a substantial following (sometimes hundreds of thousands of subscribers) will make a video rebuttal to a user with a minuscule one (barely a few hundred), unleashing a mob to leave malicious comments and downvote the target’s video. One tell-tale sign is a video with more downvotes or comments than the channel has subscribers, particularly when the view count isn’t much higher than the downvote count. When these mobs are persistent and threatening enough, the result is incidents like the removal of ContraPoints’ video. This level of bullying has driven people to terminate their channels and quit YouTube.
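
Those tell-tale signs can be written down as a rough heuristic. The function below is my own codification rather than anything YouTube runs, and the two-to-one margin between views and downvotes is an assumed cutoff, since there’s no exact ratio to cite:

    # A rough codification of the tell-tale signs described above; the 2x
    # views-to-downvotes margin is an assumption, not an established figure.

    def looks_brigaded(subscribers, views, downvotes, comments):
        # Negative engagement outstripping the channel's own audience
        # suggests the traffic came from somewhere else.
        engagement_exceeds_audience = (downvotes > subscribers
                                       or comments > subscribers)
        # Views barely above downvotes means most visitors came to downvote.
        views_near_downvotes = views < 2 * downvotes  # assumed margin
        return engagement_exceeds_audience and views_near_downvotes

    # A channel with 300 subscribers whose video draws 1,200 downvotes on
    # 1,500 views is almost certainly a mob target:
    print(looks_brigaded(subscribers=300, views=1500,
                         downvotes=1200, comments=800))  # True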

Most distressing is that children who’ve made videos are targeted for abuse by adults, or the children of video makers are subject to violent threats. I also spoke with more than one user who’d either been swatted or faced a threat of swatting. Unfortunately, YouTube’s global audience means that perpetrators may be beyond the reach of local law enforcement.

Longtime YouTuber Philip Rose (who has posted under the monikers True Pookah and Liberal Ogre) described YouTube’s terms of service to me as elastic in the company’s favor: “That's the beauty of the TOS. It's a legal document that leaves YouTube with great latitude as to how they interpret it. If you really read it, you also see that they are under no obligation to respond in any one way to possible breaking of the TOS. Response (if and how) is all up to them. Even when they added in the language about cyberbullying, it wasn't actually anything new. That concept was already embedded in previous versions of the TOS. The only actual changes that have ever really taken place, is how YouTube has decided to act based on what they felt was important at the moment, and that has always been consistent with their one goal; to retrain the YouTube audience.”

One of the more difficult aspects of this is the reality of small creators intimidated by a corporate behemoth, especially considering that YouTube is a subsidiary of Google. While some argue this is just a new version of the eternal rift between artist and benefactor, the consolidation of media into just a few outlets is bad for the marketplace of ideas. ContraPoints argues for what he calls the broad view of free speech: it’s not enough to enshrine free speech in law; we must also protect the speech he describes as marginal and non-conformist. Those are exactly the types of speech that those in the benefactor role have rarely been interested in protecting.

Historically, it’s usually publishers outside the mainstream, like Barney Rosset, Lawrence Ferlinghetti, or Larry Flynt, who’ve expanded the parameters of free speech, but those fights were in the courts. The court of advertiser opinion is a different beast, and an inherently less permissive one. Because ads are designed to sell a product through a prism of fantasy, an algorithm that treats academic speech the same as hate speech is a win for advertisers: it means viewers never associate the product with anything that breaks the fantasy. Google is a big enough company that it should be able to hire actual humans for its review process. Its choice not to do so tells us what its priorities really are.

Discussion
  • Involving humans would be cost-ineffective and add a layer of intrinsic bias. Algorithms are impersonal. If it's any consolation, the Alt Right is up against the same issues. I think that's a good sign.

  • If the algorithm inhibits free speech and wide debate, it either has to be improved or a human element added. By not being able to detect the difference between an academic discussion about hate speech and actual hate speech, the algorithm has an inherent bias worse than any human bias. Why? The algorithm, in that case, is both blind and ignorant yet has the power to censor. Google can afford to pay a few dozen students to do this.

  • YouTube receives an average of 300 hours of content per minute.

  • Let me know in 12 months how YouTube's failure works out.

  • You trust human beings too much.
