Robert Bowers, the alleged Pittsburgh synagogue killer, had an online life like those of many thousands of anti-Semitic Americans. He had Twitter and Facebook accounts, and was an active user of Gab, a right-wing Twitter knock-off with a hands-off approach to policing speech. Among the anti-Semitic conspiracy theories and slurs, Bowers had recently posted a picture of “a fiery oven like those used in Nazi concentration camps used to cremate Jews, writing the caption ‘Make Ovens 1488F Again,’” a reference to “1488,” a white-supremacist numeric code.
Then, he made one last post, saying, “I’m going in,” and allegedly went to kill 11 people at the Tree of Life Synagogue in Pittsburgh.
Then, and only then, down came his accounts, just like those of Cesar Sayoc, the mail-bombing suspect. This is how it goes now.
In both cases, these men made nasty, violent, prejudiced posts. Yet, as reporter after reporter noted, their online lives were, to the human eye at least, indistinguishable from those of the legions of other trolls who also say despicable things. There is just no telling who will stay in the comments section and who will try to kill people in the real world.
In some corners of the internet, the tired old hypothetical of free speech has been turned on its head: There isn’t one person yelling “fire” in a crowded theater, but a theater full of people yelling “burn it down.” The pose of the alt-right is that they are only kidding about hating black people or Jews. A Bowers can easily hide among all the people just “shitposting” or trying to “trigger the libs.”
All of which complicates the situation for the big internet companies. Over the last 10 years, free speech has undergone a radical change in practice. Now, nearly all significant speech runs through a corporate platform, whether a large hosting provider, WordPress, Facebook, or Twitter. Speech may be free by law, but attention is part of an economy. Every heinous crime linked to an app or website tests the fragile new understanding that tech companies have of their relationship to speech.
Tech-company employees like to say things like, “Do you want Mark Zuckerberg being the arbiter of what speech is allowed on Facebook?,” as if that is not already the case, and as if this is not exactly what Facebook signed up to do when it attempted to “rewire the way people spread and consume information,” as Zuckerberg put it in his letter to shareholders in 2012.
Over the last couple of years, big platforms like Facebook have come to understand that violent rhetoric is a danger to their businesses. Mark Zuckerberg has vowed to “take down threats of physical harm,” specifically in relation to white-supremacist violence.
In some areas, such as terrorist content, the companies take automated, proactive steps to keep these ideas from reaching audiences. But, by and large, they’ve developed rules for judging content based on what users report to them.
On paper, these rules tend to sound pretty good. They are written by smart people who regularly encounter the problems of regulating the speech of billions of people, and who have thought hard about it.
But any time someone has reason to look closely at the posts of individual users who turn violent, all kinds of violent posts turn out to have slipped through the systems that Facebook, Twitter, and others have set up. Report anti-Semitism or rank racism or death threats or rape GIFs sent to women, and, disturbingly often, posts that look like clear violations of a company’s policies will not be deemed that way by content moderators. Mistakes are made, no one knows how many, and it’s easy to blame these companies’ operations.
But the problem goes deeper. The trolls of the internet have developed a politics native to these platforms, which uses their fake democratic principles against them. “These fringe groups saw an opportunity in the gap between the platforms’ strained public dedication to discourse stewardship and their actual existence as profit-driven entities, free to do as they please,” John Herrman wrote in 2017.
Right-wing critics of the platforms find themselves hamstrung because, generally speaking, they don’t want government mandates on private companies. As the Daily Caller put it, “Of course, Twitter is a private company, free to do as it pleases, even if it serves to please some and displease others.” There is no public platform; every place where speech circulates depends on private companies. So accusations of censorship are toothless unless people leave the platform (which they don’t, in significant numbers) or advertisers pull their money (which they won’t do to satisfy neo-Nazis or anti-Semites).
The main pressure point, then, is to get the internet companies to adopt an absolutist free-speech position, which many of these critics claim is more in line with the American conception of the principle. This also lets the trolls cast themselves as the side of democracy against the autocratic platforms: they become the heroes fighting oppression.
For many years, this politics worked. It was not long ago that free-speech absolutism was the order of the day in Silicon Valley. In 2012, Twitter’s general manager in the UK described the company as “the free speech wing of the free speech party.”
But that was back before anti-Semitic attacks were spiking, before the Charlottesville killing, before the kind of open racism that had lost purchase in American culture made its ugly resurgence.
Each new incident ratchets up the pressure on technology companies to rid themselves of their trolls. But the culture they’ve created will not prove easy to stamp out.