One especially sharp aphorism about the internet’s attention economy likens webpages to car crashes. Explaining to The New York Times in 2017 how likes and shares drive users toward extremes, Evan Williams, the co-creator of Twitter, Medium, and Blogger, remarked that news-feed algorithms, trained to serve us the most attention-grabbing content, will do so ruthlessly. Using the web, Williams said, is like driving down the road, seeing a car crash, then becoming momentarily fixated on the accident. Registering this, the algorithms that run the internet then serve you more car crashes, knowing only that the carnage supports their primary goal: getting and maintaining attention.
Look at the state of the web’s biggest platforms and it’s easy enough to see the wreckage. YouTube’s recommendation algorithms endanger children on the site. Googling the forest fires that are burning the Amazon faster than ever returns results for Amazon’s Fire tablet. Apps that go viral thanks to Facebook and Instagram’s recommendation algorithms carry surprisingly dark consequences for our data.
So now some of the very platforms that managed to scale massively by relying on the cheap labor of algorithms to sort and recommend content are slowly introducing an antidote to the “car crash” status quo: humans. Spotify, Google Play, YouTube Kids, LinkedIn, HBO, and Apple News have all begun to highlight human curation as a marketing tactic.
Earlier this week, Facebook announced it would be reintroducing human editors to curate News Tab, a dedicated news section. The company is currently pursuing partnerships that would allow it to embed full-text articles from publications like The New York Times, The Wall Street Journal, and The Washington Post on Facebook itself. According to a Wall Street Journal report, those talks are still ongoing.
Facebook has been unusually clear about precisely what its humans will do and what its algorithms will do. Top stories and breaking news will be chosen by human editors, while the majority of the content will be served algorithmically, based on the data Facebook already has on you.
News Tab is Facebook’s chance to reboot its approach to news. In many ways, it can be seen as an act of atonement for the company’s attention-optimization strategy, as well as for its now-defunct Trending Topics product. Reports of the News Tab feature came just as Facebook released the results of its anti-conservative-bias audit, which polled 133 conservative lawmakers and groups about their concerns over how the platform treats right-wing content (and did not provide any evidence that such content is disadvantaged on Facebook). In 2016, a Gizmodo report alleged that human editors on the Trending Topics team routinely suppressed conservative news sources. Facebook responded at the time by saying it takes “allegations of bias” seriously and that its editorial guidelines did not “permit the suppression of political perspectives [or] the prioritization of one viewpoint over another.”
The bias claims put Facebook in a difficult position. Over the years, CEO Mark Zuckerberg has touted algorithms as an effective means of deterring hate speech and fake news. In tense meetings with Republican lawmakers, Zuckerberg has pivoted away from the Trending Topics fiasco and deemphasized the role of humans in Facebook’s algorithmic moderation systems. The underlying message is that human involvement risks bias and subjectivity, so critical projects like filtering out hate speech should involve humans as little as possible.
“We need to rely on and build sophisticated AI tools that can help us flag certain content,” Zuckerberg said at a congressional hearing last year. Faced with a dozen examples from committee members in which conservative content was taken down erroneously, Zuckerberg promised that the company would rely more on AI to provide answers. Noting that ISIS-related content is flagged by AI before human moderators even see it, he said, “I think we need to do more of that.”
For years, Facebook has selectively emphasized and deemphasized humans’ roles in its algorithmic systems to its own advantage. News Tab emphasizes humans’ role as a means of avoiding serving popular but unverified content. By contrast, in discussing content moderation, Zuckerberg has emphasized the role of AI, perhaps to mitigate criticism about the effect this work can have on the people who do it. When ProPublica found that Facebook allowed advertisers to buy ads targeting “Jew haters,” the social network pointed the finger at algorithms it said weren’t programmed to filter hate speech. When Trending Topics surfaced viral videos of the ice-bucket challenge, but not of Black Lives Matter protests, Facebook recruited humans to determine news “relevance.”
But the dichotomy between humans and algorithms has always been a false one. Humans train algorithmic systems, on human data. News Tab is an example of this process: Facebook told the Times it hopes algorithms will eventually pick up on how humans engage with the news. Facebook’s return to human curation still has an algorithmic bent, just not one it’s emphasizing to the press.
Trending Topics is another example. Yes, human editors chose which stories filled the sidebar, but those stories were selected from a pool of widely shared links at a time when the News Feed was specifically designed to prioritize viral content. The algorithms encouraged sharing popular news stories, and people shared what was served to them, with the top stories garnering the “trending” designation. After Facebook fired the humans, the algorithms took over, until the product surfaced fake news stories and human moderators, fittingly, were brought in to clean up after it. The humans and the algorithms are highlighted or swept aside as each scandal necessitates.
Emphasizing algorithms over human actors can be very useful cover. Critiquing algorithmic practices in her book If ... Then, the technology researcher Taina Bucher builds on the social theorist Linsey McGoey’s notion of “strategic ignorance,” the idea that it can be useful for those in power not to know something. Rendering systems as wholly human or wholly algorithmic, Bucher writes, is a “knowledge alibi,” a deflection in which technology is conveniently described as autonomous and complex, and human actors as inert and oblivious to consequences. To shirk full culpability, companies shift critics’ focus away from how much agency any given algorithm has and toward how little human actors have, or how little we’re led to believe they have. Conservative anxiety around bias won’t be solved by deepening this human-vs.-algorithm fallacy. When people fear that algorithms are secretly biased, what they’re actually wary of is human bias at an industrial scale.