Just This One Time, Tech’s Not the Main Problem

In Indian state after Indian state, this spring and summer, the stories of communal violence bore an eerie similarity. There’d be a rumor, sent from phone to phone (perhaps accompanied by a video), about some strangers stealing children, or harvesting organs, or slaughtering cows. Then, someone unlucky enough to hand chocolate to some children or be passing through a village would draw the attention of a crowd who’d heard the hearsay. The mobs attacked. Sometimes the outsiders lived. Often, they did not. Sometimes video of the attack would surface, bloody victims pleading for their lives, and that would drive a round of journalistic coverage. And in the dozens of cases that drew media attention, there was a common thread: WhatsApp. “When A Text Can Trigger Lynching: WhatsApp Struggles With Fake Messages” read one headline in India. In the U.S., the title of a Washington Post story was, “Forget Facebook and Twitter, fake news is even worse on WhatsApp — and it can be deadly.” The BBC intoned: “How WhatsApp helped turn an Indian village into a lynch mob.” The Indian government issued a statement castigating WhatsApp and its parent company, Facebook.

This year’s attacks have been presented as an epidemic of violence, aided and abetted, even caused, by WhatsApp. The narrative slotted neatly into the broader discussion of Big Tech’s failures, the corrosiveness of social media, and the crises of misinformation across the world. After all, WhatsApp usage has exploded in India over the last few years, across city and country, rich and poor. Two hundred million Indians now use WhatsApp. Communal violence has been on the rise, going from 751 incidents and 97 deaths in 2015 to 822 incidents and 111 deaths in 2017. Surely, one had something to do with the other, given all the reports of violence, not to mention troubles with vaccination misinformation and all manner of hoaxes.

But that’s where the grand narrative starts to break down. Extend the time horizon back further, and the number of incidents was larger in 2013, 2009, and 2008, when communal violence peaked in India in the last decade. There’s no evidence that higher levels of communication technology penetration have led to higher levels of communal violence.

The nominal reasons for the recent violence do seem linked to the specific rumors that spread via WhatsApp, especially around child kidnapping and cow killing. But every serious report finds other deep-rooted factors, too. In one case, local authorities may have incited the crime because some people owed them money. In others, the violence drew on religious tensions. In another, the atavistic myth of the xopadhora, or child-lifters, touched off the attack. Across all of them, religious, class, gender, and local/outsider divisions come into play, as does a lack of faith in governmental authorities to fairly protect communities, which leads to vigilante justice. And in some cases, it is government officials themselves who are using WhatsApp groups to mobilize the mobs: Prime Minister Narendra Modi’s Hindu nationalist Bharatiya Janata Party has been accused of stoking religious resentment and fostering Islamophobia.

WhatsApp may be a common factor in the reports of violence, but perhaps not in the way that people have intimated. As more people get smartphones and the ability to record video, pre-existing nasty behaviors now generate media that circulates. As with police brutality in the U.S., there may be more reports of violence in the media not because there is more violence, but because there is more video of that violence.


“Hundreds of millions of people are exposed to inflammatory stories online. The vast majority of them are not stoning their neighbors to death,” said Judith Donath, the founder of the Sociable Media Group at the MIT Media Lab. “I think these stories are more about what is going on in that area that is making this sort of mob attack happen—yes, WhatsApp is facilitating the spread of information, but the mob is banding together locally.”

All of these alternative explanations for the violence are subtle. They don’t easily generalize to international audiences. What does generalize is WhatsApp, the big Western technology company, messing something up. In the climate of skepticism about technology companies, misinformation in Michigan might seem equivalent to misinformation in Maharashtra. But to focus on the technology is to obscure the civic and social work that needs to happen to protect people.

“It is very interesting to note that the developing countries that are the focus right now, due to WhatsApp being the vector of ‘deadly misinformation,’ share similar social and economic characteristics,” said Yaso Córdova, a researcher affiliated with Harvard’s Kennedy School, who has been studying the use of WhatsApp in Brazil. “[People in] Italy and Spain are heavy WhatsApp users too, but there is no conversation about whether WhatsApp incentivizes death in Italy or Spain. So, I am inclined to look at other aspects of the entire ecosystem, like social and economic aspects.”

In the U.S., where WhatsApp has relatively few users (20-some million), it would be easy to believe that the dynamics of WhatsApp are basically the same as those on Facebook, Twitter, Instagram, or YouTube. If you don’t use WhatsApp, the differences between it and these other services might seem insignificant.

But for two services owned by the same social networking company, Facebook and WhatsApp could not be more different. Facebook is various levels of public; WhatsApp is private by default. Facebook is about “a global community”; WhatsApp is about those closest to you (90 percent of WhatsApp messages are between two users). Facebook runs one of the two most sophisticated advertising engines in the world; WhatsApp has no real business model (yet). Facebook mediates every single post anyone makes with an algorithm that weighs thousands of factors; WhatsApp is, more or less, a cheap drop-in replacement for text messaging. Facebook took off first in the U.S.; WhatsApp’s market share is highest in Saudi Arabia, Malaysia, Germany, Brazil, and Mexico. Facebook can see most messages people post; WhatsApp is encrypted end-to-end, making messages opaque even to the company itself. Facebook relies on quantified social feedback to drive usage of its product; WhatsApp just doesn’t have those mechanisms.


Facebook, in recent years, has seemed like a potent, possibly perfect, tool for spreading viral information, true or not. Critics, myself included, have gone after Facebook for pretty much every feature that makes it different from WhatsApp. The problem with Facebook was the business model, or the user interface, or the distortions it induces in social systems, or the conception of community, or its erosion of privacy norms, or its rich-white-kid-U.S.-ness, or its casino-like, dopamine-releasing operation. Critics imagined dozens of ways it could work better to slow or even quash the spread of bad information of all kinds. In a way, Facebook could even seem like a grand mistake, a cancer that disrupted the normal and happy growth of the internet.

WhatsApp, on the other hand, is basically an encrypted, rock-solid SMS. People use it because it works everywhere and for everyone. It has no tricks up its sleeve. If you were to declare that both WhatsApp and Facebook were a problem, you would come uncomfortably close to admitting that mobile communications pose fundamental challenges to societies across the world.

Which ... there is a decent case for.

In 2007, there was a hotly contested election in Kenya. When the incumbent President Mwai Kibaki, a Kikuyu, was declared the winner, supporters of Raila Odinga, the Luo leader of the Orange Democratic Movement, erupted into violence, most notably in the Odinga stronghold of Kibera. Rumors flew around on mobile phone networks. Mobs formed along ethnic lines. “There, SMS use—group messaging and push—was very similar in form to what we’re seeing today with WhatsApp, not encrypted but for all intents and purposes private, because not monitored or contained by censorship or shutdown,” Ivan Sigal, the executive director of Global Voices, a media organization that supports, translates, and publishes reporting from across the world, told me.

Some texts explicitly encouraged violence. The phenomenon even got a name, “Black Propaganda SMS.”

Replace SMS with WhatsApp in analyses of the crisis in Kenya, and reporting from the time could read as if it were from 2018 (compare this piece on Sierra Leone’s election). “Mass SMS tools are remarkably useful for organizing this type of explicit, systematic, and publicly organized campaign of mob violence,” wrote the researchers Joshua Goldstein and Juliana Rotich, under the auspices of the Internet and Democracy Project at Harvard’s Berkman Klein Center.

[The education of Mark Zuckerberg]

“SMS texts were used to circulate destabilizing rumors and hate messages, leading some observers to claim that the use of SMS to incite violence transformed the mobile phone from a communications tool to a ‘weapon of war,’” wrote the anthropologist Michelle Osborn, who has studied Kenya extensively.

But she approached the problem not from the technological end, but as an anthropologist studying rumor. SMS was only one network among many that pushed and pulled information from the local storehouse of possibly useful, possibly true information. The majority of harmful rumors, she found, “were amassed through conversation, interviews, and occasional moments of eavesdropping although these were also disseminated through other communication technologies.”

Slowing violence-inducing rumors may be more difficult than simply asking WhatsApp to change. The company has made tweaks to its service in India—for example, users used to be able to forward messages to 250 people; that number will be reduced to just 5. WhatsApp has also removed the little arrow on images that allowed them to be easily forwarded, subtly discouraging the practice.

What else can or should WhatsApp do? Several scholars called attention to the other uses of the platform by political organizers and just average people. Making it more difficult for bad information to spread also makes it more difficult for good information to spread. And the big thing the Indian government has requested—that WhatsApp strip end-to-end encryption, allowing it to see who is spreading misinformation—would mean turning user data over to a right-wing nationalist government in India and, more broadly, to governments with varying track records on human rights.

“I’ve seen so many positive uses of private messaging apps that it’s difficult for me to maintain an apocalyptic view,” said An Xiao Mina, a technologist, author of the forthcoming book Memes to Movements, and a contributor to The Atlantic. “We have to look at the entire context in which technologies are being used, and what the presence of tech does or does not exacerbate.”

It could be that centering the technology in these cases obscures more important solutions in the civic, governmental, or social realms. WhatsApp might be successful, and therefore heavily used now, but its basic utility is duplicated by several other services. “If it weren’t WhatsApp, it would be Telegram. If it weren’t Telegram, it’d be iMessage. If it weren’t iMessage, it would be Signal,” Riana Pfefferkorn, a cryptography fellow at the Stanford Center for Internet and Society, told a Brazilian publication. “Fake news is spreading, and it would spread whether it was any other communications app that people want to use.”

If the problems of technologically mediated communications extend beyond WhatsApp, then the solutions will have to do so as well. Sigal, of Global Voices, had a short list of ideas: “watchdog projects” that figure out what the precursors to violence look like, improved governance, and looking historically at where violence has occurred, “because past is often prologue when it comes to communal, ethnic, and political violence.”

“All these ideas about building peaceful societies are difficult, slow, and unglamorous,” Sigal said. “In other words, maybe not primarily about scale, and maybe not primarily about technology.”
