More Privacy Still Isn’t Privacy

The best measure of whether a company cares about privacy is what it does by default. Many apps and products are initially set up to be public: Instagram accounts are open to everyone until you lock them, and the whole world can see who you split utilities with until you take your Venmo private. Even when companies announce convenient shortcuts for enhancing security, their products never become truly private. Strangers may not be able to see your selfies, but there's no way to untether yourself from the larger ad-targeting ecosystem.

That lack of definitive privacy has come to the fore over the past few weeks as Google and Amazon each announced shortcuts to mitigate “always on” tracking—if users choose to enable them. Amazon Echo users can clear 24 hours of stored voice commands by saying, “Alexa, delete what I said today.” Google announced at its I/O developer conference in early May that Google Maps will eventually feature an “incognito mode” that turns off tracking. While it’s enabled, users’ locations won’t be added to their profiles or used to inform ad data. (Google declined to comment for this article; Amazon confirmed the details of its history-clearing feature but did not elaborate further.)

While the packaging for these features is shiny and new, the substance behind them is not. Users could always use Maps while logged out of their Google profiles and manually delete stored Echo recordings by logging in to their Amazon accounts. The recently announced features are faster and require less digging through menus, but privacy and ethical-design experts are unimpressed. The new options are useful, they argue, but do little to move the needle.

Mona Sloane is a professor at the Tandon School of Engineering at New York University, where she researches ethical design principles in engineering and AI. The first problem with privacy features that can only pause, not stop, surveillance, she argues, is that users have to enable these features themselves. “Outsourcing that is a way of circumnavigating and avoiding responsibility,” Sloane says. “It’s a way of maintaining the core business model, which is generating as much data as possible.”

For both products, the default remains the same: You can pause tracking, but you can’t stop it. Users can’t tell Alexa never to store their Echo commands in the first place, nor can they preemptively tell Google never to track them while they’re logged in to the Maps app.

The new shortcuts seem targeted at a user base that’s fed up after two years of big-tech privacy breaches and surveillance scandals. In April, Amazon Echo users were shocked to find out that their voice recordings are collected and sent to human contract workers for analysis. Many Google Nest owners learned via a February tweet that their products had shipped with functional microphones. And Facebook has been trying to repair its public image since news broke last year that Cambridge Analytica had harvested user data for a national influence campaign.

[Read: Behind every robot is a human]

Amazon and Google’s feature announcements, one privacy expert argues, are part of a Silicon Valley campaign to regain trust. “It’s privacy as a promotional tool,” says Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project at the Urban Justice Center. Cahn says that consumers’ perceptions of risk and reward when they’re shopping for smart products are changing. With each new scandal, wary shoppers become more convinced that it’s not worth wagering their privacy to use tracking software. These incognito-style moves are, in Cahn’s opinion, designed as damage control to get back in buyers’ good graces.

Cahn described shoppers’ new caution as an “existential threat” to tech companies. “They’re fearful of what lawmakers will do,” he says, “and they’re trying to win back our trust with these measures that create the illusion of privacy but don’t threaten their core business model.”

Those business models, some scholars argue, often become tech companies’ ethics, shaping how they design their products. Ben Wagner, a professor at the Vienna University of Economics and Business, has found in his research that companies do embrace ethical principles after privacy backlashes. But those principles don’t necessarily reflect the values that consumers hold. Wagner wrote about the risks of this disconnect in 2018, noting that firms regularly engage in “ethics washing” and “ethics shopping,” phrases borrowed from an earlier EU report on governing AI.

Both terms are useful here. Ethics shopping is when companies pick and choose self-guided frameworks to uphold based on what suits their bottom lines. It’s only possible when regulation over an industry is weak or patchwork, as it is with AI and data collection. Ethics washing refers to the tactics companies employ to avoid substantial oversight, such as announcing advisory boards, rolling out token measures like incognito modes, and even calling for the government to (lightly) rein in the industry. Such measures are often symbolic: The advisory boards may or may not have the authority to influence products and can be terminated without notice. The stopgap measures still require users to be vigilant about their settings. And a company can call for government oversight while, as Microsoft recently did, lobbying against specific policies it doesn’t want.

Tech companies are moving to become even more deeply embedded in our lives, which will only amplify the risks posed by security breaches. Google has filed patents for speakers that listen to you brushing your teeth and smart cameras that scan your clothes to make product recommendations. Amazon has patented technology that will infer your health and emotional state based on what it overhears and offer product recommendations accordingly.

[Read: Welcome to the age of privacy nihilism]

With these devices designed to collect and store more data than ever, Sloane says, it’s important to spotlight what’s working well, especially tools that are developed in consultation with users and display a “notion of care for all participants.” Good privacy tools will let users orient themselves within the data ecosystem with their own ethical principles as a guide. They will leave it up to users whether they want their data collected, how they want it reused or appropriated, and, most crucially, whether they want to exist in the ecosystem at all.
