When Research Data Escapes the University

If there are 20 people in a coffee shop, then there are at least 21 cameras: one embedded in each phone and, usually, at least one tucked high in a corner. What you say may be overheard and tweeted; you might even appear in the background of another patron’s selfie or Skype session. But that doesn’t stop even the most privacy-wary people from entering coffee shops. They accept the risk inherent in entering a public place.

This notion—of a “reasonable” expectation of privacy—guides researchers hoping to observe subjects in public. But the very idea of what’s reasonable is a complicated one. Faculty at three universities—Duke, Stanford, and the University of Colorado Colorado Springs—are currently facing backlash after creating databases built from surveillance footage of students as they walked through cafes and across college campuses. You might reasonably expect to be overheard in a coffee shop, but that’s different from suddenly becoming a research subject, part of a dataset that can live forever.

Ethics boards approved all three research projects, which used student data to refine machine learning algorithms. Duke University researcher Carlo Tomasi declined an interview with The Atlantic, but said in a statement to the Duke Chronicle that he “genuinely thought” he was following Institutional Review Board (IRB) guidelines. For their research, he and his colleagues placed posters at all entrances to the public area, telling people they were being recorded and providing contact information should they want their data erased. No one reached out, Tomasi told the Chronicle.

But when the parameters of his research changed, Tomasi admitted, he didn’t inform the IRB. For minor changes, that’s allowed. But Tomasi had permission to record indoors, not outdoors. And more significantly, he had promised to allow access to the database only upon request. Instead, he opened it for anyone to download, he admitted to the Chronicle. “IRB is not to be blamed, as I failed to consult them at critical junctures. I take full responsibility for my mistakes, and I apologize to all the people who were recorded and to Duke for their consequences,” his statement reads.

Duke ultimately decided to delete the dataset related to the research. Stanford did the same with a similarly derived dataset its researchers created from patrons filmed at a San Francisco cafe. At UCCS, where researchers recorded students to test identification software, the lead researcher claims the team never collected individually identifying information. Researchers for the Stanford and UCCS projects didn’t respond to requests for comment. In separate statements, each university reiterated that ethics boards had approved the research and underscored its commitment to student privacy.

But university ethics boards are inherently limited in scope. They oversee certain narrow aspects of how research is conducted, but not always where it ends up. And in the information age, the vast majority of academic research goes online, and what’s online lives forever. Other researchers, unbound by IRB standards, could download a database and use it however they wish, introducing all manner of consequences for people who have no way of being informed or offering consent.

Those consequences can reach far beyond what researchers imagine. Adam Harvey, a counter-surveillance expert based in Germany, found more than 100 machine learning projects across the globe that cited Duke’s database. He created a map that traces the spread of the dataset around the world like a flight tracker, with long blue lines extending from Duke University in every direction. Universities, startups, and institutions worldwide used the dataset, including SenseTime and Megvii, Chinese surveillance firms linked to the state repression of Muslim minorities in China.

Every time a database is accessed for a new project, the intention, scope, and potential for harm change. The portability and pliability of data meet the speed of the internet, massively expanding the possibilities of any one research project and scaling the risk far beyond what any one university can be held accountable for. For better or worse, ethics boards can regulate only the intentions of the original researchers.

The federal government’s Office for Human Research Protections explicitly asks board members not to consider “possible long-range effects of applying knowledge gained in the research.” Instead, they’re asked to focus only on the subjects directly involved in a study. And if those subjects are largely anonymous people briefly idling in a public space, there’s no reason to believe they’ve been explicitly harmed.

“It’s just not what [the IRB] was designed to do,” said Michelle Meyer, a bioethicist who chairs the IRB Leadership Committee at Geisinger, a major healthcare provider in Pennsylvania. As she explains, the IRB’s main privacy concern for publicly observed research is whether subjects are individually identified, and whether being identified places them at risk of financial or medical harm. “In theory, if you were creating a nuclear bomb and…[conducting research that] involved surveying or interviewing human subjects,” she says, “the risks that the IRB would be considering would be the risks to people immediately involved in the project, not the risk of nuclear annihilation downstream.”

Opening up datasets to other researchers increases those downstream risks. But the IRB may not have much jurisdiction here: Neither data sharing nor the after-the-fact application of data is itself research, so it’s “sort of in this weird regulatory twilight zone,” Meyer explained.

Casey Fiesler, an assistant professor in the Department of Information Science at the University of Colorado Boulder, writes on the ethics of using public data in research studies. Fiesler proposes a system for scrutinizing database access that’s similar to copyright’s fair use. Fair-use determinations are subjective, she notes, but they rest on standards for how the requester plans to use the material.

“Having some kind of gatekeeper for these datasets is a good idea,” she says, “because [requesters] can have access if you tell us what you’ll do with it.” There are similar rules in place for open source software and Creative Commons intellectual property: a permission-based system in which requesters can use media only for noncommercial work that builds on the original without copying it, and are liable if they lie about or misrepresent their intentions. Those are subjective metrics that don’t immediately jibe with the highly bureaucratized academic landscape, but they can be useful, at least, in trying to imagine cutting off downstream harm. “This isn’t to suggest [burdensome] rules, but it suggests a way that you should take certain contextual factors into account when you’re making decisions about what you’re going to do,” Fiesler said.


