Why It’s So Hard to Stop Discriminatory Ads Online

As people live more of their lives online, the question of how to extend offline protections to online life becomes ever more pressing. One place this problem is evident is bullying. Schools have long had punitive systems in place that, though far from perfect, sought to make their classrooms and hallways safe environments. Extending those same systems to the online world has been a significant challenge—how can schools monitor what happens online and in private? And what’s the appropriate punishment for bad behavior that happens on the internet?

Another area where this offline-to-online translation has proven difficult is the set of rules that protect Americans from discriminatory advertising. The internet is chock-full of ads, many of which are uncomplicated efforts to get people to buy more home goods and see more movies and so on. But things get a lot trickier when the goods being advertised—such as housing, jobs, and credit—are those that have histories of being off-limits to women, black people, and other minorities. For these industries, the federal government has sought to make sure that advertisers do not help further historical oppression, via laws such as the Fair Housing Act and the Equal Credit Opportunity Act.

By design, many social-media companies and other websites have a ton of data about who users are and what they’re interested in. For advertisers, that makes promoting goods on those sites particularly appealing, since they can aim their ads at the narrow slice of people who might be interested in their products. But targeting, taken too far, can be the same as discrimination. And while it’s perfectly legal to advertise men’s clothing only to men, it’s completely illegal to advertise most jobs exclusively to that same group.

For businesses like Facebook and other platforms where companies advertise, that can create a challenge. They must figure out how to avoid discriminatory ads while remaining attractive to advertisers.

The weaknesses in virtual protections have become quite apparent. In the fall of 2016, a ProPublica investigation concluded that Facebook’s advertising platform had some serious deficiencies. The option for advertisers to target users based on their assigned “ethnic affinity,” the piece said, made it possible for companies to exclude entire groups of people from viewing their ads in a way that was not only ethically dubious, but also may have run afoul of civil-rights laws. While Facebook has denied any legal wrongdoing, the company announced several changes to its advertising platform in February—including renaming the ethnic-affinity designation (to “multicultural affinity”) and preventing the use of the category for ads related to housing, credit, and jobs.

Facebook and some other platforms have built advertising revenue into their business plans without, they claim, compromising their egalitarian mission statements or crossing any legal lines. They have done so by posting generic advertising agreements that require advertisers to affirm they will abide by anti-discrimination clauses. Some agreements are also intended to prevent more generic forms of scamming and false advertising. But it’s difficult to monitor whether advertisers actually comply, since ads are generally placed by algorithms rather than reviewed by people. Thus, as sites grow and bring in ever more money, these platforms must decide to what extent greater profits are worth the risk of discrimination, since the value of the advertising hinges in part on how precise the targeting can be.

Steve Satterfield, the manager of privacy and public policy at Facebook, told me that the site currently has around 4 million advertisers. When it comes to addressing targeted ads that might impinge on civil rights, Satterfield says, “it is a hard thing to identify those ads and to be able to take action on them.” That’s because not every ad that targets users based on race or ethnicity is exclusionary, and not every type of ad falls within the purview of federal civil-rights law.  

By and large, Americans have gotten used to the idea that ads are crafted to reach specific groups in specific ways: Ads for beer appear during sports games, while ads for toy stores pop up during children’s programs. Sites that cull data from users’ behavior and content offer advertisers even more customization. Aaron Rieke, a principal at the technology consulting firm Upturn, says that it’s pretty common practice for marketers to use information such as geography and census data to piece together information about racial groups—which means that platforms can enable discrimination even if they don’t give advertisers the sort of explicit “ethnic affinity” option that Facebook once did.
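To see how that kind of proxy targeting can work, consider a minimal sketch. Everything in it is hypothetical: the ZIP codes, the demographic shares, and the build_audience helper are invented for illustration and do not reflect any platform’s actual tools.

```python
# A hypothetical sketch of proxy targeting: no racial category is used,
# yet combining geography with public census data reproduces a racial
# exclusion. All figures below are invented for illustration.

# Hypothetical share of black residents per ZIP code, as one might
# assemble from public census tables.
census_black_share = {
    "60619": 0.95,
    "60614": 0.04,
    "60629": 0.30,
}

def build_audience(zip_codes, max_black_share=0.10):
    """Return the ZIP codes an advertiser could target to quietly
    exclude predominantly black neighborhoods."""
    return [z for z in zip_codes
            if census_black_share.get(z, 0.0) <= max_black_share]

# An ad "targeted by geography" that functions as a racial filter:
print(build_audience(["60619", "60614", "60629"]))  # -> ['60614']
```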

Doc Searls, the founder of ProjectVRM at Harvard, which works on issues of standards and protocols for technology, says that the approach Facebook and some of its social-media brethren take—mining users’ every interaction on a platform for data about who they are and what they are interested in—is an increasingly appealing option for advertisers, but a potentially problematic one when it comes to protecting users’ rights.

The advertising these platforms offer is a significant departure from how marketing worked for a long time, Searls says. “An important thing about advertising of the traditional kind, the kind that Madison Avenue practiced for more than 100 years, is that it's not personal. It's aimed at large populations. If you want to reach black people, you go to Ebony back in the day. And if you wanted to reach camera people, you went to a camera magazine,” he told me. “The profiling was pretty minimal, and it was never personal.”

Prior to civil-rights laws, advertisers could be blatant about who they were trying to attract or reject. They could, for instance, say that minorities weren’t allowed to move into a neighborhood, or that women weren’t invited to apply for jobs. That meant that minorities and women endured less-favorable options when it came to housing, loans, and jobs. The Fair Housing Act, enacted in 1968, and the Equal Credit Opportunity Act, enacted in 1974, made it illegal to withhold advertising for housing or credit, or to offer different terms, based on characteristics such as race, ethnicity, or sex.

These laws, along with the fact that many ads are never actually vetted by human eyes but rather run through an algorithm before posting, make the culpability of Facebook and other social-media platforms hard to determine in a legal sense. “The question of when, if ever, Facebook as the platform that carries those advertisements becomes legally complicit is complex,” says Rieke.

When it comes to assessing culpability in the realm of online discrimination, the Communications Decency Act is often used to determine whether or not internet platforms are at fault for illegal content that appears on their sites. The law, passed in 1996, essentially says that platforms that host a ton of user-uploaded content, such as Facebook, YouTube, or Craigslist, can’t generally be held responsible for a user posting something that is discriminatory, according to Olivier Sylvain, a professor at Fordham Law School.

But posting paid advertising that violates anti-discrimination laws is different, Sylvain says: “They are on the hook when they contribute one way or another in their design and the way in which the information is elicited.” One example that helps to illustrate the limits of the protections offered to companies by the Communications Decency Act (CDA) involved a website called Roommates.com. The platform, a forum to help individuals find roommates, was sued for violating the Fair Housing Act by allegedly allowing for gender discrimination in housing. A court ruled that because the site’s design required users to fill in fields about gender in order to post, it couldn’t rely on the immunity offered by the CDA as a defense. Roommates.com ultimately won its lawsuit, but the platform now makes adding information about gender optional. (Roommates.com did not respond to a request for comment.)

But a lot of the time, the role of the platform is more subtle. Often sites don’t require advertisers to perform a discriminatory act—they just fail to ensure that they can’t. And whether that makes them liable is far from settled.

One solution is that the industry could ease up on targeting. This is not as profit-unfriendly as it sounds: Searls is of the mind that increasingly specific tracking isn’t the most enduringly profitable path for advertisers anyway. “Targeting doesn't work,” he said, before adding some nuance. “I should put it this way: The more targeted an ad is, the creepier it is and the more likely people are to resist it and block it.” That creepiness factor could lead to a shift in the supply and demand dynamics of advertising, as users ramp up their use of ad-blocking software. He thinks that bad publicity about racially targeted ads is a sign of more general pushback against targeting to come.

This may well come true someday, but it seems unlikely that it will happen anytime soon. In the meantime, advertisers’ activities remain relatively unchecked. Perhaps one way to reduce discrimination is for users to be given some say. Google, for instance, has created an ad-settings page that aims to let users have some control over the profiles the company builds about them, and thus the ads that they are served. In theory, this could be a neat solution.

In practice, though, at least the early iterations of the tool proved ineffective in some ways. A 2015 study from Carnegie Mellon University investigated how the tool performed, how transparent advertisers’ practices were, and whether the opportunity for discrimination in advertising would persist despite users’ greater ability to control the ads they were seeing. What the researchers found was cause for concern. The study indicated a statistically significant difference in the ads shown to men and women whose profiles suggested they were looking for jobs, with men targeted for ads offering high-paying jobs much more frequently than women were.
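For a sense of what “statistically significant” means here, a minimal sketch follows, assuming invented impression counts rather than the study’s actual data. It uses a standard chi-squared test to ask whether simulated male and female profiles see a high-paying-job ad at different rates.

```python
# A minimal sketch of the kind of question such a study asks: do two groups
# of simulated profiles see a high-paying-job ad at different rates?
# The counts below are invented for illustration, not the study's data.
from scipy.stats import chi2_contingency

# rows: male profiles, female profiles
# columns: saw the high-paying-job ad, did not see it
observed = [[1800, 200],
            [300, 1700]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.1f}, p = {p_value:.3g}")
# A p-value far below 0.05 would indicate the gap is unlikely to be chance.
```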

Since 2015, Google’s Ad Settings page has gotten some additional updates, and a spokesperson for the company wrote in an email, “Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed. We provide transparency to users with ‘Why This Ad’ notices and Ad Settings, as well as the ability to opt out of interest-based ads.” Still, it seems that even with the best intentions, there’s much work to be done when it comes to giving users more control as the antidote to bad ads.

This shifts attention back onto the sites that host advertisements. Cynthia Dwork, a computer scientist who does research at Microsoft and at Harvard University, is trying to take a systems-based approach to studying fairness in algorithms—starting with those used for placing ads.

The initial question of her work centered on how to run a fair advertising platform. That question is difficult to answer because advertisers often aren’t targeting ads based on explicitly discriminatory information, which makes nailing down intent slippery, Dwork told me. One possibility would be for social-media companies to place more restrictions on what information can be used to target an ad. The trouble there is that they don’t want to expressly tell advertisers (their customers) what to do, or limit their ability to target audiences based on market research, so long as advertisers don’t appear to be engaging in unfair practices.

“Even defining fairness is complex,” Dwork said. She gave an example about choosing a set of applicants for a job interview. To make that selection fair, one might say the group must reflect the demographics of the country at large. But if the company’s search process weren’t fully attuned to the diversity of talent and selected only weak applicants from certain minority groups, it would effectively ensure that those applicants never get the position. In that instance, the fairness exists in appearance only. That’s why culturally aware systems are necessary, she says: they can work from a better understanding of which candidates are actually, fairly similar. She gives another example to illustrate the point: Smart minority children might be steered toward studying math, while smart white kids might be steered specifically toward finance. If an algorithm looking for promising students isn’t aware that the two groups are similar in aptitude but differ in culture, and thus in field of study, it might miss an entire group of students. A smarter algorithm would take this into consideration and view both groups of students similarly.
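Her first example can be made concrete with a toy sketch. The applicants, the scores, and the parity_only_selection function below are all hypothetical; the point is only to show how a group-level quota can look balanced while the strongest minority candidate is still passed over.

```python
# Toy illustration of "fairness in appearance only": one applicant is
# selected from each group, so the demographics look balanced, but the
# process ignores merit for the minority group. All data is invented.
applicants = [
    {"name": "A", "group": "majority", "score": 92},
    {"name": "B", "group": "majority", "score": 88},
    {"name": "C", "group": "minority", "score": 90},  # strong, yet skipped
    {"name": "D", "group": "minority", "score": 41},  # weak, yet selected
]

def parity_only_selection(pool):
    """Pick one applicant per group. Group-level parity is satisfied,
    but nothing forces the strongest minority candidate to be chosen."""
    chosen = []
    for group in sorted({a["group"] for a in pool}):
        members = [a for a in pool if a["group"] == group]
        picker = min if group == "minority" else max  # the biased step
        chosen.append(picker(members, key=lambda a: a["score"]))
    return chosen

print([a["name"] for a in parity_only_selection(applicants)])  # -> ['A', 'D']
```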

“Without a mathematics to capture values of interest to society, such as fairness, we quite literally do not know what we are building,” she told me. Dwork says that’s why she’s worried about getting it right, but there’s also a need to move quickly. “I’m concerned that the theory will be too late to influence practice, and that ‘values’ will too often be viewed as ‘getting in the way’ of science or profit,” she said.

It is hard to imagine social-media companies, which derive so much of their revenues from highly targeted advertising, doing anything that gives their customers less information to act on. Indeed, Rieke doesn’t think that the coming years will involve collecting, or selling, less data. “I don't see sites making less use of their users' data in the future for marketing purposes,” he says. That means the work of researchers such as Dwork and those at companies like Facebook will become all the more important in shaping and implementing policies that can create a more equitable internet, even as they create a more profitable one, too.

This is about more than just advertising. In 2016, the rental platform Airbnb faced accusations that hosts on its site were discriminating by refusing reservations from black users. To address this, the company has said it will put new anti-discrimination clauses in place, change booking policies, and punish hosts who improperly reject potential guests. Ride-hailing companies have faced similar accusations of discrimination by those using their platforms. On the whole, it seems that many technology-based companies have failed to consider the diversity of their users when designing and building their platforms. In order to keep growing and to retain the many different people who use their sites, they’ll have to come up with a solution—quickly.


