They wanted to expose how Facebook allowed advertisers to discriminate based on race. So they used Facebook’s advertising tools to exclude certain users from seeing the ad: African Americans, Asian Americans, and Hispanics.
This was clearly a violation of the Fair Housing Act, which says real estate ads can’t show a preference for people based on race, color, religion, sex, disability, or familial status.
Housing discrimination happens all the time. But the shocking thing here was just how visibly the tool let advertisers discriminate. On Facebook, all they had to do was check a box.
It was reminiscent of the overt systems that have forced housing segregation in America, past and present. In the 1940s, the Federal Housing Administration refused to back home loans for black people — and for those who wanted to live near black people.
Nowadays, it’s usually more nuanced. As housing advocate Erin Boggs pointed out, there are towns that give affordable housing preference to people who already live there — which often means predominantly white neighborhoods stay that way.
After the ProPublica story was published, Facebook said they’d work on a fix.
That was a clearly visible problem. The scary problems are invisible.
Every time you “like” something on Facebook or search something on Google, these companies learn a little bit more about you — and provide a slightly more personalized experience based on that. And people of different races and classes have different needs and constraints, so we all use the internet differently. In turn, we get an even more personalized experience.
This cartoon is about how those personalized experiences affect whether or not we get real-world opportunities — like the chance to move to a better neighborhood. This story could just as easily have been about job listings or college advertisements, but it seems apt to talk about how algorithms have sorted us online and, in turn, in real life.
“When the nature of the discrimination comes about through personalization, it’s hard to know you’re being discriminated against,” said Michael Tschantz, a Carnegie Mellon researcher.
For example, let’s say a developer builds hipster condos and creates an ad — and targets Pearl Jam fans.
He does this because he thinks it fits the “aesthetic.” That in itself is fine.
But if most Pearl Jam fans are white, then the ad has effectively excluded nonwhite people from seeing it.
This is the example ACLU attorney Rachel Goodman gave when describing why this problem is so hard: “The Facebook advertising stuff is important. But I think there’s huge potential even without those explicit categories — either intentionally or unintentionally.”
This could affect a wide swath of marginalized people, but the group that is especially vulnerable to housing discrimination is black Americans. Racist policies of our past created two divergent Americas: largely middle-class neighborhoods for white people and lower-class neighborhoods for black people. And the civil rights movement did little to help black families overcome this discrimination and get out of poor neighborhoods.
How we know algorithms aren’t giving us equal opportunities: an experiment
Last year, Tschantz and fellow Carnegie Mellon researcher Amit Datta wanted to know more about how these advertising platforms made decisions about users.
“There’s not a very good understanding or transparency about how these decisions are made,” Datta told me.
They decided to test the Google Ad ecosystem, which uses information it gathers about you to show personalized ads. The researchers wanted to test a simple question: How do Google’s assumptions about us affect the ads we see?
The only way to find this out was to simulate the experiences of 500 internet users with a computer program. For each of these simulated users, the researchers found ways to tell Google about themselves. For example, there was a toggle in the settings menu to set each user’s gender — which let them test whether men and women were treated differently when looking for a job.
From there, the simulated users visited two news websites, and the researchers observed what kinds of ads Google showed them. What they found was clear discrimination based on gender: the male users were much more likely than the female users to see ads for executive-level career coaching.
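The logic of that experiment can be sketched as a small simulation. Since we can’t query Google’s real ad system here, `serve_ad` below is a made-up, deliberately biased ad server (its rates are invented); the point is the audit structure: create agents that differ only in gender, collect ad impressions, and check whether the gap is larger than chance.

```python
import random
from math import sqrt

random.seed(0)

# Hypothetical ad server: shows the executive-coaching ad to men more
# often than to women. The rates are invented for illustration only.
def serve_ad(gender):
    rate = 0.18 if gender == "male" else 0.05
    return random.random() < rate

def run_audit(n_per_group=500):
    """Simulate n agents per gender and count who saw the ad."""
    return {g: sum(serve_ad(g) for _ in range(n_per_group))
            for g in ("male", "female")}

def two_proportion_z(x1, x2, n):
    """Two-proportion z-statistic: is the gap in ad rates beyond chance?"""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = sqrt(2 * pooled * (1 - pooled) / n)
    return (p1 - p2) / se

counts = run_audit()
z = two_proportion_z(counts["male"], counts["female"], 500)
print(counts, round(z, 2))  # |z| > 1.96 suggests a statistically significant gap
```

A real audit would replace `serve_ad` with actual browsing sessions, but the comparison step is the same.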
Testing gender differences was one thing, because Google offered users the option to set their gender. But Google never asks for your race. Instead, it likely makes inferences based on your internet activity and social connections.
So if you searched for “Pearl Jam,” the algorithm might decide to group you with others who did the same — who happen to be mostly white. And the next time someone advertises a hipster condo development, people of color may not get the opportunity to see that ad.
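Here is a toy illustration of how interest-based targeting can act as a racial proxy even though race is never collected. All names, interests, and demographics below are invented; race is listed only so we can inspect the outcome — the targeting itself never sees it.

```python
# Invented user base: the targeting logic only reads "interests".
users = [
    {"name": "A", "interests": {"pearl jam", "cycling"}, "race": "white"},
    {"name": "B", "interests": {"pearl jam"},            "race": "white"},
    {"name": "C", "interests": {"hip hop", "cycling"},   "race": "black"},
    {"name": "D", "interests": {"salsa", "pearl jam"},   "race": "hispanic"},
    {"name": "E", "interests": {"hip hop"},              "race": "black"},
    {"name": "F", "interests": {"k-pop"},                "race": "asian"},
]

def audience(users, interest):
    """Everyone an ad reaches when targeted by a single interest."""
    return [u for u in users if interest in u["interests"]]

def racial_share(group, race):
    return sum(u["race"] == race for u in group) / len(group)

reached = audience(users, "pearl jam")
print(racial_share(users, "white"), racial_share(reached, "white"))
# The overall user base is 1/3 white, but the "pearl jam" audience is 2/3 white.
```

No one checked a box excluding anyone — the skew falls out of who holds the targeted interest.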
How we dealt with discriminatory housing ads in newspapers
One good way to think about how to fight these discriminatory ads is by looking at how we fought them in the past.
In the late 1980s, nearly all the people pictured in the New York Times’ real estate ads were white, which implied those neighborhoods were exclusively for white people.
Under the Fair Housing Act, a person of color could say that an ad was exclusionary — and they wouldn’t have to prove whether or not the advertiser intended to discriminate. (This part of the law was strengthened in a recent Supreme Court case.)
But going after each individual advertiser would have been like playing whack-a-mole. So in 1989, four black professionals sued the Times, arguing the newspaper itself should be liable for those discriminatory ads. The Times argued that its right to publish the ads was protected by the First Amendment, but a federal court disagreed and held the newspaper liable.
This eventually led to the Times agreeing to review ads for racial preference. It was a big win for fair housing advocates.
“It’s much easier to shift the culture on things like advertising when you can attack the problem upstream rather than downstream,” said Tom Silverstein, a lawyer for the Fair Housing and Community Development Project.
The two ways we dealt with them on the internet
Building on that New York Times case, courts established two general standards for dealing with discriminatory housing ads on the internet.
The first is that websites are protected by something called the Communications Decency Act, which says that they aren’t responsible for user-submitted content. So if someone posts a housing ad on Craigslist that says “white only,” the courts said in 2006 that Craigslist isn’t liable — and that makes it impossible to tackle this problem “upstream.”
But the second standard is that websites can’t give users the specific tools to discriminate.
In 2008, a housing ad site, Roommates.com, had a feature that asked users if they had a racial preference for a roommate. The courts said that feature wasn’t protected by the Communications Decency Act and therefore made the site liable.
But here’s the kicker: Roommates.com also had a freeform text box, like Craigslist, that allowed users to put in additional comments. And in that area, the courts said users could write whatever they wanted — including “white roommate preferred” or “Pearl Jam fans only” — and the website wouldn’t be liable.
This framework is helpful for dealing with blatant discrimination. But today’s discrimination is more pernicious.
When Facebook was caught giving advertisers the ability to discriminate based on race, it was a rare moment in which advocates could point to explicit tools that could make housing discrimination worse. Tschantz, the Carnegie Mellon researcher, said these cases are relatively easy problems to solve. We can stop humans from blatantly discriminating.
But play with the interactive below, which lets you target the hipster condos, and you can see how our cultural inclinations make it easy to use these types of tools to discriminate.
“I do think we’re entering an era in which discrimination can be more pernicious,” Tschantz said.
Most of the experts I talked to said that’s the hard problem. Tackling it upstream requires us to figure out when discrimination is happening — and then to work with companies like Google and Facebook to stop it.
“I’m interested to see how they do it, to become a model for other internet providers to do the same thing,” said Joe Rich, co-director of the Fair Housing and Community Development Project.
This court case could help us know if algorithms are discriminating
Even if we could see inside these targeted advertising algorithms, that wouldn’t necessarily tell us whether they’re discriminating.
The way to find out is the same way we find out whether a real estate agency is discriminating: you send in test users of different ethnicities to use the system, and then you see if they’re treated differently.
“Outcomes testing is the way to understand whether discrimination is happening,” said Goodman, the ACLU lawyer.
In both the real-life and digital versions, the testers don’t care about intent. They only care about whether the algorithm or the real estate agency has a bias.
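That paired-testing idea can be sketched in code: build two profiles that are identical except for one signal, feed both to the system under test, and flag any difference in outcome. The `rank_listing` function below is a deliberately biased stand-in, not any real platform’s algorithm, and its fields are invented.

```python
# Paired (matched) testing: two profiles identical except for one attribute.
# `rank_listing` stands in for the opaque system being audited; it is a
# deliberately biased toy function invented for this illustration.
def rank_listing(profile):
    score = profile["income"] / 1000
    if "pearl jam" in profile["interests"]:  # interest acting as a racial proxy
        score += 10
    return score

def paired_test(base_profile, attribute, value_a, value_b, system):
    """Return the outcome gap between two otherwise-identical profiles."""
    a = dict(base_profile, **{attribute: value_a})
    b = dict(base_profile, **{attribute: value_b})
    return system(a) - system(b)

base = {"income": 60000, "interests": set()}
gap = paired_test(base, "interests", {"pearl jam"}, {"hip hop"}, rank_listing)
print(gap)  # a nonzero gap means differential treatment, regardless of intent
```

Note that the test never asks *why* the system scores the profiles differently — just like real-estate testers, it only measures the outcome.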
That’s what two researchers — the University of Michigan’s Christian Sandvig and the University of Illinois’ Karrie Karahalios — sought to do. They wanted to create a computer program that tests whether real estate websites discriminate against people by race.
But when the researchers tried to conduct these tests, they realized doing so would be a crime.
That’s because they’d have to create multiple fake profiles to test discrimination for various demographics, and that would violate these websites’ terms of service — a crime under the Computer Fraud and Abuse Act.
That’s how the fight came to this front.
The ACLU this year filed a complaint on behalf of those researchers, as well as others, in Sandvig v. Lynch. They are challenging the law that makes it a crime to violate a website’s terms of service because, as Goodman said, “it creates a chilling effect for research into the kind of discrimination we’re talking about here.”
While this case is crucial for empowering outside entities to keep these systems in check, it could be another version of whack-a-mole. Much like before, the question is how to stop these ads “upstream,” before they reach consumers. That’s why several advocates and researchers I talked to said it’ll be crucial to figure out how these companies can test their own systems.
“These large-scale systems are getting so complicated that [companies are] having a hard time governing them,” Tschantz said. “What I’m hoping happens moving forward is these companies develop internal mechanisms before they are actually affecting outside users.”
And if you’re really optimistic, you could see how these tools could actually work toward solving the very problems we’re worried about.
For example, it’s been incredibly difficult to help black Americans leave neighborhoods of concentrated poverty. That’s worrisome because research now shows that living in those places negatively affects virtually everything: health, education, and even happiness. In fact, a landmark study by NYU’s Patrick Sharkey showed that, even after the civil rights movement, the share of black children who grow up in very poor neighborhoods hasn’t changed.
But as Silverstein said, “Some of these tools, if they were being used in a civil rights conscious framework, could actually be helpful for breaking down barriers to segregation.”
In other words, these algorithms have the power to further entrench where we live — which means we can also design them to push the other way.