For evidence of this, one need only look at Facebook and its ad targeting options.
In advance of May 25, Facebook asked its users to tell it whether any “political, religious, and relationship information” they had shared with the social network should continue to be stored or displayed. This was clearly done to ensure compliance with special GDPR rules around sensitive information, including information that has human rights implications.
As the ICO explained, “This type of data could create more significant risks to a person’s fundamental rights and freedoms, for example, by putting them at risk of unlawful discrimination.”
But an investigation conducted by The Guardian and the Danish Broadcasting Corporation found that while Facebook gave users the ability to control what would happen to the political, religious or relationship information they had explicitly shared, the world’s largest social network still allows advertisers to target users based on interests it infers from their behavioral data, such as the Facebook Pages they like.
In some cases, these inferred interests could allow Facebook and advertisers to target users based on information they opted not to have the company retain or display.
According to Facebook, however, the ad interests it generates are different from the explicit associations users provide.
“Like other internet companies, Facebook shows ads based on topics we think people might be interested in, but without using sensitive personal data,” the company told The Guardian. “This means that someone could have an ad interest listed as gay pride because they have liked a Pride-associated page or clicked a Pride ad, but it does not reflect any personal characteristics such as gender or sexuality.”
As Facebook sees it, “Our advertising complies with relevant EU law and, like other companies, we are preparing for the GDPR to ensure we are compliant when it comes into force.”
The company does offer users the ability to remove some of their ad interests, but it’s not clear how many users know about this option, or how many are even aware that Facebook uses actions such as Likes to generate these inferred interests in the first place.
From this perspective, there’s a debate to be had about Facebook’s position and whether it truly represents GDPR compliance. Specifically, Facebook’s position seems to implicate Article 22 of the GDPR, which forbids any “decision based solely on automated processing, including profiling, which produces legal effects concerning [a data subject] or similarly significantly affects [the data subject].”
This is going to be a big battleground. We talk about inferred special category data in the context of the Article 29 Working Party guidelines on profiling here > https://t.co/TjCHqOvcBA (open access) https://t.co/XZB5ypjELs
— Michael Veale (@mikarv) May 16, 2018
The letter of the law versus the spirit of the law
The Guardian’s Alex Hern notes that the targeting it identified is “reminiscent of Facebook’s previous attempts to skirt the line between profiling users and profiling their interests. In 2016 it was revealed that the company had created a tool for ‘racial affinity targeting’.”
The operative phrase here is “skirt the line.” Facebook and other large companies that rely on targeted advertising to drive the bulk of their revenue have literally tens of billions of incentives (in the form of dollars) to adhere to the letter of the law but avoid adhering to the spirit of the law if it benefits them financially.
The big question is where the proverbial line is. The answer: nobody knows. For that reason, it likely won’t take long for early battles to emerge over what the GDPR actually requires.
For ad-supported companies like Facebook, the outcome of those battles could very well determine the fate of their businesses as they currently exist.