Two events this week got me thinking about privacy harms. Now, I know the mere mention of “privacy harms” brings with it a lot of baggage and a ton of research, legal uncertainty, opinion and, well…ambiguity. I couldn’t possibly link to all the countless scholars, lawyers and activists who have tackled the question of what does, or does not, count as a harm, but I couldn’t help thinking that we may have seen some tangible harms this week that we can all agree on.
In the privacy community, perhaps the most obvious incident involved OfficeMax. It was heavily reported that a couple who lost their daughter in a car crash last year received a piece of mail from the office supply chain with “Daughter Killed In A Car Crash Or Current Business” on the second line of the address.
I cannot imagine what they must have felt.
Let that sink in for a second. A father lost his daughter and OfficeMax or its provider used this fact to select how to market to him. This happens all of the time. But in this case, OfficeMax wrote what it was doing on the envelope, reducing the Seay family to some category peddled by data brokers. On paper, for the world to see.
Can we safely say this was a tangible harm?
Most, I think it’s safe to say, would agree this was traumatic for the family, and it certainly makes the spotlight already shining on the data broker industry glow intensely hot. According to OfficeMax, it “unintentionally” purchased the information from a third-party data broker. As the IAPP’s Trevor Hughes, CIPP, pointed out, this could have been a case where the company was trying to be sympathetic. But without any transparency from much of the data broker industry, as Calo said, being placed within a highly sensitive marketing column like that is beyond Kafkaesque.
If you’ve read one of my past posts on Big Data drinking games, you’ll know I am deliberately trying to avoid using the word “creepy” here. A couple of years back, scholar and legal expert Adam Thierer wrote an excellent piece, “On ‘Creepiness’ as the Standard of Review in Privacy Debates.” In it, he writes:
I think we’d be better served by moving privacy deliberations—at least the legal ones—away from “creepiness” and toward a serious discussion about what constitutes actual harm in these contexts. While there will be plenty of subjective squabbles over the definition of “harm” as it relates to online privacy/reputation, I believe we can be more concrete about it than continuing these silly debates about “creepiness,” which could not possibly be any more open-ended and subjective.
So is the above OfficeMax incident an example of actual harm? I’d like to hear your thoughts on this.
The second event this week that caught my attention took place amid the chaotic, violent protests in Kiev, Ukraine. If you were in a specific geolocation with a smartphone, you received this message: “Dear subscriber, you are registered as a participant in a mass riot.” The nation recently passed harsh laws against public gatherings—including jail sentences of up to 15 years. Putting the tenets of democracy aside, the geolocation of an individual in Ukraine pretty obviously puts them in serious peril in this case, regardless of whether they’re breaking a law (an anti-democratic law, at that) or not.
Obviously I support the right of the protestors to assemble, but what if you were just a random person going about your day on the side of the street and received that text message? Does the dark pit of fear that developed in your stomach count as harm? What if you were falsely accused based simply on your phone’s location?
Though this is an extreme case, it gives ammunition to those who argue that questions of data accuracy, combined with state surveillance, put everyone in potential peril. By no means am I arguing that the U.S. will eventually end up in a similar situation. But the NSA disclosures have generated a lot of mistrust of the government and U.S. businesses around the world. It’s also important, in my opinion, to look at these two situations—one in the private sector with the data broker industry, the other in the public sector with state surveillance—and really think about what kind of society we want to live in and what kind of landscape we are facing.
Do the benefits really outweigh the potential harms?
In a recent interview with PBS Newshour about the balance of privacy and Big Data, Future of Privacy Forum Director Jules Polonetsky, CIPP/US, said, “If companies want to be intimate with us, they need to be transparent with us.” I think something similar could be said about the government. If the government wants to protect us, they need to be accountable to us.
This lack of transparency and accountability only breeds mistrust—whether it’s manifest in headlines or in the streets—and could ultimately lead to real harm, not just to individual consumers and protestors, but to entire businesses and governments alike.