
The ownership and exploitation of personal identity in the new media age


Who owns the information that is subject to privacy law and regulation? Ownership is the right to exclude. If you own a piece of real estate, you can exclude others from entering it. If you own a copyright in a book, you can exclude others from copying the book.

Is the same true of personally identifiable information (PII)? There is no doubt that the collection, use and distribution of PII is a highly regulated activity in which the individual data subject has substantial rights. But does the individual own his or her PII? Can the individual exclude others from collecting, using and distributing it? Or is it owned by the companies that aggregate and exploit the information?

The ownership of PII and digital identity

PII cascades from every visit to a website, from every transaction on the Internet, from every Tweet and from every fact stored in the cloud. The information rushes effortlessly across international boundaries, and companies assiduously track it. “Real-time ad bidding”—associating online advertisements with browsing history—is fundamental to the business models of companies such as Google and Amazon, and the value of this information in Google’s hands—or Facebook’s, or Amazon’s—is directly measured by the lofty heights of their market capitalizations. A recent study concluded that since November 2010, behavioral tracking has increased 400 percent. The study found that on average, a visit to a website triggers 56 instances of data collection.
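For readers who want a concrete sense of what a single page visit can set in motion, the short sketch below is a minimal, illustrative approximation: it fetches a page and counts the distinct third-party hosts referenced by its HTML through scripts, images and iframes. It assumes the widely used requests and BeautifulSoup libraries are installed, uses example.com purely as a placeholder URL, and is only a rough static stand-in for the live network instrumentation behind figures like the 56 data-collection instances cited above.

```python
# Illustrative sketch only: counts distinct third-party hosts referenced in a
# page's HTML (scripts, images, iframes). A rough static approximation of the
# live traffic instrumentation used in tracking studies.
from urllib.parse import urlparse

import requests                # assumed installed: pip install requests
from bs4 import BeautifulSoup  # assumed installed: pip install beautifulsoup4


def third_party_hosts(page_url: str) -> set:
    """Return the set of third-party hosts referenced by the page's HTML."""
    first_party = urlparse(page_url).hostname
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    hosts = set()
    # Scripts, images and iframes are the usual carriers of trackers and beacons.
    for tag, attr in (("script", "src"), ("img", "src"), ("iframe", "src")):
        for element in soup.find_all(tag, **{attr: True}):
            host = urlparse(element[attr]).hostname
            if host and host != first_party:  # relative URLs have no hostname
                hosts.add(host)
    return hosts


if __name__ == "__main__":
    # Placeholder URL; substitute any page you want to inspect.
    found = third_party_hosts("https://example.com/")
    print(f"{len(found)} third-party hosts referenced: {sorted(found)}")
```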

Who owns this information? The answer is surprising. Customer lists are considered a classic example of a trade secret and are clearly owned by whoever assembles and maintains them. Unlawful appropriation of such information is subject to both civil and criminal sanctions. There is very little doubt that any additional information gleaned from a web visit—what options I considered, how long I was on the site, how I paid, to whom the purchase was delivered, where I was when I placed the order—would be included in the information that the web host could protect as its property.

The law is quite explicit on this point. In Europe, the Database Directive, EU Directive 96/9/EC, adopted by the European Parliament and the Council in 1996, requires member states to provide protection for databases. UK regulations implementing the Database Directive expressly state, “A property right (database right) subsists, in accordance with this part, in a database if there has been a substantial investment in obtaining, verifying or presenting the contents of the database.” Similarly, the U.S. Copyright Act provides copyright protection for compilations so long as there is copyrightable authorship in their selection or arrangement.

The individuals who generate the databases of PII that comprise their digital identities therefore do not own those databases and, in a very real sense, do not own their own identities. One might think of a person’s digital identity by analogy to a pointilliste painting. Thousands upon thousands of tiny bits of digital information about an individual, including what we have called basic facts, sensitive facts and transactional facts, can be assembled to form a picture of the individual: his likes, dislikes, predispositions and resources. The compilations of facts that comprise my digital identities are subject to ownership, but the owner is not me; it’s the compiler.

The conclusion that one does not own one’s own identity might seem jarring at first. On further reflection, though, one realizes that the lack of ownership over one’s digital identity is not very different from one’s identity outside of the digital space. Think of identity as reputation. Do I own my reputation? I have a reasonably broad opportunity to mold my reputation by word and deed, and I have legal redress if my reputation is unfairly tarnished, but I cannot prevent others from knowing who I am. Because I have no right to exclude, I do not own my identity. It is in a sense community property. The individual builds it, at least in part, but members of the community can then use it—or decide that it would be unfair or unjust to use it—whether the individual wants them to or not.

We might return here to our watershed analogy. No one owns the raindrops falling on the watershed, but when value is created by damming streams of information, it can be owned and exploited by the persons building and running the dams. Eventually, of course, the information returns to the oceanic public domain.

The exploitation of identity

If it is jarring to consider that one’s digital identity is owned by others, not by oneself, it is still more disturbing to consider that the owners can exploit that identity for their own profit. Hardly a day goes by without a new revelation about the unconsented exploitation of personal information or the unauthorized disclosure of sensitive personal information. The problem is not so much that companies have information about my purchases and habits. The problem is what they do with it and how they safeguard it.

Security breaches are distressingly common. In June, the FTC filed a complaint against Wyndham Hotels for security lapses that allowed hackers to access sensitive financial information of more than 600,000 individuals over a three-year period. In the same month, LinkedIn, eHarmony and Last.fm were all hacked, resulting in the release of millions of users' passwords, and just three months before, the credit card processor Global Payments reported that some 1.5 million Visa and MasterCard account numbers had been stolen by hackers. Even breaches of less sensitive information, such as that of Epsilon in which the e-mail addresses of millions of individuals were inadvertently disclosed, are disturbing and potentially harmful to individuals.

Many of the laws respecting PII can thus be understood as limitations on the ownership rights of the persons, usually businesses, that assemble them. One might analogize the collections of personally identifiable information that comprise one’s digital identities to what the law calls “dangerous instrumentalities.” It is okay to manufacture and own them; indeed, the creation of digital images of individuals represents the generation of an important new form of wealth in the Internet Age—but the use of such information must be regulated to avoid harm or unlawful exploitation.

That this is the case is manifest in examples of privacy legislation as diverse as the EU Directive, the Fair Credit Reporting Act, HIPAA, HITECH, Gramm-Leach-Bliley and the Massachusetts Data Security Regulations. In each case, the assembly of personal information about individuals is not prohibited. Quite to the contrary, such assemblies are affirmatively encouraged, particularly in the areas of financial services and healthcare, since the information can greatly improve the efficient and effective provision of financial and health services. However, the use of the information, once assembled, is regulated and controlled.

Some modest proposals

For many, Internet commerce is like visiting a foreign country where the customs and etiquette are new and often disconcerting. To complicate matters further, the citizens of this country—the digital service providers—are making up their customs as they go along, and the pace of innovation, particularly in the realm of the collection and use of personally identifiable information, has outpaced the development of shared expectations as to what is acceptable and what is not.

A great deal of mischief has been caused by the tendency of companies to be cagey about their collection and exploitation of PII, leading to surprise and outrage when the facts come to light. A guidebook, with information about what to expect in terms of the collection and use of personal information, is important. Many of the recommendations of the FTC Framework and the White House’s proposed Privacy Bill of Rights can be understood and applauded in this context.

There is, however, a risk of regulatory overreaction to the collision of cultural expectations. Thanks to the collection of personal information and the assembly of digital identities, consumers obtain better and more personalized service than would otherwise be possible. Consumers are spared many of the irrelevant advertisements with which they would be bombarded, commercial television-style, if the information were not collected, and in the end, it is the exploitation of such information that makes so many free Internet services possible. The assembly of such information thus creates enormous new wealth in companies both large and small and facilitates economic activity for all the other enterprises that take advantage of the information. One does not want to kill the goose laying these golden eggs.

Beyond the economic risk, from the author’s parochial standpoint as an intellectual property lawyer, one must consider whether proposals for the creation of new property rights in personal data can be reconciled with long-established principles of intellectual property law that define the public domain. Without a rich source of raw material in the public domain, the creation of new inventions and works of authorship would be curtailed. Surely no one imagines that an individual should be able to prevent an author from using facts about the individual’s life to create a biography, but how does one distinguish this from the assembly of a digital identity by an Amazon? The author, like Amazon, wants to use the information for her own benefit and to sell the information to third parties to earn a profit. Can one make a reasoned distinction between the two activities? In a sense, Amazon’s activity is more benign, since it is quite unlikely to use the information in a way that would offend the individual. Furthermore, if there are “bad facts” about an individual—that he doesn’t pay his debts, that he has committed fraud—should he be able to suppress this information on the ground that it is his “property”?

There are compelling public policy reasons why certain types of information should not inform certain decisions. Race and religious affiliation are perhaps the most obvious examples. Making decisions about the extension of credit, employment, housing, lodging, transport, access to health services and other fundamental needs on the basis of race is, and should be, illegal. This is not because an individual owns the fact that she is of a particular race; it is because discrimination on the basis of race is heinous for more general social and historical reasons. In other words, it is the use of such information, not its assembly and distribution, which merits legal control.

We come, then, to my modest proposal: Wherever possible, regulate the potential misuse of personally identifiable information as opposed to its assembly and dissemination. Section 604(g) of the FCRA is an example of such regulation. It generally prohibits creditors from obtaining and using medical information in connection with any determination of the consumer’s eligibility, or continued eligibility, for credit. On the other hand, the statute contains no prohibition on creditors obtaining or using medical information for other purposes that are not in connection with a determination of the consumer’s eligibility, or continued eligibility for credit. One can imagine myriad purposes for obtaining such information, not the least of which is the provision of effective medical treatment.

Proposed regulation that prohibits entirely the collection of certain types of information, or permits it only with prior consent, seems to this author to be overbroad. The proposed Geolocational Privacy and Surveillance Act (S1212) and proposed Location Privacy Protection Act of 2011 (S1223), both of which would require prior express consent to the collection of geolocation information, and the proposed Best Practices Act of 2011, which would, among other things, require prior consent to collect sensitive personal information, are examples. It is not easy for me to understand how or why I can legitimately prevent people from knowing that I’m walking through a particular mall at a particular time when that fact is obvious to everyone else in the mall. On the other hand, the use of that information to, for example, rob my home when I’m not there, seems legitimately actionable. Again, it’s the use of information—not its collection—that deserves regulation.

Identity, and the personal information on which it is built, is inherently relational in nature. It is the opposite of anonymity, and one cannot have it both ways; you can achieve anonymity by refusing to interact with others, but once you begin to interact, you necessarily lose your anonymity and gain an identity, as you are perceived by and in your relations with others. At that point, your identity is not, and by necessity cannot be, your private possession.
