Privacy Advisor

The healthcare privacy balance

November 1, 2012

Editor’s Note: In the following articles, experts share perspectives on the questions and challenges surrounding healthcare IT and privacy.

Seeking a difficult balance: The limits of privacy in the emerging healthcare IT ecology

By John Christiansen

It’s not always easy to strike the right balance between privacy and other values. In particular, because privacy is all about controlling access to personal information, it tends to be in tension with the value of information availability. This tension can become outright opposition in some cases. This article discusses two healthcare situations in which that is the case: one at the operational level, involving emergency access to information, and the other at the policy level, involving the shutdown of the HIPAA unique patient identifier by privacy advocates. The latter, in particular, is a story worth recalling because it may have the unintended consequence of increasing risks to personal information in the emerging nationwide health information network.

When discussing information security risks, it is probably easier to speak in terms of confidentiality rather than privacy. The three standard objectives of information security are confidentiality, integrity and availability—the “CIA triad.” Privacy overlaps with confidentiality in particular, as both have to do with control of access to personal information. Privacy can be defined as the legally recognized right of an individual to limit access to personal information, while confidentiality can be defined as a condition in which a party is subject to obligations to control access to information. Confidentiality, therefore, enforces the personal information access limitations that privacy laws and individual choices define.

Availability, on the other hand, is all about providing information where and when it is needed, both complete and accurate. Taken to its extreme, availability would not be constrained by any limitations other than the demands of the user. Availability and confidentiality are therefore in tension and may well come into opposition, and the question then becomes how to resolve the conflict.

Almost all security requirements are risk-based, meaning that some degree of risk is inevitable and must be accepted. Conflicting security objectives can therefore be resolved by balancing the relative tolerances for the risks associated with each objective. For example, it might be appropriate to accept a higher level of confidentiality risk if the harms confidentiality errors are likely to cause are materially less than the harms likely to be caused by availability errors. This can be demonstrated with an example from healthcare treatment operations.

The case for weak passwords in the emergency room

For healthcare purposes—and really, for almost all purposes—harms to individual health and safety are, and should generally be, ranked the most important to avoid and, if material, certainly outweigh reputational or financial harms. This suggests a low tolerance for risks of availability errors in systems used to support real-time diagnosis and treatment: A failure to have necessary information available might cause a mistaken diagnosis and kill someone. A lower risk of availability errors might therefore be appropriately balanced against an increased risk of confidentiality errors, which might result in reputational or financial harm but are not likely to result in death.
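
To make this balancing concrete, it can be sketched as a simple expected-harm comparison. The following Python fragment is a minimal illustration only; the likelihood and impact figures are invented assumptions, not values drawn from any actual risk assessment.

```python
# A minimal sketch of risk-based balancing. Every likelihood and impact
# number below is hypothetical, chosen only to illustrate the tradeoff.

def expected_harm(likelihood: float, impact: float) -> float:
    """Expected harm = probability the error occurs * cost if it does."""
    return likelihood * impact

# Hypothetical ER clinical system: a confidentiality error (breach) causes
# reputational or financial harm; an availability error can cost a life,
# so its impact is weighted far more heavily.
confidentiality_harm = expected_harm(likelihood=0.10, impact=50_000)
availability_harm = expected_harm(likelihood=0.02, impact=5_000_000)

if availability_harm > confidentiality_harm:
    print("Accept more confidentiality risk; prioritize availability.")
else:
    print("Accept more availability risk; prioritize confidentiality.")
```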

The need for this kind of balancing act might be found, for example, in electronic health record (EHR) access in an emergency room, which is typically controlled by password. For most purposes, it is considered a best practice to require the use of unique, frequently changed passwords mixing letters, numbers and symbols to control access to protected health information, especially the kind of detailed, often sensitive information contained in an EHR. Because such passwords are hard to crack, they tend to reduce the risk of confidentiality errors resulting in unauthorized access.

By the same token, however, strong passwords are also hard to remember. Users who cannot remember their passwords cannot access the EHR, creating a risk of availability errors. A strong password policy that reduces the risk of confidentiality errors, therefore, also increases the risk of availability errors.

In some clinical settings it might be appropriate to forgo strong passwords. In an emergency room, for example, clinical information may be needed on a literally life-or-death basis, so the tolerance for availability errors should be very low. The hospital operating the emergency room might therefore conclude that EHR access in the emergency room should not be controlled by strong passwords.

The balancing of risks would be very different for a claims processing system in the same hospital. Claims information may include highly sensitive personal information but is used for financial purposes and so does not need to be available with the same urgency as clinical information. In this setting, there would be a higher tolerance for availability errors, and a strong password policy might be very appropriate.
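
One way to express this kind of context-dependent balancing is as an explicit per-system policy, as in the hypothetical sketch below. The system names, policy fields and values are assumptions made for illustration, not recommendations for any particular configuration.

```python
# Hypothetical per-system password policies reflecting each system's
# risk balance. All names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class PasswordPolicy:
    min_length: int
    require_symbols: bool
    rotation_days: int  # maximum days before a forced password change

POLICIES = {
    # ER clinical EHR: very low tolerance for availability errors, so
    # credentials stay short and memorable (presumably paired with
    # compensating controls such as physical access restrictions).
    "er_ehr": PasswordPolicy(min_length=4, require_symbols=False, rotation_days=365),
    # Claims processing: no real-time urgency, so a strong policy fits.
    "claims": PasswordPolicy(min_length=12, require_symbols=True, rotation_days=60),
}

def policy_for(system: str) -> PasswordPolicy:
    """Look up the password policy matching a system's risk tolerance."""
    return POLICIES[system]
```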

Unintended consequences of the demise of the HIPAA patient identifier

This same tension has had some interesting consequences for national health information technology policy. The American healthcare sector has long been going through a difficult infrastructure transition, as paper-based medical records and administrative systems are gradually converted to EHRs, management information systems and claims processing systems. These systems in turn are becoming increasingly interoperable, under policy initiatives pursued by both political parties and many private organizations to establish health information exchange (HIE) functions and organizations.

These policy initiatives have brought us such noteworthy developments as the Health Insurance Portability and Accountability Act (HIPAA), known to most of us principally for its privacy and security mandates. In fact, however, those mandates were but a small part of the “Administrative Simplification” provisions, which in turn were but a small part of HIPAA, a statute intended to promote and standardize electronic healthcare claims transactions. The privacy and security provisions were almost an afterthought, added to help reassure the public that the new systems would protect their information against improper use or disclosure.

For the last decade or so, the principal public policy initiative has been development of a seamless nationwide network for HIE among interoperable EHRs. This network—probably ultimately a “network of networks” operated by various organizations—will someday, ideally, allow for on-demand delivery of information from the EHR of any healthcare provider who has treated a given patient to the EHR of any other provider where the patient needs care. For example, if I had a medical emergency in New York, my hospital would pull in information from my primary care doctor’s EHR in Seattle, the EHRs of specialists I had seen in San Francisco and Denver, and the EHR of a hospital in Los Angeles where I’d received care. This information could be crucial to my diagnosis and treatment, and my doctors and I would very much want it to be available. A nationwide HIE, therefore, should have a low tolerance for availability errors.
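
As a rough illustration of what such a “network of networks” lookup might involve, here is a minimal Python sketch. The network names, query function and record references are all invented; a real implementation would use standardized, authenticated gateway transactions. Note that the sketch simply assumes some way to refer to the patient across networks, which is exactly the problem the next paragraphs take up.

```python
# A hypothetical "network of networks" record lookup. Everything here
# (network names, query_network, record references) is invented.

from concurrent.futures import ThreadPoolExecutor, as_completed

REGIONAL_NETWORKS = ["seattle-hie", "sf-hie", "denver-hie", "la-hie"]

def query_network(network: str, patient_ref: str) -> list[str]:
    """Stand-in for an authenticated cross-network document query."""
    return [f"{network}/documents/{patient_ref}"]

def gather_records(patient_ref: str) -> list[str]:
    """Query every regional network concurrently. A nationwide HIE must
    tolerate a slow or unreachable network without failing the whole
    lookup; each miss is, in the article's terms, an availability error."""
    records: list[str] = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_network, n, patient_ref)
                   for n in REGIONAL_NETWORKS]
        for future in as_completed(futures):
            try:
                records.extend(future.result())
            except Exception:
                continue  # one network down; keep the rest available
    return records

print(gather_records("patient-123"))
```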

There are several basic problems with making information available across a network of this kind, but one of the most fundamental is this: How does a provider find information? How do you identify me uniquely, so that you can find information about me?

One possible solution was actually provided for in HIPAA, which required regulations establishing a unique patient identifier for use in claims transactions. While not intended to enable HIE, this kind of identifier could play a useful role in the projected nationwide system.

But this potential solution was stopped dead by privacy advocates. Shortly after initial hearings were held on the proposed identifier in 1998, a coalition of privacy advocates objected that it was the first step toward a national registry of citizens. There was a public uproar; Congress quickly passed legislation defunding work on the regulation, and no one has been willing to touch it since. Policy concerns about potential privacy violations thus eliminated a promising means of reducing HIE availability errors.

And an unanticipated consequence may well be a system that creates even greater privacy risks. In the absence of a universal identifier, the solution to identifying individuals for HIE is the master patient index (MPI). Names—and even names along with one or two other bits of personal information—aren’t very reliable identifiers across a large population. An MPI serving HIE for a large population therefore needs additional demographic information, which must be updated and maintained. Individuals are then identified by a more-or-less formal comparison of demographic data sets.
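
A toy sketch of what such a demographic comparison might look like follows. The field weights, match threshold and similarity measure are invented for illustration; production MPIs use far more sophisticated probabilistic linkage, typically Fellegi-Sunter-style scoring with phonetic and edit-distance comparators.

```python
# Toy demographic matching, roughly as an MPI might do it. Weights,
# threshold and similarity measure are all invented for illustration.

RECORD_A = {"name": "Jane Doe", "dob": "1970-03-02", "zip": "98101", "sex": "F"}
RECORD_B = {"name": "Jayne Doe", "dob": "1970-03-02", "zip": "98101", "sex": "F"}

WEIGHTS = {"name": 0.3, "dob": 0.4, "zip": 0.2, "sex": 0.1}
MATCH_THRESHOLD = 0.75

def similarity(a: str, b: str) -> float:
    """Crude positional character overlap; real systems use phonetic and
    edit-distance comparators instead."""
    if a == b:
        return 1.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def match_score(rec1: dict, rec2: dict) -> float:
    """Weighted sum of per-field similarities."""
    return sum(w * similarity(rec1[f], rec2[f]) for f, w in WEIGHTS.items())

score = match_score(RECORD_A, RECORD_B)
verdict = "same patient" if score >= MATCH_THRESHOLD else "no match"
print(f"score={score:.2f} -> {verdict}")  # score=0.77 -> same patient
```

Note that the very fields a match depends on (name, date of birth, ZIP code) are precisely the data useful for identity theft, which is the risk the next paragraph describes.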

An MPI is complex enough within a closed network, and complexity only increases with MPIs serving more than one network. A genuinely functional nationwide network will have to have some sort of MPI interoperability solution, which will require even more storage and sharing of information. But the same demographic data that resolves identities is useful for identity theft, so these new databases and transactions are likely targets for malicious action—a seriously increased risk of harmful confidentiality errors. This may or may not turn out to be a worthwhile tradeoff against the loss of the HIPAA unique patient identifier.

Conclusion

There is no single right privacy choice for all situations. Rather, privacy is one of a number of important values that must be balanced, but might not always be fully reconciled, in pursuing personal, organizational and policy objectives. When reconciliation fails, all that can really be done is a careful analysis of the implications of the difficult choices, and a decision that some kinds of privacy risks might be worth accepting to avoid other, more harmful types of risk.

John R. Christiansen is a Seattle, WA, healthcare lawyer who focuses on information technology, privacy and security issues. His clients include hospitals, physician practices, health IT services vendors and governmental entities. He is the current chair of the American Bar Association’s HITECH Megarule Task Force and secretary of the Washington State Bar Association’s Health Law Section and has served in a number of other health IT leadership positions over the years. John is a frequent speaker and author, and some of his recent publications include reports to the National Governors Association on interstate consent and state law coordination issues for health information exchanges.

-----

Electronic Health Records vs. Patient Privacy: Who Will Win?

By Rick Kam, CIPP/US, and Doug Pollack, CIPP/US

Does your dermatologist need access to your reproductive health history?

Can you limit access to the psychiatric notes in your chart once they have been entered into your provider’s new electronic health record (EHR) system?

It sounds absurd, but the adoption of EHRs and health information exchanges could enable this level of access in the future. The goal of these initiatives is to give physicians access to each American’s medical records so they can provide better treatment.

With the rapid rollout of EHRs, serious issues in patient privacy rights need to be addressed: lack of trust in the system, human error, lack of patient control over their electronic data and legislative gaps.

A lack of trust

Maintaining patient trust is the cornerstone of a successful healthcare system. The Office of the National Coordinator for Health Information Technology has indicated that a lack of this trust “may affect willingness to disclose necessary health information and could have life-threatening consequences.”

Dr. Deborah Peel, founder of Patient Privacy Rights, agrees. “The lack of privacy causes bad health outcomes. Millions of people every year avoid treatment because they know health data is not private,” she says. She cites several cases where privacy concerns affected the quality of healthcare:

  • HHS estimated that 586,000 Americans did not seek earlier cancer treatment.
  • HHS estimated that 2,000,000 Americans did not seek treatment for mental illness.
  • Millions of young Americans suffering from sexually transmitted diseases do not seek treatment.
  • The Rand Corporation found that 150,000 soldiers suffering from PTSD do not seek treatment because of privacy concerns.
  • The lack of privacy contributes to the highest rate of suicide among active duty soldiers in 30 years.

At the recent International Summit on the Future of Health Privacy, an attorney in Boston, MA, who suffers from bipolar disorder described how her mental health records were digitized for thousands of doctors and nurses to see—without her permission. “Personal details that took me years to disclose during therapy are being shared throughout my medical network, against my will,” she said. “It’s destroyed my trust with my doctors.”

Human error

Forty-one percent of healthcare organizations surveyed for the 2011 Benchmark Study on Patient Privacy and Data Security said that data breaches involving PHI are caused by sloppy employee mistakes. A single oversight can affect the privacy of hundreds of thousands of people, as happened in Utah in March, when hackers broke into an unprotected server and stole the personal information of 780,000 people.

"The Utah data breach is an example of human error because, as reported, the server did not have a secure password," Lisa Gallagher, senior director of privacy and security for HIMSS, stated in an eWEEK article. “Human error in healthcare delivery has impactful consequences when it comes to security. Training employees on security measures and implementing the proper security protocols are basic steps to take, but also, are often overlooked."

The problem grows exponentially when you consider how widely electronic data are spread across the healthcare ecosystem. Third-party mistakes, including those of business associates (BAs), account for 46 percent of data breaches reported in the Ponemon study.

A lack of patient control

With the adoption of electronic health records and health information exchanges, we wondered: Who owns patient data? The patient? The physician? The hospital? The health plan? Logically, the owner would be responsible for the privacy of this data. But legally, it’s unclear who owns the data; in fact, the question becomes one of control.

So what control does the patient or other member of the healthcare ecosystem have when it comes to accessing, modifying and transmitting any medical data? We asked an attorney who specializes in patient privacy to clarify the issue.

“Few federal or state laws talk about ownership of health information,” says Adam H. Greene, a partner with the law firm of Davis Wright Tremaine LLP in Washington, DC. “Rather, we have a confusing tapestry of federal and state laws governing the level of control that patients have over the sharing of their health information.”

At the core of this privacy debate is the assertion that physicians need access to a patient’s records to provide optimal treatment. In his paper “Debate over patient privacy control in electronic health records,” Mark A. Rothstein, chair of law and medicine at the Louis D. Brandeis School of Law at the University of Louisville, notes that “many physicians assert that patients should not be able to control the content of their health records because doing so would fundamentally change medical practice.” This perspective is fundamentally at odds with that of patient privacy advocates.

Legislative gaps

Federal legislation such as HIPAA and the HITECH Act seeks to safeguard protected health information (PHI). In addition, according to the National Conference of State Legislatures, 46 states have data breach notification laws. President Barack Obama’s Consumer Privacy Bill of Rights affords some level of privacy rights to patients.

HIPAA and the Consumer Privacy Bill of Rights, however, create an odd legislative gap. In his Health Information Privacy Bill of Rights, James C. Pyles, an attorney specializing in patient privacy rights, notes that the Consumer Privacy Bill of Rights excludes patients to the extent their health information is covered by HIPAA, while offering greater privacy rights with respect to health information not covered by HIPAA. He cites a year-long study by ANSI and others that uncovered the “inadequacies” of HIPAA, including the fact that the Department of Health and Human Services never intended the HIPAA Privacy Rule to serve as a “best practices” standard for privacy protection. This means that HIPAA-protected PHI does not benefit from the Consumer Privacy Bill of Rights and is subject to the same privacy pitfalls as before.

What we can do

Patient privacy is a fundamental right that is being challenged as patient records are digitized and access to those records increases exponentially. Our nation can’t afford to keep building out an electronic healthcare system without addressing these issues.

Pyles’ Health Information Privacy Bill of Rights, developed with the American Psychoanalytic Association, seeks to “protect the fundamental right to privacy of all Americans and the health information privacy that is essential for quality health care,” with prescriptions for patient control, security, accountability and other rights.

We support Pyles’ Bill of Rights. We also believe the answer lies in the private sector, specifically a consortium of EHR vendors, software developers and privacy/security professionals. Together, these experts can bring a holistic view of the issue of patient privacy and data control in a way that no governing body can. And we must act now.

Rick Kam, CIPP/US, is president and co-founder of ID Experts and is an expert in privacy and information security with extensive experience leading organizations to address the protection of PHI/PII and remediation of privacy incidents, identity theft and medical identity theft. Kam chaired the “PHI Project,” a research effort to measure financial risk and implications of data breach in healthcare, resulting in the report The Financial Impact of Breached Protected Health Information: A Business Case for Enhanced PHI Security.

Doug Pollack, CIPP/US, is chief strategy officer at ID Experts, responsible for strategy and innovation, including data breach prevention analysis and response services. A veteran of the technology industry, Pollack has over 25 years of experience in computer systems, software and security, focusing on creating successful new products in emerging markets.