I recently applied for life insurance. The broker, whom I’ve never met, asked about my health history. “So you’ve just had a baby,” he began. I asked him how he knew. “You’re on Twitter.”
In the last couple of years, concerns about the privacy of online health information have grown, as health care finally catches up to other sectors in its use of information technology (IT). The Stimulus package will pump $19.2 billion into healthcare IT, especially electronic medical records for doctors.
While technology can make your medical records safer in some ways than they’d be in a paper chart (using encryption, firewalls, audit trails, etc.), the fact is, no system is totally fail-safe. And when screw-ups happen, technology tends to super-size them.
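To make one of those safeguards concrete: an “audit trail” means every peek at a chart leaves a record. Here is a minimal, hypothetical sketch (the function and field names are illustrative, not drawn from any real EHR product) of how such a log might work:

```python
# Hypothetical sketch: an append-only access log for an electronic chart.
# Names and fields are illustrative, not taken from any real EHR system.
import hashlib
from datetime import datetime, timezone


def log_chart_access(log, user_id, patient_id, action):
    """Append a tamper-evident entry recording who touched which record."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "patient_id": patient_id,
        "action": action,          # e.g. "view", "update", "print"
        "prev_hash": prev_hash,    # chaining makes silent edits detectable
    }
    payload = "|".join(str(entry[k]) for k in sorted(entry))
    entry["entry_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    log.append(entry)
    return entry


audit_log = []
log_chart_access(audit_log, user_id="dr_smith", patient_id="P-1042", action="view")
```

Chaining each entry to the previous one makes quiet after-the-fact changes detectable, which is one way an electronic chart can be safer than a paper one sitting on a shelf. The flip side, as noted above, is that when something does go wrong, it goes wrong at digital scale.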
A few advocates say the main fix is for people to have as much choice (or “consent”) as possible about sharing particular tidbits of health info. Not a bad place to start, but relying too much on consent is impractical and burdensome. We also need limits on who can use the info in your record, and for what. The use of these and other widely accepted “fair information practices” will go a long way when it comes to safeguarding the medical record your doctor holds.
But wait. Your virtual health record is much bigger than that; the traditional medical record is just the tip of the iceberg. Part of it is the verbal trail you create on Facebook, on Twitter (evidently!), in an online patient community, via web searches, or over e-mail. But it’s also what you do: what you buy at the grocery store, how fast you drive, maybe even who you talk to on the phone. Right now, most of that information isn’t easily available to the public, and it isn’t linked, but more of it will be. There are plenty of incentives for companies to understand and influence the minutiae of your daily life.
The iceberg of health information about you is growing. Recently, researchers concluded that Agatha Christie probably suffered from Alzheimer’s just by analyzing the vocabulary in her later novels.
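To give a rough sense of how that kind of analysis can work, here is a small, purely illustrative sketch (the file names are hypothetical, and this is a toy version of the technique, not the researchers’ actual method) that compares lexical diversity, the count of distinct words per block of text, in works written decades apart:

```python
# Illustrative sketch: tracking lexical diversity (distinct words per 10,000
# word tokens) across texts written years apart. File paths are hypothetical.
import re


def lexical_diversity(text, window=10_000):
    """Return the average count of distinct words per `window` tokens."""
    words = re.findall(r"[a-z']+", text.lower())
    if len(words) < window:
        return len(set(words))
    counts = [
        len(set(words[i:i + window]))
        for i in range(0, len(words) - window + 1, window)
    ]
    return sum(counts) / len(counts)


for path in ["novel_1920s.txt", "novel_1970s.txt"]:  # hypothetical files
    with open(path, encoding="utf-8") as f:
        print(path, round(lexical_diversity(f.read())))
```

A sustained drop in a score like this proves nothing clinically on its own, but it shows how health signals can leak out of material that was never meant to be a medical record.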
As more and more data about each of us is generated, including through tiny sensors that will increasingly be used in clothing and other products, there is more information to glean from it—about our physical health, actions, and even mental health. The MIT Media Lab is working on computer programs that can “read” head movements and facial expressions to understand emotions.
Eventually, the traditional medical record may pale relative to the vast stores of information about your health that can be found in nontraditional ways. So when we think about health privacy, we need to recognize that safeguarding the traditional medical record is only the start. The best policy approaches also protect against discrimination and its consequences. So despite the banners and screams of “Socialist State” in my neighborhood in Washington, DC (oops, there’s more personal information!), the Health Reform Bill, if implemented well, is a strong and necessary step toward protecting individuals against an unavoidable erosion of their health privacy.
Lygeia Ricciardi is the founder of Clear Voice Consulting (www.clear-voice.com) and part of the leadership team of Clinovations (www.clinovations.com). She specializes in strategy, policy, and implementation of health IT, with a passionate focus on the consumer. And yes, she is on Twitter: @Lygeia
Lygeia,
You had a very interesting analogy in your statement: “… I want to make sure that we don’t get so obsessed arguing over particular trees in the policy discussion that we miss the forest altogether.” This is a great statement, but I suppose I view it the other way around: don’t get lost in the forest when looking for a particular tree.
For example, imagine I am walking into the forest of “Healthcare Information Technology” and am looking for the tree that defines what my privacy is. I pass the trees of Electronic Health Records, Electronic Medical Records, Personal Health Records, Interoperability, Open Source Software, Commercial Software, Virus Protection, Firewalls, Patient Safety, Record Access, etc. There are many different trees in many different areas that could provide me with a clue; however, in the midst of examining them to find the right one I have inadvertently climbed a mountain, crossed a river, and have absolutely no idea where I came in… or how to get out.
The realm of electronic records is still not well understood by the public. That is possibly one of the largest quirks in the system. The second quirk is deciding what one would classify as “private” health information. You made a very good point about social networking when it comes to status updates about health. Yes, I am guilty as charged of posting status updates about my frustration with care that does not yield the desired results, but that does not seem to be the apex of the “classified medical information” issue. To me, this is where the consent you mentioned comes in. I would be highly surprised if the other health-status posters went far beyond a basic update to announcing that they had contracted an STD. Again, however, this is not a medical file or even a PHR.
Perhaps that is where my thoughts differ from most. I think of a medical file as a complete history over my 24 years of life (physical address, SSN, insurance information, family history, etc.), rather than a comment about my bronchitis not clearing up. If the average patient sat down and thought about what information is really found in a file, and imagined that being sent over the internet, perhaps it would become clearer. Many people are so skeptical of the electronic way of life that they refuse to shop online or pay bills online, much less have their medical information sent to an online database. Even if there is a “secure” link from a physician’s office to a patient’s PHR, most PHRs are not covered under the HIPAA privacy law. Many patients also do not realize that they cannot sue over a HIPAA violation, or how many people have more access to their medical file than they themselves do.
While it seems scary to think that anyone in a physician’s office could grab my chart off the shelf and take a look inside, it is even scarier to think about my information being accessible in different files from anywhere in the country or the world. EMRs and EHRs certainly could provide great benefits for a patient (especially if someone from Alabama were hurt in California and his or her medical history needed to be pulled), but they could be disastrous in the wrong hands. Medical clerks, and even physicians, are among those who have committed fraud and identity theft using patients’ personal information (and most people think it’s bad enough that some Average Joe can hack into your file). One patient record can sell for $60, and fraudulent claims can run well into the hundreds of thousands of dollars. Imagine if an entire hospital’s electronic database were compromised. What if the thief’s information got mixed into the stolen patient’s file, and the real patient then needed emergency care? What if the real patient died from a complication caused by the falsified information? Would the thief then be charged with murder? No, probably not.
You mentioned the Stimulus Bill and how it could offer a solution to the problem, a tiny beacon of light in that crazy forest we are now lost in. It is true that stolen medical information can lead to denial of insurance claims and coverage, and to denial of employment based on a false medical history (drug-testing issues, I assume). I guess I am still a little confused about how the bill may help. It may provide access to insurance coverage to many people in America, but if a patient’s file has already been compromised, how will the bill protect them in the case of denial?
Thank you for stimulating my mind,
Elaina, Biomedical Informatics Student
health knowledge
I agree that the privacy of personal data should be protected through any and all legislative and legal means, but I also believe that many who trumpet privacy issues, for whatever reason, substantially overdo the issue.
I write this primarily because, for all intents and purposes, personal data have always been relatively easy to obtain, even without the convenience of fast electronic retrieval enabled by the ever-greater digitization of data and the Internet. Those data are rarely, if ever, accessed, for the simple reason that by and large no one cares about having or knowing them.
Concerns are raised, for example, that a potential employer might become aware of some medical matter for a potential employee. My reaction is: so what?
If an employee at that company first makes an effort to obtain the data and then acts on the data in some negative way, i.e., uses it as a basis for excluding the job seeker from employment, why would the job seeker want employment at that company to begin with?
This applies to all potential destinations where private data might leak. Negative consequences either do not exist at all or are irrelevant.
http://www.healthcarelegislation.net
While I believe that anyone who thinks that his/her electronically stored information is 100% private is a bit delusional, I think you’re conflating several different privacy discussions.
1. Lifestreaming Millennials. These kids (and their millennial-wannabe elders) post too much stuff about everything – partying too hard is just the tip of the iceberg. They just need to be more circumspect, because it will all come back to bite them – in a job interview (in some cases, they’ll never get that interview) or otherwise. We consent to websites’ terms of service, which include (in some cases) descriptions of the “walled communities” within which posts are visible, but there are no enforceable promises to keep anything private.
2. Sensors. Whether it’s the Nike chip in our running shoes that connects us to a community of runners (kinda like #1, because it’s volitional) or the video images culled from cameras in public places and analyzed (why would they bother, for most of us?) by the MIT Media Lab or others (a little creepier, and less volitional), this is all information that we essentially choose to share by engaging with the sensors around us (vs. moving to a cabin in the great north woods). Once we put the chip in the shoe, or walk around in public where we all know all sorts of cameras and sensors are picking up data about us, we’ve consented to the gathering and sharing of that data.
3. EHRs and PHRs. PHRs are sort of a special-purpose Facebook with a very limited number of friends. Again, we use these tools because we get some value out of them, and (putting my pointy-headed lawyer’s hat on for a moment) we assume the risk of the records becoming more public than intended. EHRs are the tools used by our health care providers, so (in the future) unless we get our care from the local shaman in the great north woods, our health records will be online, in (or accessible through) the cloud. I always say that since I’m not Britney Spears, I don’t think anyone is going to be very interested in my health records. Making data available through these tools is more helpful (both to me and to the extent it can be aggregated into population-based studies and the development of recommended standards of care) than harmful.
My point is that there is a continuum of choice that underlies the discussion here — some data sharing we choose, some we have no say over. The key, from my perspective, is (a) better-informed, more thoughtful personal decisionmaking about sharing of information and (b) the enactment — and, more importantly, the enforcement — of antidiscrimination laws such as GINA.
Thanks very much for the insightful comments and responses!
Regarding Health Reform, my thought was not that it protects privacy per se, but that in providing access to health insurance coverage to a greater proportion of Americans it minimizes the impact of discrimination in one of the areas in which it tends to hit hardest.
I wish I knew “the solution” to the challenge of the erosion of privacy. For most of us it isn’t climbing into a shell to avoid leaving any trail that might come back to haunt us, which is practically impossible anyway (though you should think carefully about those party pictures on Facebook).
I believe that law and policy are part of the answer. I agree with Dave that GINA is one of the best examples of anti-discrimination law. I would like to see more laws in that model—though I realize that enforcement is tricky at best.
In addition, although greater transparency of personal information is part of the problem, it can also be part of the solution. Even now, most companies that handle sensitive health data are motivated to avoid breaches because they don’t want to lose customer trust. That’s a lot scarier to them than the (often remote) possibility of HIPAA enforcement. Transparency can help companies, and individuals, to act more decently than we might otherwise.
I am with Vince that this brave new world has certain Orwellian characteristics. Though I don’t believe there is a simple answer, I want to make sure that we don’t get so obsessed arguing over particular trees in the policy discussion that we miss the forest altogether.
Having life insurance is a priority now, but we must make sure the payments are not astronomical. The point of such a system is that insurance is for everyone, but at an appropriate cost; on that view, the system should soon be implemented for all citizens who are still uninsured.
Rape Victims Choice – Risk AIDS or Insurance in the Future
One of the most difficult privacy cases often involves women who have been raped. If they take the recommended prophylactic drugs to prevent HIV, they essentially make themselves uninsurable in the future.
http://huffpostfund.org/stories/2009/10/rape-victims-choice-risk-aids-or-health-insurance
Lygeia,
Very insightful: “the traditional medical record may pale relative to the vast stores of information about your health that can be found in nontraditional ways.”
So, now what?
Does the “healthcare privacy/security” issue need to be simply redefined in a much broader context of “personal” privacy security?
Do more traditional concerns about healthcare security/privacy become moot?
What can Joe the Plumber do after he recognizes the problems you point out? Should we all become hermits?
Do Google et al. make George Orwell’s 1984 look passé?
Thanks for pointing out and opening the real can of worms 🙂
I find it fascinating that Americans seem so concerned about this privacy issue yet are willing to allow far greater intrusions on and monitoring of their privacy for ‘national security’ purposes, including the high likelihood that most of their electronic communications (email, cell phone, bank transactions) are already passively monitored to some degree.
Hi Lygeia
I’d not thought about this aspect of healthcare privacy, but now that you’ve pointed it out I’m pondering the implications for those people who religiously update their Facebook with news of late nights and crushing hangovers!
One day they may wish they’d kept digitally quiet about such matters; maybe I’ll wish I’d ‘de-friended’ them to avoid being found guilty by association.
So many people worry about what other people write about them, but the real problem could be what they’re writing about themselves…
Thank you for a fascinating post.
First off, Congratulations!
Second, I’m curious which parts of the bill you think will help ensure privacy. Is it just the non-discrimination for pre-existing conditions, or is there more to it than that?
Third, you bring up points about other areas, including employment. I frequently see advice saying not to share “controversial” thoughts online (presumably this would include the words I’m typing) for fear that employers could use it to actively discriminate against job-seekers. The assumption seems to be that employers are actively discriminating based on political affiliation, religion, etc. using social media. The laws exist, but currently there’s no realistic way to enforce them. Do you have any suggestions on what a solution to enforcement might look like?
Finally, great post! I’ve often looked at the problem of health data from the access perspective rather than the use perspective (see my post last Sunday). There are insights that could be gleaned if all clinical data were on a standardized system. It’s a dream at present, but focusing on use could hold the key to more sharing of data on a trusted platform, available for use for the right (non-discriminatory) reasons.
Biometric devices also contribute to the medication data that is for sale. More details and additional links are here, including entities not covered by HIPAA.
http://ducknetweb.blogspot.com/2010/03/colbert-report-takes-on-vitality.html
There is a second “privacy” concern around the use of our medical data that is often overlooked: its use to influence your provider’s behavior.
Many people don’t realize that drug benefit companies harvest 95% of all prescriptions written and then combine that with the AMA database to target-market to providers without breaking any privacy laws. Drug reps often know a great deal about your provider’s prescribing practices.
There are also now EHRs like Practice Fusion that deliver ads directly to your provider’s EHR (the provider gets a “free” EHR in exchange), and other firms that use the check-in process to capture patients’ information and deliver ads directly to patients in the waiting room. None of this is based on the standard of care but simply on who purchases the ads.
Just as an aside, some firms are also now using your Rx history as a surrogate for your medical risk when you apply for a mortgage or loan; i.e., if you have certain high-risk or chronic conditions, you may be required to pay higher interest rates on loans.
There’s also the MIB, the Medical Insurance Bureau, which has been around for years and operates an exchange through which insurance carriers share files of information.
http://ducknetweb.blogspot.com/2008/08/what-is-mib-medical-insurance-bureau.html
Very well stated! Some of this type of policy exists now (e.g., in GINA), but I don’t know how the Health Reform Bill provides any of these types of protections. Any pointers?