Until scientists discover a vaccine or treatment for COVID-19, our economy and our privacy will be at the mercy of imperfect technology used to manage the pandemic response.
Contact tracing, symptom capture, and immunity assessment are essential tools for pandemic response, and all of them can benefit from appropriate technology. However, the effectiveness of these tools is constrained by the privacy concerns inherent in mass surveillance. Lack of trust diminishes voluntary participation, and coerced surveillance can lead to hiding and to the injection of false information.
But it’s not a zero-sum game. The introduction of local community organizations as trusted intermediaries can improve participation, promote trust, and reduce the privacy impact of health and social surveillance.
Balancing Surveillance with Privacy
Privacy technology can complement surveillance technology when it drives adoption through trust born of transparency and meaningful choice.
Early in 2019, the Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare and Medicaid Services (CMS) proposed rules intended to achieve “interoperability” of health information.
Among other things, these proposed rules would put more data in the hands of patients, in most cases acting through apps or other online platforms or services the patients hire to collect and manage data on their behalf. Apps engaged by patients are not likely covered by federal privacy and security protections under the Health Insurance Portability and Accountability Act (HIPAA); consequently, some have called on policymakers to extend HIPAA to cover these apps, a step that would require action from Congress.
In this post we point out why extending HIPAA is not a viable solution and would potentially undermine the purpose of enhancing patients’ ability to access their data more seamlessly: to give them agency over health information, thereby empowering them to use it and share it to meet their needs.
By ROBERT C. MILLER, JR. and MARIELLE S. GROSS, MD, MBE
This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.
The problem with porridge
Today, we regularly hear stories of research teams using artificial intelligence to detect and diagnose diseases earlier, with more accuracy and speed than a human would have ever dreamed of. Increasingly, we are called on to contribute to these efforts by sharing our data with the teams crafting these algorithms, sometimes by healthcare organizations appealing to our altruism. A crop of startups has even appeared to let you monetize your data to that end. But given the sensitivity of your health data, you might be skeptical of this, and doubly so when you take into account tech’s privacy track record. We have begun to recognize the flaws in our current privacy-protecting paradigm, which relies on thin notions of “notice and consent” and inappropriately places responsibility for data stewardship on individuals who remain extremely limited in their ability to exercise meaningful control over their own data.
Emblematic of a broader trend, the “Health Data Goldilocks Dilemma” series calls attention to the tension and necessary tradeoffs between privacy and the goals of our modern healthcare technology systems. Not sharing our data at all would be “too cold,” but sharing freely would be “too hot.” We have been looking for policies that are “just right”: policies that strike a balance between protecting individuals’ rights and interests and making it easier to learn from data to advance the rights and interests of society at large.
What if there were a way for you to allow others to learn from your data without compromising your privacy?
To date, a major strategy for striking this balance has been the practice of sharing and learning from deidentified data, premised on the belief that the only risks individuals face from sharing their data are a direct consequence of that data’s ability to identify them. However, artificial intelligence is rendering genuine deidentification obsolete, and we are increasingly recognizing a problematic lack of accountability to individuals whose deidentified data is being used for learning across various academic and commercial settings. In its present form, deidentification is little more than a sleight of hand that makes us feel comfortable about the unrestricted use of our data without truly protecting our interests. A wolf in sheep’s clothing, deidentification is not solving the Goldilocks dilemma.
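To see why deidentification is so fragile, consider the classic “linkage attack”: records stripped of names can still be re-identified by joining them with public data on quasi-identifiers such as ZIP code, birth date, and sex. The sketch below is a hypothetical toy illustration; every record, name, and field in it is invented, and real attacks use far richer auxiliary data and statistical matching rather than exact joins.

```python
# Hypothetical linkage attack: re-identifying "deidentified" health records
# by joining them with a public dataset on shared quasi-identifiers.
# All data below is invented for illustration.

deidentified_records = [
    {"zip": "21205", "birth_date": "1954-07-31", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "21218", "birth_date": "1990-02-14", "sex": "M", "diagnosis": "asthma"},
]

public_records = [  # e.g., a voter roll or a scraped profile dump
    {"name": "Jane Doe", "zip": "21205", "birth_date": "1954-07-31", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(deidentified, public):
    """Match records on quasi-identifiers, re-attaching identities."""
    return [
        {"name": p["name"], "diagnosis": d["diagnosis"]}
        for d in deidentified
        for p in public
        if all(d[k] == p[k] for k in QUASI_IDENTIFIERS)
    ]

print(link(deidentified_records, public_records))
# [{'name': 'Jane Doe', 'diagnosis': 'diabetes'}]
```

No names were ever shared, yet the diagnosis is re-attached to a person; and the more data points each record carries, the harder this kind of join is to prevent.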
Tech to the rescue!
Fortunately, there are a handful of exciting new technologies that may let us escape the Goldilocks Dilemma entirely by enabling us to gain the benefits of our collective data without giving up our privacy. This sounds too good to be true, so let us explain the three most revolutionary ones: zero-knowledge proofs, federated learning, and blockchain technology.
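To give a flavor of the second of these, here is a minimal sketch of the federated-averaging idea at the heart of federated learning: each site fits a model on its own data and shares only model parameters, never patient records. The hospitals, readings, and one-number “model” below are hypothetical stand-ins; real systems train full models and layer on protections such as secure aggregation and differential privacy.

```python
# Minimal federated-averaging sketch (hypothetical sites and data).
# Raw records stay on-site; only fitted parameters travel.
import statistics

def local_fit(local_data):
    """Stand-in for local training: reduce a site's data to one parameter."""
    return statistics.mean(local_data)

def federated_average(parameter_updates):
    """Coordinator combines parameters without ever seeing raw data."""
    return sum(parameter_updates) / len(parameter_updates)

site_a_readings = [120, 130, 125]  # stays at hospital A
site_b_readings = [140, 135]       # stays at hospital B

updates = [local_fit(site_a_readings), local_fit(site_b_readings)]
print(federated_average(updates))  # 131.25: a shared model, no shared records
```

The privacy gain is structural rather than procedural: the coordinator learns a population-level parameter but never touches an individual record.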
“…the average patient will, in his or her lifetime, generate about 2,750 times more data related to social and environmental influences than to clinical factors”
The McKinsey “2,750 times” statistic is a pretty good proxy for the amount of your personal health data that is NOT protected by HIPAA and currently is broadly unprotected from sharing and use by third parties.
However, there is bipartisan legislation in front of Congress that offers expanded privacy protection for your personal health data. Senators Klobuchar & Murkowski have introduced the “Protecting Personal Health Data Act” (S.1842). The Act would extend protection to much personal health data that is not currently protected by HIPAA (the Health Insurance Portability and Accountability Act of 1996).
In this essay, we will look in the rear-view mirror to see how HIPAA has provided substantial protections for personal clinical data, but with boundaries. We’ll also take a look out the windshield at the Wild West of unprotected health data.
Then, in a separate post, we’ll describe and comment on the pending “Protecting Personal Health Data Act.”
“The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” series will cover a whole host of topics that discuss, clarify, and challenge the notion of sharing health data and whether it should be kept private or made public. On the one hand, sharing health information is essential for clinical care, powering medical discovery, and enabling health system transformation. On the other hand, the public is expressing greater concerns over the privacy of personal health data. This ‘Goldilocks Dilemma’ has pushed US policymakers towards two seemingly conflicting goals: 1) broader data interoperability and data sharing, and 2) enhanced data privacy and data protection.
But this issue is even more nuanced and is influenced by many moving parts, including federal & state privacy legislation, health technology legislation, policy & interoperability rules, data usage from AI & machine learning tools, data from clinical research, ethical concerns, compensating individuals for their data, health data business models, & many more.
Fear not, Deven & Vince are here to walk readers through this dilemma and will be providing pieces to help explain what is going on. Most of their discussion & pieces will cover 2 specific affected areas: 1) how policymakers are addressing health data privacy risks, and 2) the impact on business models within the Health Data Goldilocks Dilemma.
We hope you enjoy the series, and if you have any pieces to add to it, please email me at zoya@thehealthcareblog.com.
Zoya Khan is the Editor-in-Chief of THCB & an Associate at SMACK.health