Privacy – The Health Care Blog
https://thehealthcareblog.com
Everything you always wanted to know about the Health Care system. But were afraid to ask.

THCB Gang Episode 100, Thursday August 4
https://thehealthcareblog.com/blog/2022/08/04/thcb-gang-episode-100-thursday-august-4-1pm-pt-4pm-pt/
Thu, 04 Aug 2022 18:11:41 +0000

Joining Matthew Holt (@boltyboy) for the 100th #THCBGang on Thursday August 4 were Suntra Modern Recovery CEO JL Neptune (@JeanLucNeptune); Consumer advocate & CEO of AdaRose, Lygeia Ricciardi (@Lygeia); and the Light Collective’s Andrea Downing (@bravebosom). Sadly, fierce patient activist Casey Quinlan (@MightyCasey) had a Mets party flare up and couldn’t join at the last minute. There was a lot of chat about data and privacy, and even some ideas about what a future where patients’ data flowed but patients’ rights were respected might look like!

You can see the video below & if you’d rather listen than watch, the audio is preserved as a weekly podcast available on our iTunes & Spotify channels.

The Sovereignty Network will help patients make money out of your health data
https://thehealthcareblog.com/blog/2022/05/16/the-sovereignty-network-will-help-patients-make-money-out-of-your-health-data/
Mon, 16 May 2022 22:00:00 +0000

By HAMISH MACDONALD 

Being a patient has always meant being at the bottom of a trickle-down pyramid in healthcare. We are the last to get information, our test results, our data; and as for earning the money that our healthcare data is worth, the healthcare industry does that without our permission and without paying us our due. We are left right out of it.

But what if we made clinical data tools available on your device, so that you could build the most valuable set of healthcare data that exists about you anywhere? What if you owned that particular data set as your personal asset? Well, we think that researchers are going to want access to it – and pay you for that access.

Not only that, how valuable would it be to have the most complete and accurate healthcare data set available about you under your ownership and control? How can you expect a doctor, or yourself, to reach the best conclusion with incomplete information about your health? Frustration, confusion, anxiety and poor health outcomes are often the result.

How The Sovereignty Network empowers you to build your own healthcare data set

Building the most valuable data set about you, for you, is what we have done at The Sovereignty Network. We elevate you as a patient to be a Data Owner. There are 4 easy steps to becoming a Data Owner and earning what your data is worth – and having a complete and accurate set of your healthcare data on hand for your peace of mind.

  1. We have clinically coded, simple-to-answer FHIR and SNOMED CT questionnaires that cover the entire spectrum of your health. We call it “DCPLEG”. By filling out questionnaires in your personally owned and secure profile that represent your Demographic, Clinical, Psychosocial, Lifestyle, Environmental and Genomic data, you paint a complete 360-degree view of your health.
  • Where Clinical data sets are also available, such as in the US via the newly implemented Patient API rule, you can also add your clinical data from your healthcare providers. The spread of the FHIR data interoperability standard around the world makes this increasingly feasible to accomplish.
  • Data Researchers are able to sit at their desktop and specify the precise criteria that they are looking for (anonymized, of course) using the same clinical codes that you and others have already filled out in your health profile above: e.g. age, sex, condition(s), medication(s), procedures, deeper demographic information, environmental, lifestyle and psychosocial markers, etc. Through partners, even individual base pairs within a whole genome can be specified. Through The Sovereignty Network they can then make you an offer that you can’t refuse, as it were. Only if you agree to the offer can they contact you with the survey they invite you to complete.
  • We have invented a new class of work we call the “Data Coach”, which works rather like the synovial fluid in a joint, but here between the Data Researcher and you as the Data Owner. A Data Coach is a vetted healthcare professional / healthcare data expert who verifies on your behalf the specific criteria needed by the Data Researcher. If a Researcher is willing to pay you, say, $100 to fill out a 20-minute survey because you fit their desired set of criteria, you are probably willing to pay some fraction of that to qualified Data Coaches to verify the criteria. (And once verified, criteria likely don’t need to be re-verified for another Data Researcher.)
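The criteria-matching step described above can be sketched in a few lines. The profile structure, field names, and matching logic below are illustrative assumptions, not The Sovereignty Network's actual implementation; the two SNOMED CT concept IDs, however, are real codes.

```python
# Hypothetical sketch of matching a Data Researcher's criteria against a
# Data Owner's coded profile. Structure and logic are invented for
# illustration; only the SNOMED CT concept IDs are real.

# A profile is a set of (code system, code) pairs from answered questionnaires;
# age and sex are held as separate demographic fields.
profile = {
    ("http://snomed.info/sct", "44054006"),  # Diabetes mellitus type 2
    ("http://snomed.info/sct", "77176002"),  # Smoker
}
age, sex = 57, "female"

def matches(profile, age, sex, criteria):
    """Return True if the anonymized profile satisfies every criterion."""
    if not (criteria["min_age"] <= age <= criteria["max_age"]):
        return False
    if criteria["sex"] and sex != criteria["sex"]:
        return False
    # Every required clinical code must appear in the profile.
    return all(code in profile for code in criteria["required_codes"])

study = {
    "min_age": 50, "max_age": 70, "sex": "female",
    "required_codes": {("http://snomed.info/sct", "44054006")},
}
print(matches(profile, age, sex, study))  # True: eligible for an offer
```

In a real marketplace the researcher would only see aggregate counts of matching profiles; contact would follow only after the Data Owner accepts the offer, as the post describes.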

Turning your data set into licensable income – or donating it to causes you support

Because you and only you own the copy of your healthcare data that you have built – your record, or specific parts of it, is now licensable.

Perhaps more licensable than we can imagine today: until now, much potential healthcare research and Real World Evidence generation simply hasn’t taken place because it is considered too difficult, time-consuming and expensive to perform directly or to hire research organizations to carry out. That is exciting to consider, for you individually but also for the world of research.

You may also choose to donate parts of your data set to causes that you support, such as cancer research, rare disease or more general population health research. Because it is your data set it is entirely up to you how and what you use it for.

How researchers benefit

Healthcare researchers around the world, from institutions to Pharma to government agencies and academia are all looking for better insights and access to relevant, structured data. The Sovereignty Network can assist in at least three ways…

1) Sending out field surveys and questionnaires to precisely the right type of person, matching the precise research criteria that they are looking for.

2) The research community has perennial issues undertaking correlational research because it is difficult and time consuming to relate a cohort to multiple variables. Now any researcher can do this from their desktop. Even without a complex medical record, members may find themselves receiving offers to be part of correlational research. This includes people before they get sick from a chronic condition due to lifestyle or environmental factors.

3) Similarly, cause-and-effect studies are very difficult and expensive to execute. Pharma companies, policy makers, public health, academia etc. will be very well served by the TSN platform to create layers of cohorts and test the causal effects of variables on health outcomes much more easily. Typically, this requires large cohort sizes and a long follow-up period (years to decades). Think of the Framingham Heart Study, or the US Veterans Research Studies.

How The Sovereignty Network gets paid

How do we get paid? We earn a 20% transaction fee on the value we help you generate as a Data Owner. You keep 80%. We think that is fair, aligned and scalable. It is also transparent. Your data is not sold or marketed; no ads are sold. Our success is directly tied to your success. We like that.

However, as the network grows our goal is to transition to be a member-owned and governed organization. A decentralized global network. At that point the members themselves own The Sovereignty Network, proportionate to the value each member has contributed. In this way, the entire data set can continue on indefinitely, governed by its own charter without being corrupted by large amounts of money held by only a few hands. The Sovereignty Network needs to become globally owned and controlled. If you will excuse the Abraham Lincoln reference, it needs to be “of the people, by the people, for the people”.

Kicking this off with a Kickstarter

We aim for this to become a global community and movement in which everyone can get, build and own their healthcare data set. So I am pleased to announce, exclusively here at The Health Care Blog for all readers and friends of readers, that we are rolling all of this experience into a global Kickstarter campaign starting today.

Why a Kickstarter campaign? This only works with people using it. Seeding a marketplace so that supply and demand adequately match for quality transactions is vital (otherwise known as liquidity and transparency). 70% of the funds raised goes back to each person who pledges: The Sovereignty Network will select and provide you with your own personal Data Coach to help advise and assist you in building up an accurate and complete data set, wherever you may be. The remaining 30% goes to ensuring we provide the best possible experience for you as the marketplace starts up.

A marketplace designed for patients and those that serve them.

The platform already works. We have already onboarded top patient advocates, data standards experts, and researchers to test the system. What is needed now is your support to turn this into a global Movement, and a great first experience for the initial founding members. Everyone who supports the Kickstarter campaign becomes a Founding Member.

A CODA for those interested in Data Ownership

The perennial Custodial Data constraint in healthcare (i.e., they know it is not really their data at all – it is just that possession is 9/10 of the law) means that you are the only person who can legally and morally allow all the different components that make up a curated, accurate, identifiable and valuable set of health data about you to be shared without encumbrance.

But not everyone in healthcare agrees that patients “owning data” is the way forward. We beg to differ. Healthcare providers can still “own” their set of data about you, but you need to own your copy too. This becomes clear when you consider something little known: how privacy rights were originally divorced from property rights when privacy became its own separate branch of law over 130 years ago. I wrote about this recently: https://sovereignty.network/blog/enjoining-privacy-rights-and-property-rights-can-assist-patient-data-rights

As I write in the article, a hugely influential 1890 law paper entitled “The Right to Privacy,” by Warren and Brandeis, deliberately placed the universal right they proposed under laws concerning the right to Life, rather than under Property rights, the branch of law that had traditionally governed privacy.

This may not have mattered, except that in the 1950s the rise of computers birthed the powerful asset that data is today. Facebook and the rest of the tech barons enjoy immense power over “private” data simply by owning and controlling the servers that house it, including data that we create from our bodies and from our own thoughts and actions. Lumps of metal and silicon they may be, but when the transformational magic of software is applied to the data coursing through those servers, that data can be transformed into knowledge and insights that create huge economic value. Servers are lumps of property that are extremely valuable beyond the sum of their parts thanks to long-standing property rights.

The bottom line is that in an era where data is the most valuable asset on the planet we need to stake out a data set of our own. If an individual has both clearly defined ownership and control rights on top of building the most valuable healthcare data set available about them anywhere, they can then license or otherwise utilize that most valuable copy as they see fit – including using it to ensure more appropriate healthcare provision for themselves and their family. 

Hamish MacDonald is CEO of The Sovereignty Network

The “Secret Sauce” – A Comparison of TSMC and Pfizer
https://thehealthcareblog.com/blog/2021/10/22/the-secret-sauce-a-comparison-of-tsmc-and-pfizer/
Fri, 22 Oct 2021 16:59:35 +0000

By MIKE MAGEE

This week’s Tom Friedman Opinion piece in the New York Times contained a title impossible to ignore: “China’s Bullying Is Becoming a Danger To The World and Itself.” The editorial has much to recommend it. But the item that caught my eye was Friedman’s full-throated endorsement of Taiwan’s “most sophisticated microchip manufacturer in the world,” Taiwan Semiconductor Manufacturing Company (TSMC).

TSMC owns 50% of the world’s microchip manufacturing market, and along with South Korea’s Samsung, is one of only two companies currently producing the ultra-small 5-nanometer chips. Next year, TSMC will take sole ownership of the lead with a 3-nanometer chip. In this field, the smaller the better. (For comparison, most of China’s output is 14 to 28 nanometers.)

U.S. Silicon Valley companies like Apple, Qualcomm, Nvidia, AMD, and recently Intel contract with TSMC rather than produce chips on their own. In addition, the key machines and chemicals necessary to produce the chips are willingly supplied to TSMC by U.S. and European manufacturers. TSMC’s secret sauce, according to Friedman, is “trust.” As he writes, “Over the years, TSMC has built an amazing ecosystem of trusted partners that share their intellectual property with TSMC to build their proprietary chips.”

“Trust me” is not a phrase often associated with intellectual property. Consider, for example, the Washington Post’s reporting the very same day as Friedman’s, under the banner, “In secret vaccine contracts with governments, Pfizer took hard-line in the push for profit, report says.” The article reveals documents in a Public Citizen report confirming that Pfizer has been maximizing its vaccine profits “behind a veil of strict secrecy, allowing for little public scrutiny… even as demand surges…”

As I describe in my book “Code Blue: Inside the Medical Industrial Complex” (Grove 2020), Pfizer’s focus on intellectual property as a commercial weapon has a history that extends back a half-century.

In the 1980s, Pfizer CEO Ed Pratt was ideally positioned to lead the global charge on intellectual property (IP) protections. Pratt was chairman of the powerful US Business Roundtable and also the formal adviser to Reagan’s US trade representative, Bill Brock. Pratt’s first move was to form a task force on intellectual property with his chief ally, IBM CEO John Opel. Their recommendation to Brock that a position be created within the Office of the US Trade Representative for a director of international investment and intellectual property sailed through.

The challenge remained in linking intellectual property protections to the ongoing multilateral trade negotiations that then involved 123 nations. This was a leap because trade agreements normally helped prevent monopolies, while intellectual property protections were viewed by many nations as supporting monopolistic companies. Rather than fight the battle head-on, Pratt and his followers finessed the whole discussion by advocating for the creation of a collection of regulatory policies prohibiting product piracy.

In 1983, Pratt and Opel approached the leaders of 10 other large US-based multinationals, including General Electric, General Motors, DuPont, Johnson & Johnson, and Monsanto, requesting their participation on the Intellectual Property Committee and creating a united front across industries.

At Bill Brock’s request, Pratt worked tirelessly to build a multi-sector global coalition of major corporations to engage the United Nations and World Trade Organization. Domestically, he worked the chambers of commerce, business councils, business committees, and trade associations. As one analyst recounted, “With every such enrollment, the business power behind the case for such an approach became harder and harder for governments to resist.”

During Reagan’s first term as president, the term “piracy” became popularized and connected to American ideas that were being stolen by greedy foreign nations, denying companies like Pfizer and IBM their “rightful rewards.” The messaging was reinforced by generous underwriting of well-funded think tanks across the political spectrum, from the American Enterprise Institute to the Brookings Institution. Pfizer supported a comprehensive public affairs strategy with press releases, speeches, white papers, conferences, op-eds, and special briefings designed to strengthen the connection between free trade and intellectual property.

It took more than a decade to accomplish the goal, but when the eighth round of the General Agreement on Tariffs and Trade was signed in 1994, it had 123 signatories and established the World Trade Organization with intellectual property protections for multinational corporations. During the years that the battle was engaged, Pfizer developed resources in government relations, investor relations, media relations, public affairs, and shareholder relations that have continued to facilitate maximizing profitability, including now from its Covid vaccine in the middle of a worldwide pandemic.

Mike Magee, MD is a Medical Historian and Health Economist, and author of “Code Blue: Inside the Medical Industrial Complex.”

How Traditional Health Records Bolster Structural Racism
https://thehealthcareblog.com/blog/2020/06/24/how-traditional-health-records-bolster-structural-racism/
Wed, 24 Jun 2020 14:15:24 +0000

By ADRIAN GROPPER, MD

As the U.S. reckons with centuries of structural racism, an important step toward making health care more equitable will require transferring control of health records to patients and patient groups.

The Black Lives Matter movement calls upon us to review racism in all aspects of social policy, from law enforcement to health. Statistics show that Black Americans are at higher risk of dying from COVID-19. The reasons for these disparities are not entirely clear. Every obstacle to data collection makes it that much harder to find a rational solution, thereby increasing the death toll.

In the case of medical research and health records, we need reform that strips control away from hospital chains and corporations. As long as hospital chains and corporations control health records, these entities may put up barriers to hide unethical behavior or injustice. Transferring power and control into the hands of patients and patient groups would enable outside auditing of health practices; a necessary step to uncover whether these databases are fostering structural racism and other kinds of harm. This is the only way to enable transparency, audits, accountability, and ultimately justice.

A recent review in STAT indicates that Black Americans suffer three to six times as much morbidity due to COVID-19. These ratios are staggering, and the search for explanations has not yielded satisfying answers.

“The Sutter and MIT studies cast doubt on whether individual risk factors are as important as social determinants of health in affecting someone’s chances of contracting severe and even fatal COVID-19,” the article explains. “To reduce the U.S. death toll now that many states are seeing a new surge in cases, [Philip Alberti, senior director for health equity research at the AAMC] said, ‘our response to this disease must look beyond the strictly medical.’”

Social determinants have a large impact on hospitalization rates. To study and correct structural racism we will need to link a person’s social determinants to their medical records and compare white vs. Black, state by state and county by county. This raises two related problems: privacy and analysis.

There is no more private information than one’s personal records. It’s unlikely that anyone, Black or white, will trust this combination to government or corporate institutions. That implies that the combined information will need to be under the control of the individual themselves. But analysis, almost by definition, requires access to many different people’s records as well as sophisticated methods consistently applied by experts.

Contact tracing in a pandemic illustrates these two related problems. The privacy component is a person’s detailed record of social contacts and medical symptoms. The analysis component is the trained public health agent that seeks to access a person’s record and link it to the contact records of other people while also sending valuable epidemiologic data upstream to scientists and politicians. Trust in the process is essential. A lack of trust in the public health agent leads to a lack of individual cooperation, delayed science, and political manipulation. Trust is enhanced when the records are strictly controlled by the individual, using open source and easily peer-reviewed technology. Trust is also enhanced when analysis methods are open source, consistent across counties, states, and nations, and supervised by trusted community representatives.

Today’s institutional health records obscure systemic racism because they are not trusted to include social determinants of health and they are not transparent or consistent in how they are analyzed. These health records are not controlled by the individual patient. Under HIPAA, they can be used and shared by the institution without patient consent or even after-the-fact transparency. The use of our records is routinely manipulated to increase revenue through enhanced billing as well as through sharing with for-profit companies that use the records to create trade-secret medical analytics that, in turn, add cost to future patients. Current trends around individual control of health records are not encouraging. In response to the opioid crisis, privacy protections over sensitive behavioral health information are being reduced, making the voluntary contribution of accurate social determinants data even less likely in the future.

The analysis of health records is also impeded by institutional control. The hospitals that hold our health records compete among each other for subsidies and with privately financed startups for patients. Objective quality data and price transparency required to compare value is almost non-existent after decades of “managed care” and “value-based payment” promises. For their part, our politicians and policymakers routinely manipulate access to analytics in order to obscure disparities in access to the social determinants of health such as housing, education, and health insurance. As reported by STAT, an analysis by scientists at the Harvard T.H. Chan School of Public Health notes that less than half of U.S. states disaggregate COVID-19 cases or deaths by race/ethnicity:

“For example, data from the COVID-19 tracking project [5] suggests that only ~21 states currently report COVID-19 cases or deaths disaggregated by race/ethnicity, and among those that do, substantial proportions (typically ≥50%) of cases and deaths are of unknown or missing race/ethnicity.”
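The disaggregation and missing-data problem that excerpt describes is mechanically simple, which makes its absence from so much state reporting all the more striking. A toy sketch, using entirely invented case records:

```python
from collections import Counter

# Illustrative, made-up case records. The point is only the mechanics of
# disaggregating counts by race/ethnicity and surfacing missing data,
# not any real epidemiology.
cases = [
    {"state": "MA", "race_ethnicity": "Black"},
    {"state": "MA", "race_ethnicity": "White"},
    {"state": "MA", "race_ethnicity": None},   # not reported
    {"state": "MA", "race_ethnicity": "Black"},
    {"state": "MA", "race_ethnicity": None},   # not reported
]

# Bucket unreported values explicitly instead of silently dropping them.
counts = Counter(c["race_ethnicity"] or "Unknown/missing" for c in cases)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} ({n / total:.0%})")
```

In this tiny example 40% of cases land in the "Unknown/missing" bucket, mirroring the Harvard analysis's finding that missingness often exceeds 50%; any disparity estimate computed without reporting that bucket is misleading.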

The alternative to institutional control of health records is universal health records that are controlled by the individual under policies suggested and supported by an individual’s chosen community.

Universal health records would be equally accessible to physicians, public health agents, medical and social scientists. The artificial barriers to access and analysis posed by institutions and political jurisdictions would be reduced. The quality of our medical science and public policy would still depend on some privileged elites, but at least we, as patients and citizens, would gain the advice and support of our own communities.

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country.

This post originally appeared on Bill of Health here.

Why Should Anyone Care About Health Data Interoperability?
https://thehealthcareblog.com/blog/2019/09/19/why-should-anyone-care-about-health-data-interoperability/
Thu, 19 Sep 2019 15:51:20 +0000

By SUSANNAH FOX

This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.

A question I hear quite often, sometimes whispered, is: Why should anyone care about health data interoperability? It sounds pretty technical and boring.

If I’m talking with a “civilian” (in my world, someone not obsessed with health care and technology) I point out that interoperable health data can help people care for themselves and their families by streamlining simple things (like tracking medication lists and vaccination records) and more complicated things (like pulling all your records into one place when seeking a second opinion or coordinating care for a chronic condition). Open, interoperable data also helps people make better pocketbook decisions when they can comparison-shop for health plans, care centers, and drugs.
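To give a concrete flavor of what "pulling all your records into one place" looks like under the hood: patient-access APIs return FHIR bundles, from which an app can assemble something as simple as a current medication list. The bundle below is a minimal hand-made example for illustration, not output from any real server.

```python
import json

# Minimal hand-made FHIR R4 Bundle containing two MedicationRequest
# resources, roughly the shape a patient-access API would return.
bundle_json = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "MedicationRequest",
                  "status": "active",
                  "medicationCodeableConcept": {"text": "Metformin 500 mg"}}},
    {"resource": {"resourceType": "MedicationRequest",
                  "status": "stopped",
                  "medicationCodeableConcept": {"text": "Lisinopril 10 mg"}}}
  ]
}
"""

def active_medications(bundle):
    """Collect display names of active MedicationRequest resources."""
    meds = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if (res.get("resourceType") == "MedicationRequest"
                and res.get("status") == "active"):
            meds.append(res["medicationCodeableConcept"]["text"])
    return meds

print(active_medications(json.loads(bundle_json)))  # ['Metformin 500 mg']
```

Multiply this by every provider a family sees and you have the "one place" medication list the paragraph describes; interoperability is what lets the same few lines work against any conforming system.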

Sometimes business leaders push back on the health data rights movement, asking, sometimes aggressively: Who really wants their data? And what would they do with it if they got it? Nobody they know, including their current customers, is clamoring for interoperable health data.

Forgive me if I smile, out of pure nostalgia. These leaders are taking me back to the 1990s when I was building data-driven websites and myopic executives were deriding the Web as a zero-billion dollar business.

Other leaders, though, had a vision of what was possible, even on dial-up. They did not denigrate the clunkiness of the current tools or point out that nobody was asking for the service they were creating.

For example, Amazon started selling books online in 1995 when only around 14% of U.S. adults had access to the internet. Jeff Bezos created a platform business that leverages data to deliver products. Our opportunity is to create platform businesses that leverage data to deliver health. Don’t let the failure of your imagination limit your ability to serve your customers.

Here’s another way to think about our current situation:

My friend Hugo Campos is originally from Brazil and taught me a lovely phrase in Portuguese about someone who holds all the cards, who seems to have all that they need to create change: “Está com a faca e o queijo na mão.” It means: He holds the cheese and the knife. This person has what they need to execute their vision. You want to be that person.

I came up with this illustration for how this pertains to health data:

Right now you might be in the lower right quadrant. Let’s call that the Data Pantry. You have lots of data but you’re not yet sharing it, nor are you leveraging it well. You just have the cheese. No knife.

Some of you might be entrepreneurs or innovators (and of course this includes patients and caregivers) who can’t wait to get your hands on these data flows. You have great ideas and maybe even a prototype of a tool that will make a difference in people’s lives, if only you could partner with someone who has data. You just have a knife. No cheese.

Organizations who are putting it all together, who are sharing data, partnering to bring in more data, partnering with patients and entrepreneurs who have ideas – they are the Data Elite. They have the cheese and the knife.

But what about the lower left quadrant? They don’t have a lot of aggregate data and they don’t know why they should care. Guess what? That’s the biggest group of all and we love them. They are the customers. To extend the cheese metaphor, they will consume the sandwiches we make and ask where we’ve been all their lives. They will start managing their diabetes better, they will get their kids’ immunization records squared away faster for school and for summer camp, they will be able to share their mom’s health record with a new specialist to get a second opinion.

The elephant in the room is that most people don’t want to engage in their health, much less with their health data. Highly motivated patients and caregivers are the tip of the spear, the pioneers who will push for access and help create the tools that the rest of the population will gratefully use if they ever need them.

Take Hugo, for example. He lives with a heart condition that requires him to have an implantable cardioverter defibrillator (ICD). He knows that the data generated by the device could help him manage his condition, but he doesn’t have access to it. Medtronic is hoarding the cheese. But Hugo has been able to jailbreak his device, get access, and show his doctor that, for example, Scotch whisky makes his heart flutter, so, sadly, he’s had to cut it out of his life. Here’s how this pertains to the broader health data conversation: Medtronic’s hoarding of data hurts not only Hugo, it hurts his family, his employer, AND his health insurance company, all of whom want to keep Hugo well.

We should all be working toward freeing the data and letting people decide whether to engage with it, building the infrastructure and tools that allow someone to wake up one day (maybe because of a life-changing diagnosis) and say, “Yes, I’m ready. Now, how do I get my data?”

You want to be there for them in their time of need. That’s our opportunity and that’s our mission. And interoperable data, while it sounds very technical, is actually very human.

To learn more about Hugo and his fight for data access, please see:

If you are a patient/citizen/consumer working on applications of health data of any kind, check out this opportunity for an all-expenses paid trip to the FHIR “Dev Days” event in Amsterdam.

Susannah Fox, former CTO of the US Department of Health and Human Services, helps people navigate health and technology, providing strategic advice related to research, health data, technology, and innovation. This post originally appeared on her blog here.

Thinking ‘oat’ of the box: Technology to resolve the ‘Goldilocks Data Dilemma’
https://thehealthcareblog.com/blog/2019/09/09/thinking-oat-of-the-box-technology-to-resolve-the-goldilocks-data-dilemma/
Mon, 09 Sep 2019 16:16:48 +0000

By ROBERT C. MILLER, JR. and MARIELLE S. GROSS, MD, MBE

This piece is part of the series “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?” which explores whether it’s possible to advance interoperability while maintaining privacy. Check out other pieces in the series here.

The problem with porridge

Today, we regularly hear stories of research teams using artificial intelligence to detect and diagnose diseases earlier with more accuracy and speed than a human would have ever dreamed of. Increasingly, we are called to contribute to these efforts by sharing our data with the teams crafting these algorithms, sometimes by healthcare organizations relying on altruistic motivations. A crop of startups have even appeared to let you monetize your data to that end. But given the sensitivity of your health data, you might be skeptical of this—doubly so when you take into account tech’s privacy track record. We have begun to recognize the flaws in our current privacy-protecting paradigm, which relies on thin notions of “notice and consent” that inappropriately place the responsibility of data stewardship on individuals who remain extremely limited in their ability to exercise meaningful control over their own data.

Emblematic of a broader trend, the “Health Data Goldilocks Dilemma” series calls attention to the tension and necessary tradeoffs between privacy and the goals of our modern healthcare technology systems. Not sharing our data at all would be “too cold,” but sharing freely would be “too hot.” We have been looking for policies “just right” to strike the balance between protecting individuals’ rights and interests while making it easier to learn from data to advance the rights and interests of society at large. 

What if there was a way for you to allow others to learn from your data without compromising your privacy?

To date, a major strategy for striking this balance has involved the practice of sharing and learning from deidentified data—on the belief that the only risk individuals face from sharing their data is a direct consequence of that data’s ability to identify them. However, artificial intelligence is rendering genuine deidentification obsolete, and we are increasingly recognizing a problematic lack of accountability to individuals whose deidentified data is being used for learning across various academic and commercial settings. In its present form, deidentification is little more than a sleight of hand that makes us feel comfortable about the unrestricted use of our data without truly protecting our interests. A wolf in sheep’s clothing, deidentification is not solving the Goldilocks dilemma.
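
The weakness of naive deidentification is easy to demonstrate. The sketch below (in Python, with entirely invented records) shows the classic linkage attack: names are stripped from a medical dataset, but joining it to a public dataset, such as a voter roll, on the quasi-identifiers ZIP code, birth date, and sex restores them.

```python
# Toy linkage attack: records with names removed are re-identified by joining
# on quasi-identifiers (ZIP code, birth date, sex). All data is invented.

deidentified_records = [
    {"zip": "21205", "dob": "1954-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "21218", "dob": "1989-02-14", "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (e.g. a voter roll) that still carries names.
public_roll = [
    {"name": "Jane Doe", "zip": "21205", "dob": "1954-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "21218", "dob": "1989-02-14", "sex": "M"},
]

def reidentify(records, roll):
    """Join the two datasets on the (zip, dob, sex) quasi-identifier triple."""
    index = {(p["zip"], p["dob"], p["sex"]): p["name"] for p in roll}
    matches = []
    for r in records:
        key = (r["zip"], r["dob"], r["sex"])
        if key in index:
            matches.append({"name": index[key], "diagnosis": r["diagnosis"]})
    return matches

print(reidentify(deidentified_records, public_roll))
# → both "deidentified" diagnoses are reattached to named individuals
```

Nothing in the stripped dataset identifies anyone on its own; the identification comes entirely from the join, which is why removing direct identifiers alone does not protect our interests.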

Tech to the rescue!

Fortunately, there are a handful of exciting new technologies that may let us escape the Goldilocks Dilemma entirely by enabling us to gain the benefits of our collective data without giving up our privacy. This sounds too good to be true, so let us explain the three most revolutionary ones: zero knowledge proofs, federated learning, and blockchain technology.

  1. Zero Knowledge Proofs (ZKP)

Zero knowledge proofs use cutting edge mathematics to allow one party (the “prover”) to prove the validity of a statement to another party (the “verifier”) without disclosing the underlying data about their statement. Put another way, zero knowledge proofs let us prove things about our data without giving up our privacy. This could be an extremely valuable strategy in research since we could learn, for example, which treatments worked best for which people without needing to know which people received which treatments or what their individual outcomes were. Zero knowledge proofs are already being used in healthcare today—pharmaceutical manufacturers in the MediLedger project are deploying them to keep our drug supply chains both private and secure. 
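
To make the idea concrete, here is a toy Python sketch of the Schnorr identification protocol, a classic proof of knowledge with the zero-knowledge flavor described above: the prover demonstrates knowledge of a secret exponent x behind a public value y = g^x mod p without revealing x. The parameters are deliberately tiny for illustration; real systems rely on much larger groups and vetted cryptographic libraries.

```python
import random

# Toy Schnorr protocol: the prover convinces the verifier it knows the
# discrete log x of y = g^x mod p, without revealing x. The parameters are
# deliberately tiny; g = 2 generates a subgroup of prime order q = 11 in Z_23*.
p, q, g = 23, 11, 2

def schnorr_round(secret_x, public_y):
    r = random.randrange(q)           # prover's ephemeral secret
    t = pow(g, r, p)                  # commitment sent to the verifier
    c = random.randrange(1, q)        # verifier's random challenge
    s = (r + c * secret_x) % q        # prover's response
    # Verifier checks g^s == t * y^c (mod p) and learns nothing about x,
    # since (t, c, s) could be simulated without knowing x.
    return pow(g, s, p) == (t * pow(public_y, c, p)) % p

x = 7                 # prover's secret
y = pow(g, x, p)      # public value everyone may see

print(schnorr_round(x, y))       # honest prover: True
print(schnorr_round(x + 1, y))   # prover without the real secret: False
```

The verification works because g^s = g^(r + c·x) = t · y^c mod p, an identity only someone who knows x can satisfy for a challenge chosen after the commitment.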

  2. Federated Learning

Another privacy-enabling innovation is federated learning, which enables a network of computers to collaboratively train one algorithm while keeping their data on their own devices. Instead of sending the data to a central computer for training, federated learning sends the algorithm to the data, trains it locally, and only shares the updated algorithm with other parties. By decoupling the training of algorithms from the need to centralize data, federated learning limits the exposure of an individual’s data to privacy risks. With federated learning, several of the world’s largest drug makers, usually fierce competitors, are collaborating in the MELLODDY project to advance drug discovery. Federated learning lets these companies collectively train a single shared algorithm on their highly proprietary data without exposing that data to their competitors. Collectively these companies benefit, as they are effectively creating the world’s largest distributed database of molecular data, which they hope to use to find new cures and treatments, a process that promises to benefit us all.
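
A minimal sketch of the idea, with three hypothetical sites and made-up data: each site takes a gradient step on its own private records, and only the updated model weight, never the raw data, is sent to a coordinator that averages the weights (the “federated averaging” step).

```python
# Minimal federated-averaging sketch: three hypothetical sites jointly fit a
# shared slope w for y ≈ w * x. Raw (x, y) pairs never leave a site; only the
# locally updated weight is sent to the coordinator, which averages them.

site_data = [
    [(1.0, 2.0), (2.0, 4.1)],        # site A's private records
    [(3.0, 5.9), (4.0, 8.2)],        # site B's
    [(5.0, 10.1), (6.0, 11.8)],      # site C's
]

def local_update(w, data, lr=0.01):
    """One gradient-descent step on squared error, using only local data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

w = 0.0                                           # shared global model
for _ in range(200):                              # federated rounds
    local_weights = [local_update(w, d) for d in site_data]
    w = sum(local_weights) / len(local_weights)   # federated averaging

print(round(w, 2))   # → 2.0, the slope underlying all three sites' data
```

Real deployments like MELLODDY add secure aggregation and other safeguards on top, since even shared weight updates can leak information, but the decoupling of training from data centralization is exactly this loop.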

  3. Blockchain

Blockchain technology also has a critical role to play in creating a secure network for data sharing. The much-hyped “blockchain” stems from its first implementation in Bitcoin but has much broader applicability. Blockchains combine cryptography and game theory so that a network of computers reaches consensus on a single state; you can think of them as a network of computers joining together to create one giant virtual computer. This virtual computer maintains a shared ledger of “the truth” (a sort of database whose contents are continuously verified by all the computers in the network) and runs autonomous programs called “smart contracts.” These aspects of blockchains provide uniquely strong assurances of trust in data security and use: they execute the rules of the network consistently and objectively, and the whole process is transparent and universally auditable on the shared ledger. When applied to health data, these properties could empower individuals with an unprecedented ability to supervise and control the use of their own data, and a thriving market of startups has emerged for exactly this use case.
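
The core tamper-evidence property is simple to sketch: each block stores the hash of the previous block, so altering any earlier entry invalidates every link after it. The ledger entries below are hypothetical, and a real blockchain adds consensus, signatures, and much more on top of this structure.

```python
import hashlib
import json

# Minimal hash-chained ledger: each block commits to the previous block's
# hash, so tampering with any earlier entry breaks every link after it.

def block_hash(block):
    """Deterministic SHA-256 hash of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def chain_is_valid(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "consent granted to study X")       # hypothetical entries
append_block(ledger, "record accessed by researcher Y")
append_block(ledger, "consent revoked")

print(chain_is_valid(ledger))                 # → True
ledger[0]["record"] = "consent never granted" # tamper with history
print(chain_is_valid(ledger))                 # → False
```

This is why a shared ledger of consents and accesses is auditable: rewriting history is detectable by anyone holding the chain, which is the property the patient-data startups mentioned above are building on.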

The way forward

The cumulative significance of these paradigm-shifting technologies is their potential to eliminate the Goldilocks Dilemma between privacy and learning, individuals and the collective, once and for all. Their emergence forces us to rethink not only our national health IT policy, but our underlying ethical and legal frameworks as well. By creating the potential to build a future in which our treatment of data simultaneously respects individual and collective rights and interests, we believe there is an obligation to further develop and scale the core privacy-protecting functions of these technologies. Our aim is to spread awareness of the possibility of resolving a fundamental 21st century ethical dilemma with a technological solution. In this case, “can” implies “ought”: we must advocate for and demand that these and similar innovations be embedded into the future of our data and our health.

Robert Miller is building privacy solutions at ConsenSys Health and manages a blockchain and healthcare newsletter at https://bert.substack.com/.

Marielle S. Gross, MD, MBE is an OB/GYN and fellow at the Johns Hopkins Berman Institute of Bioethics where her work focuses on application of technology and elimination of bias as means of promoting evidence-basis, equity and efficiency in women’s healthcare (@GYNOBioethicist). 

]]>
https://thehealthcareblog.com/blog/2019/09/09/thinking-oat-of-the-box-technology-to-resolve-the-goldilocks-data-dilemma/feed/ 3
Barbarians at the Gate https://thehealthcareblog.com/blog/2019/09/05/barbarians-at-the-gate/ https://thehealthcareblog.com/blog/2019/09/05/barbarians-at-the-gate/#comments Thu, 05 Sep 2019 12:58:01 +0000 https://thehealthcareblog.com/?p=96751 Continue reading...]]>

By ADRIAN GROPPER, MD

US healthcare is exceptional among rich economies. Exceptional in cost. Exceptional in disparities. Exceptional in the political power hospitals and other incumbents have amassed over decades of runaway healthcare exceptionalism. 

The latest front in healthcare exceptionalism is over who profits from patient records. Parallel articles in the NYTimes and THCB frame the issue as “barbarians at the gate” when the real issue is an obsolete health IT infrastructure and how ill-suited it is for the coming age of BigData and machine learning. Just check out the breathless announcement of “frictionless exchange” by Microsoft, AWS, Google, IBM, Salesforce and Oracle. Facebook already offers frictionless exchange. Frictionless exchange has come to mean that one data broker, like Facebook, adds value by aggregating personal data from many sources and then uses machine learning to find a customer, like Cambridge Analytica, that will use the predictive model to manipulate your behavior. How will the six data brokers in the announcement be different from Facebook?

The NYTimes article and the THCB post imply that we will know the barbarians when we see them and then rush to talk about the solutions. Aside from calls for new laws in Washington (weaken behavioral health privacy protections, preempt state privacy laws, reduce surprise medical bills, allow a national patient ID, treat data brokers as HIPAA covered entities, and maybe more), our leaders have to work with regulations (OCR, information blocking, etc…), standards (FHIR, OAuth, UMA), and best practices (Argonaut, SMART, CARIN Alliance, Patient Privacy Rights, etc…). I’m not going to discuss new laws in this post and will focus on practices under existing law.

Patient-directed access to health data is the future. This was made clear at the recent ONC Interoperability Forum, which was opened by Don Rucker and closed with a panel about the future. CARIN Alliance and Patient Privacy Rights are working to define patient-directed access in what might or might not be different ways. CARIN and PPR have no obvious differences when it comes to the data models and semantics associated with a patient-directed interface (API). PPR appreciates HL7 and CARIN efforts on the data models and semantics for both clinics and payers.

Consider the ongoing news about the data broker called Surescripts and the data processor called Amazon PillPack. The FTC is looking into whether Surescripts used its dominant data broker position illegally in restraint of trade. Surescripts, in a somewhat separate action, is claiming that barbarian PillPack is using patient consent to break down the gate it erected for its business purposes. From my patient perspective, does Surescripts have a right to aggregate my prescription history and then refuse me the ability to share that data with PillPack without special effort? 

The possible differences between CARIN and PPR pertain to how the barbarian is labeled and who maintains the registry or registries of the barbarians. The open questions for CARIN, PPR, and other would-be arbiters of barbary fall into four related categories:

1 – Labels Only

2 – Registries Only

  • For deployment efficiency, the apps and services may be listed in controlled registries. The app could be registered by the developer of the app or by the operator (including a physician) that wants to use the app. This option is relevant because apps might have options the operator can choose that would change the criteria for a particular registry. Will registries support submissions by developers, operators, or both?
  • Aside from labels, patients tend to infer reputation on the basis of metrics like the number of users and the number of reviews for an app. Do the registries list software operators along with the software vendors in order to promote transparency and competition?
  • Do the registries allow for public comment with or without moderation?

3 – Labels and Registries Combined

  • What should be the number of registries and would they require one or more of the available labels?
  • A typical app store policy is a low bar to enable maximum competition and reduce disputes over exclusion. Consumer rating bureaus, on the other hand, tend to issue stars or checkmarks in a handful of categories in order to reward excellence. Is our label and registry design aimed at establishing a low bar (“You must be this high to be a barbarian”) or promoting a “race to the top” (such as 0-5 stars in a few defined categories)?
  • To improve fairness and transparency, should the orgs that define labels be separate from the orgs that operate registries?

4 – “Without special effort”

  • Opening the gate to their own records is an established right for both the patient and any barbarian the patient designates. Making this work “without special effort” requires implementation of standard dynamic client registration features that current gatekeepers have chosen to ignore. Should regulators mandate support for dynamic client registration, for any and all barbarians, as long as the app is only able to access the records of the individual patient exercising their right of access?
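
For readers unfamiliar with the mechanics, dynamic client registration is specified in RFC 7591: the app sends a JSON document describing itself to the server’s registration endpoint and, if accepted, receives a client_id to use in the authorization flow. The sketch below shows what such a request body might look like; the app name, redirect URI, and scope are hypothetical (the scope string follows the SMART on FHIR convention).

```python
import json

# Sketch of an OAuth 2.0 Dynamic Client Registration request body (RFC 7591),
# the mechanism that would let a patient-designated app register with a record
# holder's API "without special effort." The metadata values are hypothetical.

registration_request = {
    "client_name": "Patient-Directed Records App",       # hypothetical app
    "redirect_uris": ["https://app.example.org/callback"],
    "grant_types": ["authorization_code"],
    "response_types": ["code"],
    "scope": "patient/*.read",               # SMART on FHIR-style patient scope
    "token_endpoint_auth_method": "none",    # public (e.g. mobile) client
}

# The app would POST this JSON to the server's registration endpoint and
# receive a client_id in the response.
body = json.dumps(registration_request, indent=2)
print(body)
```

The point of the policy question above is that nothing in this exchange is technically hard; gatekeepers simply have to expose the endpoint.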

It seems that the definition of a barbarian is anyone who aims to get patient records under the current laws and at the explicit direction of the patient. The opposite of barbarians, whoever they may be within the gates of HIPAA, are able to get patient records without consent or accounting for disclosures by asserting “Treatment, Payment, or Operations” as well as the pretense of de-identification. Meanwhile, these HIPAA non-barbarians are able to sell off the machine learning and other medical science teachings as “trade secret intellectual property” in the form of computer decision support and other for-profit algorithms. This hospital-led privatization of open medicine will contribute to the next round of US healthcare exceptionalism.

And as for the patients, no worries; we’ll just tell them it’s about patient safety.

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country.

]]>
https://thehealthcareblog.com/blog/2019/09/05/barbarians-at-the-gate/feed/ 1
Protecting Health Data Outside of HIPAA: Will the Protecting Personal Health Data Act Tame the Wild West ? https://thehealthcareblog.com/blog/2019/08/19/protecting-health-data-outside-of-hipaa-will-the-protecting-personal-health-data-act-tame-the-wild-west/ https://thehealthcareblog.com/blog/2019/08/19/protecting-health-data-outside-of-hipaa-will-the-protecting-personal-health-data-act-tame-the-wild-west/#comments Mon, 19 Aug 2019 13:40:16 +0000 https://thehealthcareblog.com/?p=96694 Continue reading...]]>
Vince Kuraitis
Deven McGraw

By DEVEN McGRAW and VINCE KURAITIS

This post is part of the series “The Health Data Goldilocks Dilemma: Privacy? Sharing? Both?”

Introduction

In our previous post, we described the “Wild West of Unprotected Health Data.” Will the cavalry arrive to protect the vast quantities of your personal health data that are broadly unprotected from sharing and use by third parties?

Congress is seriously considering legislation to better protect the privacy of consumers’ personal data, given the patchwork of existing privacy protections. For the most part, the bills, while they may cover some health data, are not focused just on health data – with one exception: the “Protecting Personal Health Data Act” (S.1842), introduced by Senators Klobuchar and Murkowski. 

In this series, we committed to looking across all of the various privacy bills pending in Congress and identifying trends, commonalities, and differences in their approaches. But we think this bill, because of its exclusive health focus, deserves its own post. Concerns about health privacy outside of HIPAA are receiving increased attention in light of the push for interoperability, which makes this bill both timely and potentially worthy of your attention.

HHS and ONC recently issued a Notice of Proposed Rulemaking (NPRM) to Improve the Interoperability of Health Information. This proposed rule has received over 2,000 comments, many of which raised significant issues about how the rule potentially conflicts with patient and provider needs for data privacy and security.

For example, greater interoperability with patients means that even more medical and claims data will flow outside of HIPAA to the “Wild West.” The American Medical Association noted:

“If patients access their health data—some of which could contain family history and could be sensitive—through a smartphone, they must have a clear understanding of the potential uses of that data by app developers. Most patients will not be aware of who has access to their medical information, how and why they received it, and how it is being used (for example, an app may collect or use information for its own purposes, such as an insurer using health information to limit/exclude coverage for certain services, or may sell information to clients such as to an employer or a landlord). The downstream consequences of data being used in this way may ultimately erode a patient’s privacy and willingness to disclose information to his or her physician.”

Former ONC Coordinators submitted a letter of support for the provisions of the NPRM advancing interoperability but also expressed concerns about privacy and called for adoption of a comprehensive privacy framework to protect consumers.

Given Congress’ strong bipartisan support for interoperability, this may provide greater motivation for Congress to act to address the gaps in protections for health information – and it may be easier for Congress to pass a more focused privacy bill. It is also possible that this bipartisan bill could get incorporated into broader privacy legislation.

Who is covered?  Who is not covered?

The bill begins with extensive references to the 2016 Department of Health and Human Services (HHS) report, Examining Oversight of the Privacy & Security of Health Data Collected by Entities Not Regulated by HIPAA (the “2016 HHS Report”). That report described the limited scope of HIPAA, identified a broad scope of entities holding health information outside of HIPAA’s coverage, and recommended that Congress close the gaps in protections. To the best of our knowledge, this is the first bipartisan bill introduced to specifically respond to this HHS report. 

The bill does not cover all health data outside of HIPAA. Instead, the bill targets “operators” of “consumer devices, services, applications, and software” that are primarily designed for or marketed to consumers and “a substantial purpose of use of which is to collect or use personal health data.” (For purposes of this post, we’ll refer to them as Personal Health Data Tools.)  Personal Health Data Tools expressly include direct to consumer genetic testing services, mobile technologies, and social media sites. Personal health data is defined in a way similar to protected health information under HIPAA: information that relates to the past, present, or future physical or mental health of an individual and that “identifies the individual, or with respect to which there is a reasonable basis to believe that the information can be used to identify the individual.”

The bill seems to target the types of entities most likely to be collecting data from electronic medical records on behalf of, or with the consent of, patients, potentially addressing the very concerns expressed about interoperability initiatives.

But even within this narrow focus, there are limits to its coverage. 

The bill expressly does not cover products where “personal health data is derived solely from other information that is not personal health data” (for example, GPS data).  This language seems to exempt entities that collect social determinants data (such as age, income, education level, zip code) and use it for health purposes.

It also could be hard to determine when a product or service has a “substantial” purpose of collecting or using personal health data, particularly when collection of data that could ultimately be used for health purposes is not counted as personal health data. There also could be products where the data collection is not a “substantial purpose” of the business but rather a byproduct of delivering another service. For example, an implantable device like a pacemaker generates data, but the primary (arguably “substantial”) purpose of the device is to maintain healthy heart rhythms.

Also not covered are products or services “designed for, or marketed to” HIPAA covered entities and business associates, likely because those products and services would be covered by HIPAA.

What new requirements will apply to Personal Health Data Tools? New regulations.

The bill does not just extend HIPAA to operators covered by the bill. Instead, the bill sets out a process, on a fairly quick timeframe (although potentially not quick enough – see below) for developing privacy and security regulations that will apply to Personal Health Data Tools. 

The bill requires HHS, in consultation with the FTC and the HHS Office of the National Coordinator (ONC), to establish a task force of up to 15 members representing “a diverse set of stakeholder perspectives.” The Task Force, which will be governed by the Federal Advisory Committee Act (and therefore must conduct most of its meetings in public), has a year to develop a report to Congress, as well as to HHS, the FTC, and the Food and Drug Administration (FDA), with its findings.  The bill identifies the following as specific areas of focus for the Task Force:

  • Long-term effectiveness of de-identification methods for genetic and biometric data;
  • Security concerns (including cybersecurity risks) and standards to address them, for Personal Health Data Tools;
  • Privacy concerns and protection standards related to consumer and employee health data;
  • Reviewing the 2016 HHS Report and advising on whether it needs to be updated; and
  • Advising on resources to educate consumers about the basics of genetics and direct-to-consumer genetic testing.

After HHS receives the report of the Task Force, the bill requires HHS to publish privacy and security regulations to govern personal health data that is “collected, processed, analyzed or used by” Personal Health Data Tools within six months.  HHS is required to consult with the FTC, ONC, FDA, “relevant stakeholders,” and “heads of other Federal agencies as the Secretary considers appropriate” (possibly the Office for Civil Rights?), in developing these regulations. It is noteworthy that HHS is tasked with regulating this particular group of non-covered entities, as other bills pending in Congress would vest privacy authority with the FTC.  

The bill does not dictate particular privacy and security protections that HHS must apply to Personal Health Data Tools; however, the bill does require HHS to address a number of issues.  Specifically, the bill requires HHS to consider:

  • The findings of the 2016 HHS Report;
  • Regulations and guidance issued by the FTC, as well as the HIPAA regulations;
  • Uniform standards for consent related to genetic, biometric, and personal health data;
  • Exceptions to consent requirements, such as for law enforcement, academic research or research on health care utilization and outcomes, emergency medical treatment, or determining paternity;
  • Minimum standards of security that may differ according to the nature and sensitivity of the data collected by Personal Health Tools;
  • Appropriate standards for de-identification of personal health data; and
  • Appropriate limitations on the collection, use or disclosure of personal health data.

In developing regulations to address the areas identified above, HHS is also required to consider:

  • Developing standards for obtaining user consent that helps assure that consumers understand how their personal health data will be accessed, used, and shared;
  • How to limit the transfer of personal health data to third parties and provide consumers with greater control over marketing uses of their data;
  • Secondary uses beyond what the consumer initially consented to;
  • A process to permit withdrawal of user consent;
  • Providing a right of access for consumers to copies of personal health data; and
  • Providing a right to delete and amend personal health data, “to the extent practicable.”

Unresolved issues

Enforcement.  The bill gives HHS the authority to issue regulations but does not establish any penalty authority for violation of those regulations, leaving an open question as to whether there will be any way to hold entities accountable for complying with them. This is a pretty significant hole in the bill’s framework of protections.

Timing. If the bill is at least partially aimed at addressing concerns that could potentially derail or slow interoperability initiatives, Congress – and HHS – need to move quickly. If the timelines in the bill are kept, regulations could be proposed within 1.5 years of enactment. ONC is hoping to finalize the interoperability and information blocking regulations by the end of 2019, and the interoperability requirements would need to be installed by EHR vendors within two years after the rule is final.  So there are arguably some synergies to the timing of the new regulations and when interoperability initiatives will be fully implemented. But six months is a very short time for HHS to complete drafting regulations and get them through the federal clearance process, and getting to a final rule after rules have been proposed could add at least another year to that schedule. 

Which Rules Apply?  Although the bill tries to make clear that entities covered by HIPAA will not be subject to the new regulations, there likely still will be some confusion in coverage. For example, there will be products marketed both to providers and for consumer use (such as personal health record products that have consumer-facing portals as well as data services for providers), which makes it more difficult to discern which set of regulations applies (sorting that out is something HHS could tackle during the regulatory process).

We’ll be keeping an eye on this bill, as we will with all of the privacy bills pending before Congress. Stay tuned for more. 

Deven McGraw , JD, MPH, LLM (@healthprivacy) is the Chief Regulatory Officer at Ciitizen (and former official at OCR and ONC). She blogs at https://medium.com/@ciitizen.

Vince Kuraitis, JD/MBA (@VinceKuraitis) is an independent healthcare strategy consultant with over 30 years’ experience across 150+ healthcare organizations .He blogs at e-CareManagement.com.

]]>
https://thehealthcareblog.com/blog/2019/08/19/protecting-health-data-outside-of-hipaa-will-the-protecting-personal-health-data-act-tame-the-wild-west/feed/ 2
A National Patient Identifier: Should You Care? https://thehealthcareblog.com/blog/2019/07/09/a-national-patient-identifier-should-you-care/ https://thehealthcareblog.com/blog/2019/07/09/a-national-patient-identifier-should-you-care/#comments Tue, 09 Jul 2019 12:00:41 +0000 https://thehealthcareblog.com/?p=96485 Continue reading...]]>

By ADRIAN GROPPER, MD

The rather esoteric issue of a national patient identifier has come to light as a difference between two major health care bills making their way through the House and the Senate.

The bills are linked to outrage over surprise medical bills but they have major implications over how the underlying health care costs will be controlled through competitive insurance and regulatory price-setting schemes. This Brookings comment to the Senate HELP Committee bill summarizes some of the issues.

Who Cares?

Those in favor of a national patient identifier are mostly hospitals and data brokers, along with their suppliers. More support is discussed here. The opposition is mostly on the basis of privacy and libertarian perspectives. A more general opposition discussion of the Senate bill is here.

Although obscure, national patient identifier standards can help clarify the role of government in the debate over how to reduce the unusual health care costs and disparities in the U.S. system. What follows is a brief analysis of the complexities of patient identifiers and their role relative to health records and health policy.

Patient Matching

Patient matching enables surveillance of patient activity across service providers and time. It can be done either coercively or voluntarily. We’re familiar with voluntary matching like using a driver’s license number to get a controlled substance prescription. People are not aware of the coercive matching that goes on without our consent.

Voluntary matching is cheap and reliable. Coercive surveillance for patient matching is quite expensive and prone to errors. Why would so many businesses promote the coercive alternative? It’s mostly about money. The relationship between health surveillance and money in the U.S. healthcare system is relatively unique in the world. The issue of a national patient identifier is also pretty specific to the U.S. The reasons, as with all things in U.S. healthcare, are complicated. But, fundamentally, they boil down to two things:

  • Patients have a right to be treated without identification — what HIPAA calls “known to the practice” — but paying for that treatment clearly requires some identification.
  • The byzantine financial incentives in the U.S. system mean that thousands of data brokers have a financial interest in the hidden surveillance. Otherwise, they would just ask patients for consent.
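
A toy example (with invented records) of why matching on demographics alone, without the patient's participation, is error-prone: linking two hospitals’ records on name plus birth date can both falsely merge two different people who happen to share those traits and miss a true match over a single typo.

```python
# Toy illustration of error-prone demographic patient matching: records are
# linked on name + date of birth, which can collide (two different people
# sharing both) and can miss (one stray character breaks the link).
# All records are invented.

hospital_a = [
    {"name": "maria garcia", "dob": "1970-01-02", "mrn": "A-100"},
    {"name": "james lee",    "dob": "1982-05-09", "mrn": "A-101"},
]
hospital_b = [
    # A *different* Maria Garcia with the same name and birth date:
    {"name": "maria garcia", "dob": "1970-01-02", "mrn": "B-200"},
    # The same James Lee, but a stray double space breaks the key:
    {"name": "james  lee",   "dob": "1982-05-09", "mrn": "B-201"},
]

def naive_match(a_records, b_records):
    """Link records across hospitals on the (name, dob) pair."""
    index = {(r["name"], r["dob"]): r["mrn"] for r in b_records}
    return {r["mrn"]: index.get((r["name"], r["dob"])) for r in a_records}

print(naive_match(hospital_a, hospital_b))
# → {'A-100': 'B-200', 'A-101': None}: a possible false merge and a missed link
```

Voluntary matching sidesteps both failure modes because the patient supplies a deliberate identifier instead of leaving the system to guess from demographics.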

Insurance

Payers already have a patient identifier. The impact of adding a surveillance component, either voluntary or coercive, is hard to estimate. Would patients have a choice of plans with or without coercive surveillance? Would we need regulations, similar to GINA, to reduce the risk of biased interpretation? I’m not aware of any insurance industry comments on the House national patient identifier amendment.

All Payer Claims Databases

Pretty much everyone in the health care “system” is working as hard as they can to avoid transparency. Transparency of quality, of cost, of data uses, of directories, of “black box” and artificial intelligence algorithms, and more. The principal strategy for both the House and Senate versions of the cost reduction bills is to increase transparency, but that could be achieved with either coercive or voluntary identifiers.

Prescriptions

Coercive patient surveillance is already in place on a massive scale. Surescripts tracks over 200 million U.S. patients and sells that information for all sorts of purposes without patient consent or obvious oversight. In theory, one can opt out of Surescripts. In practice, it’s nearly impossible. (I tried it.) I did find errors in my file. Even fixing those errors was more trouble than it was worth. Would Surescripts’ coercive surveillance be mitigated by a national patient identifier? Quite possibly, if the final legislation introduces privacy protections, such as opt-in and real-time patient notification by Surescripts or anyone else that is making use of the identifier.

Known to the Practice

HIPAA encourages a trusting physician-patient relationship by allowing confidential and even anonymous consultation. This promotes public health. The implementation of a national patient identifier must preserve this option.

TEFCA

The federal government has been trying to create a national network for health records for over a decade. The current state is the Trusted Exchange Framework and Common Agreement (TEFCA) Draft 2. TEFCA is still far from settled, with major detractors among the incumbents and no clear solution to the very hard problems of regulatory capture of standards, security, consent, and patient matching. Three comments by Patient Privacy Rights address these issues.

Aside from moving patient data from here to there, TEFCA aims to provide a surveillance mechanism that will track the locations where patients receive health services. This can be quite useful for maintaining a longitudinal patient record, measuring outcomes, and informing research, as well as policy.

But a national surveillance system can also spook patients and increase public health risks if populations concerned about bias and loss of opportunity hide or actively game the system. It’s therefore essential to design TEFCA with the highest level of privacy and transparency, similar to what we have in finance. A national patient identifier will help TEFCA, but only if it is voluntary (linked to consent), transparent (to mitigate security risks), and most importantly, if it replaces the current design based on coercive surveillance.

Privacy

People already have any number of national-scale identifiers. Mobile phone numbers and the unique device identifiers that phones broadcast just by being on, email addresses, driver’s license numbers, Medicare and private insurance IDs, a Social Security Number, and credit cards. What matters for privacy is not the existence of personal identifiers but how they are used. Is the usage regulated? Does use in one domain, e.g. purchasing, cross over into another domain such as taxation? Is the use of the identifier voluntary like when you sign to allow your credit surveillance history to be accessed by an auto dealer or a landlord? Are you notified whenever an identifier is used? Are there usage logs and statements conveniently available to you? A national patient identifier will need to answer all of these questions and more.

Errors, bias, and ethics

Every large system is subject to errors, bias, and ethical issues. The proponents of a national patient identifier make self-serving arguments about reducing errors, such as assigning data to the wrong patient, without a critical analysis of how errors might be intentionally or accidentally introduced into the system. Other questions include how patients can catch errors or omissions and how access to a national identifier might bias relationships with employers or a new generation of dating sites. The ethics of health care are mostly about the unintended consequences of what superficially seems like a good idea.

Coerced, Voluntary, or Self-sovereign

Self-Sovereign Identity (SSI) is identity that is cryptographically secure and controlled by the individual person. If we introduce a national identifier, for patients or any other large-scale use, in 2020, should that identifier be compatible with SSI?

Independent Patient-Controlled Longitudinal Health Record

A new national patient identifier is not an end in itself; it must serve or enable something new. That new thing could be universal healthcare coverage, which exists in almost every other developed economy. Another would be a longitudinal health record that is independent of any particular public or private institution. An independent health record would promote competition, enable greater transparency of outcomes and costs, and significantly reduce the costs of research and innovation. It’s important to design TEFCA and other federal programs around the outcome rather than a tweak of the process.

Non-HIPAA Components

What would be the scope of a new national patient identifier? Should it be used to add non-HIPAA components like exercise or diet to a patient’s record? Should it apply to over-the-counter purchases in pharmacies or telemedicine from outside the US? Will the new identifier expand the scope of surveillance by Facebook, Google, and other hard-to-avoid platforms?

Should you care?

Yes. The uniquely high U.S. health care costs are now driving politics directly and indirectly. Universal coverage could be the top issue in 2020. But health costs also impact immigration discussions, as well as how we deal with technology-driven shifts in employment and employer-based insurance.

Bipartisan efforts such as the “surprise medical bills” legislation now before the House and Senate are aimed at health care cost outcomes and the balance of power between hospitals, payers, patients, physicians, and regulators. That balance of power was swept under the political rug in previous efforts. With health care waste and fraud running at about $1.5 trillion, or 6 percent of GDP, the debate over a national patient identifier should not be about the process of patient matching but over the path to increased transparency, competition, and innovation.

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country. This post originally appeared on Bill of Health here.

]]>
https://thehealthcareblog.com/blog/2019/07/09/a-national-patient-identifier-should-you-care/feed/ 6
Our Cancer Support Group On Facebook Is Trapped https://thehealthcareblog.com/blog/2019/05/30/our-cancer-support-group-on-facebook-is-trapped/ Thu, 30 May 2019 13:00:24 +0000 https://thehealthcareblog.com/?p=96302 Continue reading...]]> Our Experience on Facebook Offers Important Insight Into Mark Zuckerberg’s Future Vision For Meaningful Groups

By ANDREA DOWNING

Seven years ago, I was utterly alone and seeking support as I navigated a scary health experience. I had a secret: I was struggling with the prospect of making life-changing decisions after testing positive for a BRCA mutation. I am a Previvor. This was an isolating and difficult experience, but it turned out that I wasn’t alone. I searched online for others like me, and was incredibly thankful that I found a caring community of women who could help me through the painful decisions that I faced.

As I found these women through a Closed Facebook Group, I began to understand that we had a shared identity. I began to find a voice, and understand how my own story fit into a bigger picture in health care and research. Over time, this incredible support group became an important part of my own healing process.

This group was founded by my friends Karen and Teri, and has a truly incredible story. With support from my friends in this group of other cancer previvors and survivors I have found ways to face the decisions and fear that I needed to work through.

Facebook recently had a summit to share that groups are at the heart of their future. We had a summit of our own with some of the amazing leaders within the broader cancer community on social media.

Our Support Group is a Lifeline. And We’re Not Alone.

As a group of cancer previvors and survivors, we’re not alone. Millions of people go online every day to connect with others who share the same health challenges and to receive and provide information and support. Most of this happens on Facebook. This act of sharing stories and information with others who have the same health condition is called peer support. For many years there has been a growing body of evidence that peers seeking information from each other can and do improve the way they care for themselves and others. Today many of these peer support groups exist on Facebook.

Source: Susannah Fox + Reframe Health

Our Support Group Is Trapped. We Cannot Leave.

I know what anyone reading this might be thinking if you have experienced a peer support group. After all the terrible news about Facebook and privacy, why would ANYONE share sensitive or private health information on Facebook?!

Sending out an SOS to anyone who can help us. Photo credit: Radub85

The truth is: we really have no choice. We’re trapped. Many of these health communities formed back before we understood the deeper privacy problems inherent in digital platforms like Facebook. Our own group formed back in 2009, when Facebook was the “privacy aware” alternative to MySpace. And because Facebook grew so big, the network effect becomes very strong; patients must go where the network of their peers lives. We started out as a small collection that organically grew over time to become bigger and more organized. This dilemma of the network effect is illustrated beautifully in an Op-Ed by Kathleen O’Brian, the mother of a child with autism who relies on her own peer support group and who wishes she could jump ship but cannot.

People turn towards peer support groups when we fall through the medical cracks of the healthcare system. When facing the trauma of a new cancer diagnosis and/or genetic test results, the last thing on your mind is whether you should be reading the 30-page privacy policies that tech platforms require. Rather, patients need access to information. Patients need it fast. We need it from people who have been down the same path and who can speak from personal experience. And that information exists within these peer support groups on Facebook. We need to be protected when we are vulnerable to those who can use information about our health against us.

Our awakening to deep cybersecurity problems.

My own experience with peer support groups took a terrifying turn last April. After the news of Cambridge Analytica broke in headlines, I asked myself a simple question: what are the privacy implications of having our cancer support group on Facebook?

As a geek with a professional background in tech, I thought it might be fun to do some research after looking at the technical details of what happened with Cambridge Analytica. As I looked at the developer tools on Facebook’s platform, I began to get concerned. Not long after this initial research, I was lucky enough to meet Fred Trotter, a leading expert in health data and cybersecurity. I shared this research with Fred. What followed next for me was a crash course in cybersecurity, threat modeling, coordinated disclosure, and learning about the laws that affected our group. Fred and I soon realized that we had found a dangerous security flaw that scaled to all closed groups on Facebook.

Since discovering these problems and navigating submission of this vulnerability to Facebook’s security team, our group has been desperately seeking a feasible path forward to find a safer space. We have awakened to the deeper issues that created breach after breach of data on Facebook. It seems like every day we hear about a new data breach and a new apology from Facebook.

Our trust is gone. But we’re still trapped.

The lasting impact of peer support group privacy breaches

When health data breaches occur, members of vulnerable support groups like ours are at risk of discrimination and harm. Women in our own support group can lose jobs and healthcare when health information generated on social media is used to make decisions about us without our knowledge or consent. For example, health insurers are buying information about my health — and potentially can use this to raise my rates or deny coverage. And 70% of employers are using social media to screen job candidates.

For me, these security problems raise questions about the lasting impact on our group when data is shared without our knowledge or consent. Without transparency and accountability from these tech companies on their data-sharing practices, how will we ever know what decisions are being made about us? If the data generated in the very support groups these patients need to navigate the trauma of a health condition is used against group members, who is being held accountable?

There is a stark contrast between Facebook’s rhetoric about “meaningful groups” and our current reality. We are trapped. Who is protecting these vulnerable groups? Who is being held accountable if and when the privacy and data generated by these groups are breached and used against their members? What are the solutions that give us the ability to trust again?

Does Our Support Group Have Any Rights?

Over this past year we have done a lot to try and understand what our rights are. Digital rights for groups like my own really do not exist. I have been reflecting on how, when someone is arrested, a police officer reads them their Miranda rights.

“You have the right to remain silent. Anything you say can and will be used against you.”

This is really our only right at the moment. These words keep repeating in my mind as I think about our group’s current predicament of what to say and not to say about health on social media. From the perspective of a cancer support group, it seems we’ve reached a point where anything we share on Facebook can be used against us… by third parties without our knowledge or consent. As we lose our trust, we stop engaging. We stop trusting that it is safe to share things with each other in our group. We become silent. Moreover, our group cannot simply pick up and leave. Where would we go? What happens to the 10 years of work and resources that we created on Facebook, which we would lose? How do we keep the same cycle from repeating on a new platform?

At the root of this problem there are gaping holes in consumer privacy rights that might protect our group. While there are rules about health data breaches from the FTC there has been no enforcement to date. We are watching and waiting to see what the FTC might do. And while health information shared in hospitals, clinics, and doctors’ offices is protected by HIPAA, no such protection applies to the enormous amount of personal health information provided to social networks every day. The millions of people who convene through support groups are in a highly vulnerable position, and are currently powerless to change the dynamic to one in which they have protections and rights.

Congress and the FTC have held numerous hearings about a path forward to protect consumer data privacy, and a central theme for these dialogues is what to do about Facebook. The FTC has held hearings upon hearings on consumer privacy in the 21st century, and recent hearings in Congress include those at the Senate Commerce Committee. While these hearings show a generalized desire to enact meaningful change, and some recognition of the urgency of the problem, I cannot help but notice the lack of representation in these dialogues from the actual consumers who are affected by these privacy problems. I have held onto hope that there would be meaningful policy discussion about how to protect these vital peer support communities, but realize that we must help ourselves.

“We Take Your Group’s Privacy Very Seriously.”

Last year, we started a dialogue with Facebook’s teams after submitting our security vulnerability via the white hat portal. I heard over and over again from people at Facebook: “we take your privacy very seriously.” But Facebook never publicly acknowledged or fully fixed the security problems created within their group product. In fact, Facebook directly denied that there was ever a privacy and security problem for our groups.

Given this experience, you can imagine my surprise this week when Mark Zuckerberg announced his big new plans for Facebook. After a heartwarming commercial of a twenty-something finding her people in meaningful groups, Zuck walks onto the stage and declares: “The future is private.”

Our support group had reasonably expected the present and past to be private too.

Watching the F8 Summit my heart sank. It seems we must all submit to this future that Facebook imagines for us. A future where problems and abuse in Silicon Valley are swept under the carpet. Where no one is accountable. A future where exploitation of our data lurks just underneath the surface of all the heart-warming rhetoric and beautiful design for meaningful groups. Currently Facebook Groups have one billion users per month. Our trapped group is just one example of so many that are at the heart of Facebook’s future as a company.

These groups go beyond health to others seeking support for a shared identity. Active duty military. Survivors who have lost a loved one. Moms needing support from other moms. Cybersecurity professionals. In extreme cases the information in vulnerable groups can be weaponized. For example there were groups for the Rohingya in Myanmar and groups to support sexual assault survivors that are now quiet or have been deleted.

Facebook unveils Groups as the heart of its future.

It seems that the data that has made this company so wealthy is still a priority over our security and safety. I quietly watch the reactions to the latest Facebook event, and the lack of any responsibility to the people in groups like my cancer support group.

We Cannot Remain Silent

When I think about my support group of cancer previvors and survivors, I feel strong and brave. I fear retaliation writing this because we are truly vulnerable on the platform where we reside. Yet, we can’t remain silent. We don’t want any more empty promises from the technology platforms where we reside. We would rather not be appeased with shiny new features and rhetoric about privacy.

Rather, we seek autonomy. We seek a way to take our own power back as a group. We seek to protect our shared identity as a group and make decisions collectively. We seek to protect any data that is shared. There is something truly unique about the shared identity of our support group: we have always done things on our own terms. We are ten thousand women who have faced really hard realities about our future.

Facebook did not create our incredible groups. We did. We’ve worked hard for ten years cultivating this online group for a simple reason: we wanted our group to feel less afraid and alone than we felt in the beginning. Facebook does not have a monopoly on any vision for our future. The data generated within these groups is not an abstraction to us. It represents generations of suffering. Our own suffering. Our families’ suffering. We have an urgent need to develop a new way forward that protects our identity, and the future of our groups. We will create the future we choose for this community. That future exists with or without Facebook.

If you are in the same boat, please reach out to us here.

Andrea Downing. Previvor | Community Data Organizer | Accidental Security Researcher. This post originally appeared on Tincture here.

]]>