Institute of Medicine (US) Roundtable on Value & Science-Driven Health Care. Clinical Data as the Basic Staple of Health Learning: Creating and Protecting a Public Good: Workshop Summary. Washington (DC): National Academies Press (US); 2010.


5 Healthcare Data as a Public Good: Privacy and Security

INTRODUCTION

Any consideration of clinical data as a public good raises questions concerning the safety and security of individual patient records. Maintaining the confidentiality of data records is of paramount importance. Public perceptions of privacy in the context of medical records link directly to the trust the public has in the entire healthcare establishment and factor significantly into discussions of health data sharing. This complex issue has many challenging dimensions, from what happens after the initial intake of an individual’s data to what happens in data aggregation and secondary use. This chapter provides commentary from four experts who consider key legal and social challenges to privacy from a variety of perspectives, including public opinion, the implications of the Health Insurance Portability and Accountability Act (HIPAA), and institutions’ experiences inside and outside of health care.

To provide insight into public views on privacy issues in health care, Alan Westin, professor emeritus of public law and government at Columbia University and principal of the Privacy Consulting Group, presents outcomes of the 2007 national Harris/Westin survey that evaluated public attitudes toward the current state of health information privacy and security protection.1 The survey examines attitudes about handling sensitive patient information, health research activities involving individual patient data, and opinions on the extent to which the public trusts health researchers. The results indicate that the public holds strong privacy concerns about how their personal health information is handled, especially uses of data not directly relevant to providing care. The survey also indicates that current laws and organizational practices may not provide adequate privacy protection for patients. Westin suggests that patient-controlled privacy policies, such as those offered through repositories of personal health records, might help gain public traction on the issues of clinical data, privacy, and security. He also recommends activities related to health privacy, patient notice, and public education on privacy and compliance as opportunities to advance evidence-based medicine (EBM).

Balancing patient privacy protections with advancing data-driven clinical research and care delivery is an ongoing challenge for many healthcare organizations. In 2003, the HIPAA Privacy Rule took effect, and early changes to the Rule permitted sharing healthcare data for restricted purposes, essentially easing some limitations on providers and health plans related to health services research. With the increasing incorporation of electronic health records (EHRs) into care delivery and research, the growing volumes of data valuable for evidence-based research and care may eventually force significant changes to strike a balance between privacy and advancement. Marcy Wilder, a partner in the law firm of Hogan & Hartson, LLP, and former deputy general counsel at the Department of Health and Human Services (HHS), where she helped to develop HIPAA, comments on some important remaining legal barriers to effectively using clinical data for research. In particular, Wilder highlights the growing opportunity to address through policy the confluence of future, unspecified research and individual rights regarding the use of individual data. Also notable are her suggestions of formally reviewing HIPAA deidentification standards, safe harbor requirements, and the distribution of liability burdens across covered and noncovered entities.

Providing examples of other sectors’ approaches to striking a balance between privacy and security and research innovation, Elliot Maxwell, a fellow in the communications program at Johns Hopkins University and distinguished research fellow at Pennsylvania State University, discusses the notion of data openness as demonstrated through projects such as the Human Genome Project. Examples of greater openness are also prevalent in the public registration of clinical trials and in open-access journals. Greater digital openness has the potential to transform the use and application of clinical data in EBM, Maxwell suggests, but it must be tempered with determinations of the appropriate level of openness for given purposes. Maxwell provides an overview of the Committee for Economic Development’s report Harnessing Openness to Transform American Health Care, including recommendations on patient consent requirements, electronic filing of device and drug approvals, and EHR adoption incentives. The report advocates increased federal support for large clinical databases to accelerate advances in EBM and standards development.

The quality of clinical care and access to care services are perennial issues in American health care. The public demands higher quality care at lower costs with greater access. Healthcare data are uniquely positioned to provide deep insights into care delivery processes and outcomes. Simultaneously, provider organizations must secure individual patient health information and improve the coordination and quality of care. The tension between access to insight-generating data and the security of health data continues to create significant barriers for organizations striving to provide clinical services. Alexander Eremia, associate general counsel and corporate privacy officer at MedStar Health, Inc., discusses perceived and actual privacy and security hurdles experienced at healthcare delivery organizations nationwide. Eremia elaborates on the opportunities for building patient trust, structuring organizational policies and strategies to avoid adverse legal outcomes, and making strategic fiscal decisions associated with data retrieval and release for research. Addressing these opportunities through financial, strategic, regulatory, and public initiatives may advance access to healthcare data for research and EBM purposes.

PUBLIC VIEWS

Alan Westin, Ph.D., LL.B.

Principal, Privacy Consulting Group

Based on a national Harris/Westin survey conducted in 2007 and sponsored by an IOM project, this paper describes public attitudes toward the current state of health information privacy and security protection; health provider handling of patient data; health research activities; and trust in health researchers. The public is segmented into persons who have participated in health research projects, those who have been invited but declined (and why), and those never invited. Members of the public are identified who believe their personal health information has been disclosed improperly and by whom. After explaining the benefits and risks involved in having one’s personally identified health records used in health research, the paper explores what kinds of advance patient/consumer notice and consent mechanisms are desired by various subsets of the public. Potential privacy harms are documented that patients foresee if their health records are used without notice and choice mechanisms, or disclosed improperly. The findings are applied to emerging large-scale health data systems, especially new online personal health record repositories and health data-mining programs. In terms of positive actions suggested by these survey results, the paper discusses updated federal health privacy rights in legislation supporting information technology/EHR programs; national educational campaigns on the value of health research under robust health privacy rules and procedures; and new software tools that put direct control over the uses of health records into the hands of individual patients, through an individually driven “switch” mechanism between health data providers and health-research data seekers.

Privacy concerns pervade the future of health information technology (HIT). How the public feels about privacy issues links directly to the trust level that people have in the entire healthcare establishment, and it factors significantly in the move to EHRs, personal health records, interoperability exchanges, and so forth. Trust is a fragile commodity. Anything that profoundly threatens the trust that patients have in the healthcare system and in health researchers is a very dangerous step. We need to be careful, and my hope is that the survey data reported here will document this.

A national survey sponsored by an IOM working committee (Committee on Health Research and the Privacy of Health Information: The HIPAA Privacy Rule) investigated how the public feels about privacy in health care and the use of their information across the spectrum of healthcare operations. The survey’s sample was 2,392 respondents who were 18 years of age and older. The data were adjusted to represent the entire population of 255 million persons age 18 years and older. We could analyze survey results not only by the majority, but also by health status groups, by standard demographics, by people who reported on their personal experiences in healthcare use, and by their policy attitudes. This paper presents only top-level results; the full 2007 survey project report is available through the Public Access Records Office of The National Academies (publicac@nas.edu).

The survey formulated four statements and asked people to agree or disagree with each statement. The first statement was about how much people trusted their own healthcare providers—doctors and hospitals—to protect the privacy and confidentiality of their personal medical records and health information. A significant 83 percent expressed such trust, a result confirmed by many other surveys. (See Appendix D in the full 2007 survey project report, available from the IOM as shown above.) These surveys have shown high trust in the healthcare provider establishment as manifested in the direct relationships among the patient, doctor, labs, hospital, and so forth.

However, when we asked people whether a healthcare provider ever disclosed their personally identified medical or health information in a way they believed was improper, 12 percent said yes. That represents roughly 27 million adults. The survey report shows how many said the information was disclosed by their doctor, their hospital, their pharmacy, their lab, their insurer, and others. This response indicates that a significant segment of the public is really not comfortable with the way even their healthcare providers have handled their confidential information.

The second question was how much people agreed with this statement: “Health researchers can generally be trusted to protect the privacy and confidentiality of the medical records and health information they get about research subjects.” Sixty-nine percent said they agreed with that statement; lower than for healthcare providers, but still roughly a two-thirds majority endorsement of the health research function as seen by the public.

Our third statement asked for agreement or disagreement with this presentation: “The privacy of personal medical records and health information is not protected well enough today by federal and state laws and organizational practices.” In previous health and consumer privacy surveys, we have worded this statement both ways: Sometimes we asked people to agree or disagree with the statement that privacy is “well enough protected,” and the results come out the same. Fifty-eight percent of the public in this IOM survey said they do not believe there is adequate protection today for their health information, either from laws or from organizational practice. This suggests that HIPAA has not created a sense of comfort and security in the majority of the population. My sense is that this judgment is being driven in part by the constant reporting of health data breaches taking place, such as theft of laptops with medical information, improper disposal of hardcopy medical records, and insiders leaking medical information. Such losses may not be at the same incidence level as the theft of financial information or identity theft through capture of consumer data. But reporting of medical data breaches contributes, in my view, to the judgment of a national majority that their medical information is not effectively secured today.

Finally, we asked people to agree or disagree with this statement: “Even if nothing that identifies me were ever published or given to an organization making consumer or employee decisions about me, I still worry about a professional health researcher seeing my medical records.” The public is split right down the middle: 50/50. Half agree with the sense that there is an exposure that worries them and half are comfortable. Underlying this finding was probably the feeling that “if strangers are looking at my sensitive medical information, I am not quite comfortable with that.” The full report shows that this is more strongly felt by people who have potentially stigmatizing health conditions, such as those who use mental health services, have HIV or sexually transmitted diseases, have taken a genetic test, and so forth. Demographics and health status would give some subsets of the public an even stronger than 50 percent concern about this.

Given the mission of the IOM committee that sponsored the survey, our prime focus was on how people would relate to health research per se. Consequently, we asked people how interested they would be in reading or hearing about the results of new health research studies, causes and prevention of diseases, and the effectiveness of new medications and treatments. We cast the net widely and did not limit it to narrow, clinical trial-type health research. Matching other surveys, more than three-quarters of the public (78 percent) said they were interested in tracking that kind of health research.

Perhaps the single most important focus of our study was when we asked people whether they were ready to have their personally identified health information used by health researchers, and, if so, what kind of notice and consent they would want to have provided. The fact that this was an online survey enabled us to ask a detailed and carefully crafted question that described how health research is done and gave the arguments of health researchers in favor of general advance consent, or consents based on promises of confidentiality and human subject or Privacy Board oversight. We also included the view of “some people” that only notices describing the researchers, the research topic, and the uses of the research results would ensure adequate privacy protection.

Having presented our lengthy question, we asked people to choose one of five alternatives that best expressed their view. These were randomly presented to mitigate any presentation-order bias. A minuscule 1 percent said that researchers would be free to use their personal medical and health information without their consent at all. We might characterize this group as “let it all hang out.”

Eight percent said they would be willing to give general consent in advance to have their personally identified medical or health information used in future research projects without the researchers having to contact them. This small group might be characterized as a segment of the national population having a “high trust in the research establishment.”

Nineteen percent said their consent to have their personal medical or health information used for health research would not be needed as long as the study never revealed their personal identity and was supervised by an Institutional Review Board (IRB). These respondents were ready to trust such general researcher assurances.

The largest group, 38 percent, equivalent to about 97 million adults in the population, chose the following response: “I would want each research study seeking to use my personally identified medical or health information to first describe the study to me and get my specific consent for such use.” Clearly what is on the mind of this group is an insistence on knowing who is doing the research, what the topic is, and how the information is going to be used.

Finally, 13 percent said they do not want researchers to contact them or use their personal health information under any circumstances. This might be called the “no trust at all” segment of the public.

However, one in five respondents, or 20 percent, simply could not make up their minds. The fact that they could not choose one of the five alternatives suggests that a large group out there needs to be better informed or to have the choices put to them in a way that they can recognize before making a choice. A 20 percent nonresponse rate is quite unusual in policy-related survey research of this kind.

We asked those people who would require notice and express consent why they adopted this position, offering four possible reasons. As one might expect, 80 percent chose “I would want to know what the purposes of the research are before I consent.” Sixty-two percent said that knowing about the specific research study and who would be running it would allow them to decide whether they trusted the researchers. Fifty-four percent said they would be worried that their personally identified medical or health information would be disclosed outside the study, and 46 percent would want to know whether disclosing such information would help them or their families.

When we turned to what kind of harm the 38 percent believed could take place if personally identified health information was disclosed outside the study group, the answers primarily focused on discrimination. Privacy and discrimination values have always been closely linked. One claims privacy in order to protect oneself against being discriminated against in some benefit or opportunity. Here, results showed that people worry that distribution of their medical data could affect their health insurance, their ability to get life insurance, or their employment, or that it could result in their being discriminated against in a government program. The smallest number (33 percent) worried about embarrassment in front of friends, associates, or the public.

Now, here are some overall impressions about the survey results. First, this survey confirms, as many surveys have shown, that large majorities of the public hold strong concerns over the privacy and handling of their personal health information, especially concerning secondary uses of the data outside the direct-care setting. A strong majority, 58 percent, do not believe that current laws and organizational practices provide adequate privacy protection. The majority generally trust health researchers (albeit researchers undefined as to what kind they are) to maintain confidentiality, but what some researchers might hope for—that a promise of nonidentification and IRB review would persuade the public to give advance general consent—is not where the majority of the public is ready to come out at the present time. Also, even though we told people that researchers were concerned about the heavy costs of obtaining advance notice and consent, or that such requirements might undermine the statistical validity of samples, that was not enough to persuade a majority. However, it is fair to say that surveys would get somewhat different numbers if different kinds of researchers and topics were specified, so this is a variable to be understood.

What are the implications of our survey for expanded health data uses? Clearly, we are in transition from a part paper and part electronic record realm to an interoperable world of electronic health records, personal health records, and huge new online personal health data pools. This opens potentially valuable public-good possibilities for health research. Privacy, however, is a make-or-break issue for whether we are going to be able to achieve those advantages from large-scale health data research through electronic communication and transmission.

Of course, privacy is not an absolute. Rather, privacy is a matter of balance and judgment, and it is very contextual. Still, unless we can create what the National Committee on Vital and Health Statistics called a new data stewardship responsibility for health data holders and secondary users, we are going to lose the balanced-privacy battle, with the risk of sharp limits being placed on using personal health data for very important health research.

What elements would provide a positive health privacy context for health research? First, we need new legislation. HIPAA is outdated, as many people have said. The late Senator Edward Kennedy proposed support for HIT and EHR systems, and bills have already been introduced by Senator Patrick Leahy and Representative Ed Markey to add strong privacy protections to any bill that would support the health information technology cause. Without my endorsing any of those bills specifically, it is clear we will have to write a new code of privacy, confidentiality, and security into the legislation that is going to help organize and finance EHRs.

Second, excellent models of voluntary patient-controlled privacy policies are being offered by some new repositories of personal health records. Microsoft’s HealthVault is one example; Google Health has indicated it will do the same when it issues its health product shortly. Such models need to be encouraged and emulated by many others.

Third, we need independent health privacy audits and compliance verification processes. Although no instrument is ready now to carry this out in the health information technology field, new organizations with the right mixture of nonprofit, for-profit, government, and consumer groups could be developed. Such meaningful audit and verification mechanisms are absolutely necessary for public acceptance and trust of the new large-scale health research enterprises.

Fourth, there are some new, easy-to-use technologies for implementing patient notice and choice—not “trust me, I am going to store your data, I will only give it to the people you want,” but rather some new “switch, not store” programs. These will register patients and collect their privacy preferences. Then, they will connect data seekers—such as health researchers—with the data holders (providers, insurers, Regional Health Information Organizations, etc.) and facilitate the exchange of that information, without the data content ever being kept by the switch. This interesting idea could revolutionize the ability of patients to make informed decisions about the use of their personal information in health research.
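
To make this concrete, below is a minimal Python sketch of how such a “switch, not store” broker might behave. All names here (ConsentSwitch, PrivacyPreference, may_release) are illustrative assumptions rather than any deployed system’s design, and the ask_patient callback merely stands in for whatever notice-and-consent interface a real switch would present.

    # Hypothetical sketch of a "switch, not store" consent broker.
    # Illustrative only; no real system or API is being described.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class PrivacyPreference:
        patient_id: str                  # identifier known to the switch
        allow_research_use: bool         # blanket refusal if False
        require_per_study_consent: bool  # insist on study-by-study approval

    class ConsentSwitch:
        """Routes research data requests between seekers and holders.

        The switch stores only preferences and routing metadata; the
        clinical data itself flows from holder to seeker directly and
        is never kept by the switch.
        """

        def __init__(self) -> None:
            self._prefs: Dict[str, PrivacyPreference] = {}

        def register(self, pref: PrivacyPreference) -> None:
            """Record (or update) a patient's privacy preferences."""
            self._prefs[pref.patient_id] = pref

        def may_release(
            self,
            patient_id: str,
            study_description: str,
            ask_patient: Callable[[str, str], bool],
        ) -> bool:
            """Tell a data holder whether release to a seeker is allowed."""
            pref = self._prefs.get(patient_id)
            if pref is None or not pref.allow_research_use:
                return False  # unregistered patients default to no release
            if pref.require_per_study_consent:
                # Present the study description to the patient and
                # return his or her yes/no answer.
                return ask_patient(patient_id, study_description)
            return True  # advance general consent is on file

A data holder would consult may_release before honoring a researcher’s request; the decision, not the clinical data, is all the switch ever handles.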

Fifth, we need to conduct serious field research into how privacy is unfolding in the EHR programs being developed. Researchers need to survey the patients involved in EHR programs as well as to talk, onsite, face to face, about what experiences they have had, what worries them, whether and how those worries have been solved, and so forth. Otherwise, one is back at 10,000 feet talking abstract principles about EHR programs and privacy satisfaction. It would be highly valuable to fund and manage a program of empirical studies of the impacts of EHR systems on privacy, confidentiality, and security values.

Finally, the health establishment needs to sponsor a major national educational campaign to promote privacy-compliant, evidence-based health research. Without such a national campaign, the danger is that the balance side—the public-good aspect of sharing patient medical data—will not be fully appreciated by the current privacy-sensitive public.

HIPAA IMPLICATIONS AND ISSUES

Marcy Wilder, J.D.

Partner, Hogan & Hartson, LLP

This paper will address the HIPAA Privacy Rule (45 C.F.R. Part 164) and its effect on data research. As healthcare and HIT systems evolve, experience suggests that modifications are needed to strike the proper balance between protecting patient privacy and making data available for research to improve healthcare quality and to lower costs. Early advocacy efforts by the research community resulted in changes to the Privacy Rule that lightened some of the administrative burdens on healthcare providers and plans associated with making data available for research purposes. In addition, HHS revised the Rule to permit disclosures of limited datasets for research purposes. Identifying and developing policy alternatives for addressing the most significant barriers that remain, including those related to future unspecified research and data deidentification, will be essential to promoting the research enterprise.

The HIPAA Privacy Rule was the first comprehensive federal health privacy regulation. At the time of its drafting, HHS was focused on protecting privacy and ensuring that information would continue to be available within the healthcare system for appropriate uses. HHS set a baseline, making clear that health information could be used freely for treatment, payment, and healthcare operations. Policy makers were also clear that before health information could be used for marketing, an individual’s authorization would be required. The extent to which health information should, as a policy matter, be made available for research was far less clear. HHS, other federal agencies involved in the HIPAA rule making, healthcare stakeholders, and consumer advocates did not agree among themselves or with each other. Many believed research should not be placed in the same category as treatment, payment, and healthcare operations. But at the same time, they did not believe that individual authorization should always be required before protected health information (PHI) could be used for research purposes.

Some in the research community argued that HIPAA does not and should not regulate research per se and that the Privacy Rule simply should exempt research uses and disclosures.2 For nearly 25 years the Common Rule for Protection of Human Subjects (“Common Rule”)3 had regulated research privacy. IRBs were already tasked with determining whether protocols contained provisions adequate to protect the privacy of subjects and the confidentiality of data. The notion was to leave the current Food and Drug Administration (FDA), Office of Human Research Protections, and state regulatory frameworks in place and undisturbed by HIPAA.

This argument, however, was ultimately rejected by regulators. HIPAA restricted access by researchers to PHI, which at that time was held by healthcare providers and health plans. These HIPAA-covered entities would need guidance on how they were to treat uses and disclosures of PHI for research purposes. In addition, although longstanding protections were in place, some privacy advocates believed current protections were not sufficient. When HHS ultimately did address issues related to research uses and disclosures, it did not attempt to harmonize HIPAA with the existing regulatory framework for human subjects’ protection. It simply added yet another layer of regulation.

By 2002, 2 years after the Final Rule was issued, there was enough experience to suggest that the HIPAA Privacy Rule was unnecessarily creating barriers to medical research and that some provisions needed to change. The research community focused a great deal of effort on the deidentification safe harbor and the fact that data stripped of all requisite fields were not useful for many types of important research. The Department’s response was to add provisions permitting the disclosure of limited datasets for research, provided that a HIPAA-compliant data use agreement was in effect.

Under HIPAA, as initially promulgated, before information could be freely used for research, it needed to be deidentified under strict standards. In response to concerns expressed by the research community, HHS introduced the notion of a limited dataset, which is essentially deidentified data plus ZIP Codes, dates of service, and other dates related to the individual.4 If a party wanted to use this partially deidentified information for research, it could enter into a data use agreement, the contents of which are prescribed by the regulation, promising to protect the information. Once the agreement was executed, the dataset could be released for research purposes. These provisions did enable researchers to obtain health data more easily. Although there is a question as to whether these provisions are sufficient, they clearly helped.
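
As a rough illustration of these mechanics, the Python sketch below derives a limited dataset and gates its release on an executed data use agreement. The field names and the executed_duas set are assumptions made for the example, not the regulation’s own enumeration of permitted and excluded elements.

    # Hypothetical sketch: deriving a limited dataset and gating its
    # release on a data use agreement (DUA). Field names are assumed.

    # Direct identifiers that a limited dataset must still exclude.
    DIRECT_IDENTIFIERS = {
        "name", "street_address", "phone", "email", "ssn",
        "medical_record_number", "health_plan_id",
    }

    def to_limited_dataset(record: dict) -> dict:
        """Strip direct identifiers; unlike fully deidentified data,
        ZIP Codes and dates related to the individual may remain."""
        return {k: v for k, v in record.items()
                if k not in DIRECT_IDENTIFIERS}

    def release_for_research(record: dict, requester: str,
                             executed_duas: set):
        """Release a limited dataset only under an executed DUA."""
        if requester not in executed_duas:
            return None  # no executed agreement, no release
        return to_limited_dataset(record)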

In addition, in 2002 HHS provided an alternative to the accounting of disclosures requirement.5 That requirement mandates that when covered entities such as hospital systems and health plans disclose information for research purposes pursuant to an IRB waiver, they must keep an accounting of these disclosures and make it available to individuals on request. Keeping individualized records about which records were disclosed for which research protocols operating under an IRB waiver of consent was seen as quite burdensome by the covered entities. As a result, many covered entities, in particular smaller hospitals and those not affiliated with an academic institution, were restricting access to data.

HHS came up with an alternative. Instead of keeping track of every time data were disclosed pursuant to an IRB waiver, an institution could keep a list of all the research protocols for which information was disclosed pursuant to an IRB waiver for research purposes. This list, which for institutions such as academic medical centers could be voluminous and burdensome to maintain, would be provided to anyone requesting an accounting of disclosures, and the individual would, in effect, be told that perhaps his or her information had been disclosed for one of the protocols on the list. The extent to which this is privacy protective or helpful to the individual is questionable at best. It seems to constitute an example of a privacy requirement that imposes cost and burden yet does not deliver any meaningful privacy protection. Nonetheless, that is the current standard.

Experience over the past few years has helped highlight the need for further changes. The landscape surrounding research data has changed considerably, due in large part to significant technological changes that permit data aggregation on a scale that was previously unimaginable. In addition, emerging technology used by Google, Microsoft HealthVault, Dossia, WebMD, and others that will be aggregating data on behalf of consumers will further change the extent to which data are available for research.

One issue that should be revisited in this new context is the HIPAA deidentification standard.6 HIPAA now includes a deidentification safe harbor, which says in essence that data are deidentified as long as 18 specified data elements are removed and the covered entity does not have actual knowledge that a person could be reidentified from the dataset under the safe harbor. All the demographic data, all dates related to the individual—including date of birth, dates of service, and ZIP Code—and all unique identifiers—such as medical record number—must be removed.
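
In code terms, a safe-harbor filter amounts to a field-level blocklist combined with an “actual knowledge” check, as in the brief Python sketch below. The field names stand in loosely for the rule’s identifier categories; this is an illustration, not a complete or authoritative enumeration of the 18 elements.

    # Hypothetical sketch of a safe-harbor-style deidentification filter.
    # The fields below merely stand in for the rule's 18 identifier
    # categories; they are not a complete or authoritative list.

    SAFE_HARBOR_IDENTIFIERS = {
        "name", "street_address", "city", "zip_code",         # geography
        "date_of_birth", "admission_date", "discharge_date",  # dates
        "phone", "fax", "email", "ssn",                       # contact/IDs
        "medical_record_number", "health_plan_id",
        "account_number", "license_number", "device_id",
        "url", "ip_address",                                  # electronic traces
    }

    def deidentify(record: dict) -> dict:
        """Drop every listed identifier field from a patient record."""
        return {k: v for k, v in record.items()
                if k not in SAFE_HARBOR_IDENTIFIERS}

    def safe_harbor_release(record: dict, actual_knowledge_of_reid: bool):
        """Apply both safe-harbor conditions: the identifiers must be
        stripped AND the covered entity must have no actual knowledge
        that the remaining data could reidentify an individual."""
        if actual_knowledge_of_reid:
            return None  # safe harbor unavailable; seek another pathway
        return deidentify(record)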

Alternatively, covered entities can use the statistician method, under which a qualified statistician can certify that a dataset is deidentified. At the time the Privacy Rule was drafted, it was believed that a cottage industry of statisticians who were willing and able to certify large deidentified datasets would emerge. In practice, however, only a handful of statisticians are available to provide these certifications. Although a number of large data aggregators are using statistically deidentified datasets, it is not the industry norm for research enterprises.

Some argue that the deidentification safe harbor is too narrow and that researchers should be able to freely use those data that include some elements on the safe harbor list. However, some privacy advocates argue that in the Internet age, there is no such thing as deidentified data—that because of the widespread availability of electronic information and the ability to aggregate it, personal data cannot ever be deidentified. The resolution of this debate will have profound implications for public health research, epidemiology, and the future of research on large datasets.

Another issue that should be reconsidered is whether individuals should be permitted to authorize the use of information about them for future unspecified research. Today, obtaining a HIPAA authorization for uses and disclosures for future unspecified research is not permitted under the Privacy Rule.7 Privacy advocates argued at the time of the rulemaking that there is no way to adequately inform an individual about the privacy risks related to future unspecified studies. Research stakeholders, however, pointed out that individuals can and have authorized such uses under the Common Rule and that not permitting such authorizations unnecessarily limits and harms the research enterprise.

Another set of issues that needs to be discussed concerns whether liability burdens under the HIPAA Privacy Rule are properly distributed. Although research data involving PHI are held by both HIPAA-covered and -noncovered entities, liability risks reside largely with the HIPAA-covered entities. In addition to confusing rules and administrative record-keeping that many covered entities—smaller hospitals in particular—find unduly burdensome, the effect of these liability risks is that covered data holders, including hospitals, health plans, and other large HIPAA-covered entities with sizable pools of data, resist expending resources to create the deidentified data or limited datasets. They see little reason to spend the requisite time and money so that others can have large datasets on which to do research.

Finally, it is also true that HIPAA sometimes provides a convenient excuse for those who simply do not wish to share their data. Many stakeholders, for a variety of reasons, do not want to share. To successfully address the needs of both patients and the research community, policy makers need to understand which barriers regarding data sharing need legislative or regulatory solutions. This can be hard to discern because HIPAA is so often put up as a smoke screen to preclude the sharing of data.

Congress, agencies within HHS, and numerous advisory committees have recognized that the HIPAA research provisions need updating and improvement. As Congress and HHS examine and enhance federal health privacy protections, new opportunities for addressing the needs of the research community will emerge. Those who are ready with concrete and realistic proposals and solutions will be those most likely to succeed.

EXAMPLES FROM OTHER SECTORS

Elliot E. Maxwell, J.D.

Communications Fellow, Johns Hopkins University

This paper will attempt to put the use of healthcare data into the larger context of transforming health care by increasing openness. This means providing more access to more information to more people and allowing individuals to contribute their own expertise and insights to that information.

One of the many examples of increased openness in health care can be seen in the collaborative research model of the Human Genome Project, with results posted immediately, available to the world. Congress has mandated greater openness by requiring the public registration of more clinical trials. New models of disclosure and publication of research results in open-access journals and digital repositories provide greater openness. Greater access to information is transforming the relationship between doctors and patients and is increasing market incentives for improved health care.

Greater openness is not always better because of privacy and security issues. The challenge is to determine what level of openness is most appropriate for the particular purpose to be accomplished. Clearly, greater openness has been made possible through the rise of the Internet and the increasing digitization of information. These two phenomena have helped change how we think about the ability to access information and the ability of millions, even billions, of people to make contributions. The Internet has also led to a new understanding of how to obtain value through the sharing of information.

It was commonly believed that value from information or innovation came through controlling it. Because value was monetized through licensing or other means, controlling access to the information was considered critical to obtaining value from it. That notion of control underlies current intellectual property laws and is the basis for proprietary models.

Openness may be thought of as a continuum, running from closed to open. At one end are those things completely closed and controlled, such as writing a song but never sharing the score. A point toward greater openness on the continuum is occupied by proprietary software that one can license under generally restrictive conditions. At the other end, the open end, is the sharing of information with anyone and everyone, as seen in the posting of nearly anything on the Internet. Granted, someone might request that the material be removed because of intellectual property rules or some other legal restriction, but there is no central control agent that prevents individuals from posting. There are now licensing schemes, such as those that have evolved through the Creative Commons (http://www.creativecommons.org), by which the poster can inform everyone about the restrictions that exist on the use of the posting. These licenses also allow the poster to announce that there are no restrictions—that the information is completely “open.” However, if no restrictions are placed on the use of the information, does it still have value? With the increasing use of the Internet, we are finding that we can obtain great value from sharing information. Wikipedia is an example of the creation of great value via sharing.

Wikipedia is not completely open, but it is much closer to completely open than the proprietary Encyclopedia Britannica. In the few years since its founding, Wikipedia’s participants have created five times the number of entries found in Encyclopedia Britannica, which has been under development for a hundred years. A study in Nature found that scientific entries in Wikipedia were substantially equal to those in Britannica. We do not need to equate the two or to argue that Wikipedia should be regarded as a definitive source, but indisputably it has provided great value to millions of people, all based on contributions made without any expectation of monetary reward. On the openness continuum, Wikipedia is not completely open. Postings can be taken down under certain circumstances by people trained and empowered to make judgments about, for example, the “neutrality” of a posting.

Similarly, open-source software, which is usually thought of as open because its underlying source code is available without restriction, is not entirely open. A software application like Linux cannot be entirely open because no one would use an application that changed every time someone suggested an improvement. At the same time, open-source software is licensed in such a way that the source code will be seen by as many people as possible, which is the key factor in its success. It aims for continuous improvement through widespread sharing because such sharing makes it more likely that someone, somewhere, will have the inclination and the expertise to review and improve the code. The more open it is, the more likely it is to get better; however, it is not completely open because a new version of Linux will not be released until a group of experienced coders exercises its judgment and determines that the version is ready for prime time.

The success of Wikipedia and open-source software demonstrates the power of the Internet and how value can be added by sharing rather than by exercising strict control. This success also reveals how openness allows value to be obtained from unexpected sources. At the same time, openness invites negative contributions as well as positive ones. Contributions are welcomed not only from experts but also from a broad range of people, on the assumption that many people can add value even though who will do so cannot be determined ahead of time. Another fundamental assumption is that the value of contributions from unexpected sources outweighs the cost of screening out contributions that do not add value.

This orientation toward facilitating contributions from a broad range of individuals and organizations underlies the development of what some have called Web 2.0. In the early days of the World Wide Web, it was equated with the great library at Alexandria—it would provide access to a vast store of information to anyone with an Internet connection. That was great—but passive in that little thought was given to what those people could contribute. How could they modify it, what could they do to improve it based on their own expertise and experience? The degree to which people can modify the information or the process determines how responsive the information or process is. Openness is about both accessibility and responsiveness.

As noted earlier, the greatest degree of openness is not always the best answer to the question of what degree of openness is best suited for a particular purpose. This is where we can begin to ask salient questions regarding health care. What kind of information do I need for this purpose, whether it is research or treatment or the detection of emergent diseases? Who should have access to it? Under what terms and conditions? Should I allow other people to contribute to it? Should everyone be able to contribute, or only those prequalified in some way? Can they modify, repurpose, or redistribute it, and if so, under what terms and conditions, if any?

The report entitled Harnessing Openness to Transform American Health Care, recently released by the Committee for Economic Development (CED), examines the terms and conditions under which greater openness might improve health care across its entire production function, from research to treatment (CED, 2008). The report also considers cases in which greater openness can be harmful—such as unauthorized access to medical records or unauthorized disclosure of genetic information about an individual—or destabilizing, as in the relationship between patients and their caregivers. The report is not exhaustive, but it is a first attempt to use the lens of openness in an area rich with opportunities for improvement.

The report is about openness rather than the use of information and communications technology (ICT) in health care because, ultimately, openness is about an attitude; openness is about a predisposition for giving more people more access to information and more opportunities to contribute based on their own expertise and insight. There are scores of very good reports on the value of increasing the use of ICT in health care, but that should not be equated with increasing openness.

A wonderful example of openness in health care has nothing to do with information and communications technology. A researcher at the University of California–Los Angeles has bushmeat hunters in Cameroon send him samples of diseased animals they have caught. He then examines the samples for evidence of emerging diseases. That is about as far from HIT as you can get. But it is based on the recognition that great value can be obtained from unexpected sources—in this case the hunters add great value because they are operating on the front line of emergent diseases.

There are many other examples of openness, some of which are based on the use of information technology, while also demonstrating the importance of an attitude of welcoming contributions from others. Some of the most interesting examples come from interactions between caregivers and patients.

About half of primary care physicians report that their patients have arrived with research from the Internet. What does that tell us about openness? Obviously information technology has provided patients with greater access to information—some of it valuable, much of it wrong or inapplicable. What should caregivers do? They can elect to dismiss the Internet research out of hand or imply that valid information can come only from the doctor. They can treat such circumstances as a learning opportunity, educating patients to separate good research from bad. They might even find research that is new to them. In this context, openness is destabilizing the traditional doctor-patient relationship, but the end results may be more informed patients who can take more responsibility for their own health, and new and rewarding partnerships between caregivers and their patients.

Although the report touches on many areas, this paper will focus on the openness issues surrounding clinical data and electronic health records. Many kinds of information that are not available now would be valuable for patients. For example, it would be useful for patients whose caregivers recruit them for clinical trials to know whether the caregiver is being paid to recruit; similarly, it would be valuable for patients to know whether the caregiver has a financial interest in the treatment being recommended or is receiving gifts from a pharmaceutical firm whose product is being prescribed. In such cases more openness and more information would allow patients to make more informed decisions.

Congress acted recently to increase openness regarding clinical trials and posttrial surveillance. For drug trials that lead to an application for approval by the FDA, there is no compelling argument for disclosure before the application. On the other hand, data indicating safety issues that lead to a trial’s termination should be made available immediately so that others do not repeat the trial and put trial participants at risk unnecessarily.

Currently, when an application for approval is filed, the information associated with the application can be protected as a “trade secret.” It is not available at the time of the application, nor when the application is approved; it can be withheld for an extended period of time beyond approval based on its trade secret characterization. Moreover, once it is submitted it falls into a kind of regulatory black hole where the FDA makes no affirmative effort to make it available to researchers who might benefit from access to it. Even if the data are in an electronic form, they may not be arrayed in a manner that would allow other researchers to aggregate and manipulate them and to use them to develop more comprehensive databases.

There does not appear to be any compelling reason to withhold these data after a drug or other intervention has been approved. Companies have argued that the data should not be made available at all because doing so would provide a shortcut for competitors, but the company that submitted the data has had a large head start over any competitor because it has had years to scrutinize the data. The FDA approval has provided the company with a substantial legal benefit. On the other hand, withholding the data prevents academic researchers interested in the efficacy and safety of the intervention from benefiting from the data. Access to data underlying clinical trials does raise important questions of openness, including the value of the data to the company that submits the data and to competitors. But in making a decision about disclosure, the most important criterion should not be the impact on competition between drug-producing companies, but on the societal value of providing the information to researchers in general.

The specific recommendations made in the CED report regarding clinical trials and postapproval surveillance include the following:

  • The FDA should review existing requirements on patient consent to participate in clinical trials and make changes as appropriate. The bifurcated authority in this area should be ended.
  • Those recruiting participants for clinical trials should be required to disclose any financial interest in the recruitment.
  • The FDA should require electronic filing for all drug and device approvals.
  • The FDA should set standards for and require the filing of underlying clinical data, on approval, in a form that allows subsequent machine aggregation, search, and manipulation.
  • The FDA should require the filing of all studies that an applicant has commissioned on a drug or device that is being submitted for approval, whether or not the study was commissioned as part of the application.
  • The FDA should consider making public any studies that it conducts in the course of a drug or device approval.
  • Those conducting clinical trials should be required to report to the FDA, on detection, any instances that would reasonably suggest the use of fraudulent data.
  • The FDA should require disclosure of any limitations on researchers’ ability to comment on clinical trials with which they are involved.
  • The FDA should broaden the means by which postapproval adverse events can be reported and should make the reports more widely available.
  • The FDA should encourage the disclosure of postapproval data indicating the efficacy of interventions for nonapproved purposes.
  • The federal government should dramatically increase its efforts to directly compare the safety and efficacy of similar drugs and devices.

The report also deals with openness issues involving electronic health records. As with the data underlying clinical trials, data from EHRs are likely to be critical components of large databases that will serve as the breeding grounds for the development of evidence-based medicine. The active mining of these large databases, whose development has been encouraged by the latest amendments to the laws governing the FDA, should expand the number of medical practices that can claim an empirical base. At the same time, the CED report noted the need for disclosure of conflicts of interest by anyone participating in developing recommendations for clinical practice regimens.

The countervailing interests of privacy and security are evident in any consideration of openness and EHRs. Patients want their EHRs to be open enough to be accessible to anyone treating them, and open enough to receive data regarding appointments, prescriptions, treatments, test results, etc., but not so open as to allow anyone to have access to them without authorization. It will be critical to resolve these tensions in a way that enlists patient support for the development of EHRs. Development of an interoperable system of EHRs is stymied now by a lack of standards, a lack of incentives for the predominantly small medical practices to adopt them, and a lack of demand from patients due to concerns about privacy and security.

Under today’s HIPAA rules, there is a presumption toward openness in that a patient has a right to a copy of his or her records. But, as with clinical trial filings to the FDA for drug approvals, there is no requirement that these records be in digital form. The CED made several recommendations regarding EHRs:

  • Individuals and groups providing and funding health care should institute appropriate incentives for the adoption of information and communications technologies (including EHRs) to reduce health care’s burdensome administrative costs.
  • Federal research agencies should increase their support for the development of the large databases necessary for progress toward evidence-based medicine, including developing the necessary data standards.
  • Strict requirements on the disclosure of conflicts of interest should be applied to those participating in the development of recommended clinical practice regimens.
  • HIPAA should be amended to require that those parties who hold a patient’s medical records provide the patient with the opportunity to receive copies of those records in digital form.

Concerns about privacy and security are among the principal impediments for the development of an interoperable system of EHRs. But although there is much debate about how to deal with privacy and security, both technological and marketplace forces are racing ahead, rendering HIPAA’s privacy regime increasingly problematic. For example, there are more than 200 different systems of personal health records now in the marketplace, including Microsoft’s HealthVault. Yet because Microsoft is not a medical provider, the company is arguably not covered by HIPAA’s privacy requirements.

Clearly some HIPAA rules should apply. Perhaps such entities will be covered by an authority based on the Federal Trade Commission’s jurisdiction over advertised privacy protections, or by some other regulation that makes any entity that touches personally identifiable health data a steward of such data, with enforceable responsibilities. Ensuring that entities that hold this sort of information are covered, and that the rules governing their responsibilities and obligations are clear, will be of ongoing importance.

The need for clarity in the rules is often overlooked. One of the principal lessons from the failure to share information regarding the Virginia Tech shooter was that the individuals and institutions that had relevant information did not understand what they could do based on existing privacy rules—so they too often chose inaction as the safest response. That should not be the case. If the future of health care involves a far richer data environment, as I believe it must, we will need clarity in the rules regarding privacy and continuing educational efforts about what is and is not allowed.

Another lesson from EHRs is that there is a high level of anxiety and disquiet about privacy because there are no generalized privacy protections in the United States. Many individuals do not believe the environment is structured to protect their privacy. In this country we have tended to address privacy issues in “silos.” We tend to identify particular information or particular technologies and address their privacy implications, rather than looking more broadly at privacy interests and how to promote them. Individuals are actually expected to understand the different privacy regimes of different domains.

There is yet another issue as to whether the respective rules are being enforced. One of the reasons for the logjam about EHRs is the belief that enforcement of the HIPAA Privacy Rule is nearly nonexistent. The people who are supposed to be protected must believe that they are, and that bad actors will face consequences. In yet another example of the power of openness, one needs access to the protected records to determine whether there has been wrongdoing, such as unauthorized distribution of protected information. Ensuring access to one’s records has been a fundamental part of privacy law for the past 25 years. Fair information practices require that you have the right to know what information is held about you and what has been done with it, and that you have a right to correct mistakes in the records. We do not have that today in a meaningful way with respect to medical records—and we should.

How to break the logjam regarding privacy and security is unclear, but here is a modest proposal. It will be challenging to develop the kinds of databases needed to support evidence-based medicine unless there is societal agreement about the level of required protection for privacy and security. One part of the protection is the deidentification of patient records. The questions of how to deidentify the records and what level of protection is required do not seem particularly amenable to congressional resolution. Perhaps Congress should commission The National Academies to formulate recommendations for the rules regarding deidentification within 18 months. The Academies would be told to use their judgment to make the best recommendations technologically, economically, and ethically. Such recommendations would, on their own, be useful, but we could take it a step further and have Congress treat these recommendations as it did those from the military base-closing commission, by making them subject to an up-or-down vote. This is not the most elegant solution or one that is consistent with what we learned about civics in high school, but we need to resolve these issues in order to obtain the benefits of greater openness, particularly those related to the use of clinical data to develop more evidence-based medicine. We need to cut through this Gordian knot. This may not be the right way, but at least it is a way of dealing with these issues.

Without transparency, without clear rules, without some reasonable expectation of enforcement, there will continue to be great reluctance on the part of many people of good will to allow clinical data to be used to improve the general provision of health care. The benefits seem too far away, the threats too real and immediate. We will need to address both benefits and risks in order to foster a more open system. The CED report is an attempt to show the benefits, but it is only a beginning.

INSTITUTIONAL AND TECHNICAL APPROACHES TO ENSURING PRIVACY AND SECURITY OF CLINICAL DATA

Alexander D. Eremia, J.D., LL.M.

Associate General Counsel and Corporate Privacy Officer, MedStar Health, Inc.

Healthcare providers have a duty to protect and secure the health information they receive or generate relating to their patients. In addition to their professional and ethical obligations, healthcare providers are now subject to a wide range of state and federal laws that impose various requirements and standards for the protection of health information. In addition, evidence suggests that most patients do not want their private health information to be: (1) accessed by people who do not need to see it; (2) used for purposes that will not benefit the patient; or (3) disclosed to someone who is not required to protect it or who might use it in a harmful manner (Westin, 2008). As a result, many healthcare providers view the protection and security of their patients’ health information as essential to maintaining the trust and confidence of their patients and an important element of patient satisfaction. At the same time, healthcare providers are rich sources of data, which, when properly used in research, have the potential to greatly enhance the quality of clinical care and may result in better clinical outcomes, improved efficiencies, cost savings, or other medical advances.

Healthcare providers have an interest in each of these goals, but perceived and actual privacy or security hurdles, patient trust considerations, potential legal consequences, and actual costs associated with retrieval of data pose barriers to releasing data for research purposes. In particular, healthcare providers often find the privacy and security requirements of HIPAA confusing, and health information data custodians and researchers sometimes have limited awareness of HIPAA’s data access and disclosure requirements.

Furthermore, even when access and disclosure are permitted under HIPAA, the minimum necessary standard, accounting-of-disclosures obligations, and other patient considerations may impede the willingness to make certain disclosures of identifiable information. In addition, it is often costly for healthcare providers to divert resources and personnel away from their primary clinical care activities to administer system- and records-access and disclosure activities for research purposes. Although technological solutions have the potential to mitigate some of these costs and resource burdens, at present few such tools adequately address all of a healthcare provider’s privacy requirements. In fact, the implementation of new information technology often brings with it additional complexities in properly controlling research-related access.

As a result, healthcare providers are often more motivated to protect patient privacy, to respect physician-patient relationships, to minimize the administrative impact on data retrieval, and to minimize legal risks and customer complaints than they are to accommodate the needs of researchers. Absent adequate financial or strategic incentives, regulatory amendment, and greater appreciation of the public benefits of research, access to identifiable data for research will remain a challenge.

MedStar Health is the largest provider of healthcare services in the mid-Atlantic area, composed of eight hospitals, including community-based hospitals and academic medical centers, as well as numerous satellite clinics and outpatient facilities. In the District of Columbia, we own and operate Georgetown University Hospital, the National Rehabilitation Hospital, and Washington Hospital Center. Collectively, our system has about 25,000 employees and at least 5,000 affiliated physicians. Systemwide, we annually serve some 158,000 individual inpatients, have 787,000 inpatient days, treat 1,561,000 individuals on an outpatient basis, and make 208,000 home health visits. The MedStar Health community is therefore a rich source of diverse data that are potentially of great use to research. In that context, this paper reflects on some of the institutional challenges we face in balancing patient privacy interests with providing access for research purposes.

At MedStar Health we have a vision of being the “trusted leader in caring for people and advancing health,” and we have long had a commitment and philosophy of putting the “patient first.” As a result, our leadership feels strongly that, beyond what the law says, we are devoted to protecting the interests of our patients and their information, and we are committed to promoting the trust of our patients by protecting their privacy. At the same time, we have a strong commitment to innovation, the promotion of research, and a shared vision of “advancing health” through our education, technology, and research capabilities.

As a part of MedStar’s operations, we regularly create and maintain a number of databases and record sets into which patient information is placed, processed, and stored. Given the wide range of services provided by MedStar Health and the diverse patient base we serve, both the volume and the variety of data within these resources are large. We therefore are approached on a regular basis by researchers outside of our covered entity who request both large data sets as well as ongoing open access to patient information.

One area of significant concern, therefore, is how to most appropriately release and provide access to information to researchers who are not members of our workforce. Because these databases, repositories, and record sets are usually created primarily for treatment, healthcare operational purposes, or billing and financial purposes—not for research purposes—they often lack a built-in framework for addressing the needs and requirements associated with research-related access as well as the obligations we have for research-related disclosures. Furthermore, even when record sets are created in anticipation of potential research, they are often not designed to adequately facilitate compliance with privacy and security requirements.

Consistent with trends across the healthcare industry, MedStar is in the process of transitioning from being a largely paper-based organization to one with electronic records. We currently have four or five separate, stand-alone electronic medical record systems. In addition, MedStar has developed a product, ultimately bought by Microsoft, that aggregates data from disparate systems, and this has led to an ongoing development relationship between MedStar Health and Microsoft. Although these systems have greatly facilitated our healthcare activities in many ways, one of the largest challenges we have with respect to research interests is getting information out of these systems in a cost-effective format that is useful to researchers.

Data that we collect, capture, and hold—whether in electronic or paper format—are generally meant for our own internal operational and clinical purposes and often are not easily retrievable in a format usable for research. Even when the data are electronic, the way information is accessed and used for normal operations (e.g., the types of queries made, the specific data points that need to be viewed, and the ways the data are employed after retrieval) often differs greatly from the manner in which researchers wish to interact with it. Because these systems have generally been designed and implemented with operational needs in mind, rather than the needs of researchers, the retrieval, aggregation, and analysis tools researchers need are often not readily available within them. As a result, extracting data from our electronic systems often requires a fairly manual and laborious manipulation process. Optimally meeting researchers’ needs while containing costs for covered entities would, for some systems, require the development of new software interfaces and tools, all of which require investments of time and resources.
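
To make the gap concrete, consider a hypothetical operational table that stores one row per encounter for billing and clinical purposes, while a researcher wants one analysis record per patient. The sketch below, with invented field names and schema, shows the kind of reshaping involved; real extractions span many disparate systems and are correspondingly harder.

```python
# A sketch of reshaping operational, encounter-level rows into the flat,
# patient-level table a researcher typically wants. Schema is hypothetical.
from collections import defaultdict

# Operational systems store one row per event, optimized for billing and care.
encounters = [
    {"patient_id": "P1", "date": "2008-01-10", "dx": "I10",   "charge": 120.0},
    {"patient_id": "P1", "date": "2008-03-02", "dx": "E11.9", "charge": 340.0},
    {"patient_id": "P2", "date": "2008-02-14", "dx": "I10",   "charge": 95.0},
]

def to_research_table(rows):
    """Aggregate event-level rows into one analysis record per patient."""
    by_patient = defaultdict(list)
    for r in rows:
        by_patient[r["patient_id"]].append(r)
    table = []
    for pid, events in by_patient.items():
        table.append({
            "patient_id": pid,
            "n_encounters": len(events),
            "diagnoses": sorted({e["dx"] for e in events}),
            "first_seen": min(e["date"] for e in events),
        })
    return table

for row in to_research_table(encounters):
    print(row)
```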

Furthermore, some of the information that MedStar collects or creates is proprietary information that we are unwilling to share in unfiltered/unredacted format, if at all. For example, we often get research requests for our billing and coding information. Although it may be possible to remove such confidential, proprietary information from a dataset intended for research use, it can be difficult to dissociate this information from what we are willing to share. In many cases this removal would require intelligent software capable of making fine-grained discriminations and would be very costly.

Moreover, concerns are sometimes raised by healthcare administrators that the goals of research are incompatible with the goals of being a leading community-based healthcare provider. Some of the goals of research—such as furthering scientific progress, translating research into improved clinical care, improving society, maintaining scientific integrity, perhaps pursuing technology transfer opportunities—are obviously all valuable, and no one denies that having appropriate information available to use for research is a public good. Nonetheless, such goals can sometimes run counter to the immediate goals of healthcare providers, which are fundamentally to provide quality health care to patients that results in high levels of satisfaction, trust, and confidence and to do this all on increasingly slim operational margins. For many healthcare providers, these goals (or at least the processes involved in achieving these goals) appear incompatible.

Beyond logistical barriers and differences in goals, HIPAA poses further obstacles to sharing information with outside parties for research purposes. The Privacy Rule continues to be confusing to many healthcare providers, who often view its requirements as arbitrary and overly complex. Healthcare administrators often face the burden of too many forms and policies generated as a result of our responsibilities to protect patient privacy. Our administrators complain that they have inadequate resources to review requests and to assist in providing requested information, which potentially results in reduced access to records and data. Furthermore, as with any large workforce, we experience frequent staff turnover, which creates a continual challenge of adequately educating our administrators and record custodians about how and when they can appropriately release health information for research-related activities. Similarly, researchers and their staff often do not understand or fully appreciate the requirements we must fulfill with respect to the control of health information or the complex documentation requirements relating to its release.

As an example, researchers may understand that in order to review record sets for the purpose of identifying prospective participants they must obtain a waiver of the authorization requirement from an IRB, but they are rarely aware (or may not be concerned) that any access to protected health information (PHI) by non-MedStar research personnel under this waiver triggers an accounting-of-disclosures requirement on the part of the records custodian. As a result, they may not take proper steps to limit the scope of their requests, limit which other persons receive the screening information, or adequately notify the records custodian of the involvement of external personnel and take steps to facilitate our accounting requirements.

Although IRBs can play an important role in ensuring that researchers properly address not just their own privacy requirements but also those of the information provider(s), IRBs need not be affiliated with the Covered Entity to grant a waiver of the authorization requirement and may not be entirely attentive to the Covered Entity’s obligations. In addition, many IRBs have regular turnover and many members, including unaffiliated community representatives, who sometimes do not understand the requirements for protecting patient privacy. These issues of affiliation and education—whether of staff members, researchers, or IRB members—add to the overall challenge of maintaining the trust placed in us by our patients.

To provide a few concrete examples of the challenges posed by the intersection of privacy concerns and research interests, I would like to focus briefly on a few specific issues that MedStar has encountered: (1) the need to adhere to different standards depending on who is requesting the information; (2) the potential for needing to honor patient restrictions; and (3) issues related to the relationship between an individual physician and his or her patient and the hospital in which the physician practices. These cases illustrate some of the difficulties inherent in trying to bridge the tensions between these two interests.

As touched on briefly above, depending on the specific relationship between an individual researcher and the Covered Entity whose health information he or she wishes to access, HIPAA requirements associated with access differ. Although researchers are permitted to access PHI for research purposes without an authorization under the Privacy Rule, any time a researcher who is not a member of that entity’s workforce does so, it is considered to be a disclosure that the entity must track and be able to account for on request. This is extremely burdensome for healthcare providers, particularly in the paper world, and often necessitates physically placing a marker or informational sheet in each record accessed. One might think this would be easier in an electronic world, but in reality it is not! Most of our electronic systems (especially our billing and other operational systems) do not include functionality that allows the adequate tracking of these disclosures with the level of associated information and detail required by the Privacy Rule.

HIPAA’s alternative accounting mechanism, which provides for group or bulk accounting in cases where more than 50 disclosures are made for an individual study, is not really a viable alternative for a large, decentralized, integrated healthcare organization. Without a central clearinghouse for evaluating data requests and/or registering the individual studies for which requests are made, it is difficult to confirm which studies may have had information released. For instance, for appropriate clinical efficiencies, some of our clinical systems allow physicians to access health information regardless of where the patients were seen in our system. As a result, a researcher in Baltimore could request and access, through a system available at one of our Baltimore facilities, information on a patient who was never seen in that facility. Consequently, if that patient requested an accounting of disclosures, it might be challenging to determine whether an accountable disclosure was made by the Baltimore facility. Dealing with this situation effectively would require the centralization of all research and other requests, so that all requests are handled by one central administrator. Unfortunately, this would be extremely burdensome and is not currently a viable option for us: we have received potentially thousands of separate requests from thousands of different studies, resulting in hundreds of thousands of research-related disclosures over the prior 6 years.

The issue of accounting for disclosures is one where researchers themselves could do much to help institutions in meeting the burdens associated with their privacy requirements and, in so doing, increase institutions’ willingness to provide information for research purposes. Among the ways this can be done are: (1) developing or subsidizing the development of disclosure tracking software; (2) subsidizing staff positions dedicated to meeting accounting requirements (records custodians are often severely overworked and unable to shoulder this); and (3) personally providing required information sheets or disclosure data where necessary. These strategies and “unforeseen” costs associated with data screening and recruitment should be considered by researchers when calculating the costs of conducting research at the time of grant application or protocol development.
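
As a rough illustration of what such tracking software must capture, the sketch below logs, for each research-related release, the elements an accounting generally requires (date, recipient, a description of the information, and the purpose). The class and field names are hypothetical, and a production system would additionally need retention policies, security controls, and integration with each source system.

```python
# A hypothetical disclosure-tracking log of the sort suggested above.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Disclosure:
    patient_id: str
    disclosed_on: date
    recipient: str    # name/address of the person or entity receiving PHI
    description: str  # brief description of the PHI disclosed
    purpose: str      # brief statement of purpose (e.g., a study identifier)

@dataclass
class DisclosureLog:
    entries: List[Disclosure] = field(default_factory=list)

    def record(self, d: Disclosure) -> None:
        self.entries.append(d)

    def accounting_for(self, patient_id: str) -> List[Disclosure]:
        """Everything needed to answer one patient's accounting request."""
        return [e for e in self.entries if e.patient_id == patient_id]

    def by_study(self, study_id: str) -> List[Disclosure]:
        """Group entries per study, supporting the bulk-accounting option
        when a single protocol involves 50 or more records."""
        return [e for e in self.entries if e.purpose == study_id]

log = DisclosureLog()
log.record(Disclosure("P1", date(2008, 5, 1), "Dr. A, University X",
                      "demographics and cardiology notes", "STUDY-042"))
print(len(log.accounting_for("P1")))  # 1
```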

Another difficulty associated with the research use of patient information involves “patient restrictions” placed on the use or disclosure of one’s own health information. Under the Privacy Rule, patients are permitted to request a restriction on how their health information may be used or disclosed. However, the Privacy Rule does not require a Covered Entity to accept that restriction request and, in fact, most healthcare providers try not to, because such requests are extremely burdensome to honor. Even if such restrictions are accepted, healthcare providers are not necessarily culpable under HIPAA if the release of information is for research purposes.8 Nonetheless, we believe that if we make a commitment to our patients, we are ethically obligated to try to fulfill it.

Though most Notices of Privacy Practices require that any request for restrictions be placed in writing and though most Covered Entities try to educate their staff to not accept a restriction unless it is in writing and clearly agreed to, it is possible that physicians or other staff members occasionally and informally make commitments and promises to their patients that their health information will not be used for any purposes except their own treatment unless the patients otherwise consent. In some cases, the physician or staff member may actually sequester a file or flag in an attempt to limit access to the information. Unfortunately, because billing systems, registration systems, and other clinical systems are often highly integrated, it is often difficult for healthcare providers to completely restrict who accesses and uses the patient’s identifiable health information. When the patient is contacted by an outside researcher (even if the researcher legally and properly obtained the patient’s information), the patient will obviously feel betrayed and lose confidence in his or her healthcare provider.

Given the number of employees who can potentially access any given patient’s records, it is difficult to ensure that a pledged restriction made by one staff member or physician is known and adhered to by others. This issue, furthermore, is inherently resistant to a centralized solution because of the individual nature of the patient–provider relationship. Even with a centralized office for accepting and implementing patient restrictions, nothing would prevent individual physicians from making personal agreements or commitments with patients that never get propagated across the system. This challenge is, similarly, more difficult for researchers themselves to help mitigate than, for example, the accounting-of-disclosures requirement, because the researcher has little ability to discern where restrictions may be in place if they have not been adequately marked by those who accepted them. As a result, completely confirming that healthcare providers are not violating any individualized commitments prior to making a research-related disclosure would require confirming this with each individual treating provider (obviously an insurmountably burdensome task).
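
A minimal sketch of the registry half of such a solution appears below, with hypothetical identifiers; as just noted, its central weakness is that informal bedside promises never reach it.

```python
# A hypothetical centralized restriction registry, consulted before any
# research-related release. It captures only formally accepted, written
# restrictions; side agreements made by individual physicians do not appear.

# Maps patient ID -> purposes the patient has formally restricted.
RESTRICTIONS = {
    "P1": {"research"},  # accepted written restriction on research use
    "P2": set(),         # no restrictions on file
}

def may_disclose(patient_id: str, purpose: str) -> bool:
    """Check the registry before releasing records for the given purpose.
    Patients with no registry entry are routed to manual review rather
    than disclosed by default."""
    if patient_id not in RESTRICTIONS:
        return False
    return purpose not in RESTRICTIONS[patient_id]

print(may_disclose("P1", "research"))  # False: the restriction is honored
print(may_disclose("P2", "research"))  # True
```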

Finally, another problematic barrier has to do with the physician–patient relationship. A small but vocal community of our physicians has strongly objected to our allowing health information about “their” patients to be accessed by researchers. Some feel strongly that researchers are effectively trying to “cherry pick” their patients, because some of these researchers are also clinicians. They have also argued that this violates the trust of their patients, who may not understand why an outside researcher with whom they have no existing relationship is contacting them; it may be perceived as akin to providing their contact information for “cold-calling” purposes. An additional concern is that these patients might be enrolled in trials that conflict with the care their personal physician advocates. In fact, these objections run so deep in some cases that some referring physicians have suggested, “If you do not protect my patients’ information, I am not going to refer patients to your hospitals any longer.”

This, again, is an area in which researchers can play a personal role in mitigating concerns. If researchers are prepared to engage in meaningful discussion with treating physicians about the value and benefit of proposed research and accept the expressed concerns, they can help to work around these potential barriers. For instance, rather than screening patient records without the knowledge of treating physicians and contacting patients themselves, researchers can work with physicians to identify potentially eligible patients and then ask the physicians to speak with them about the proposed research. This can alleviate both the potentially invasive feeling by patients of being contacted by a stranger for research as well as physicians’ concerns that their patients may be recruited without the physicians’ knowledge into research that they do not believe is commensurate with the care they provide.

Patient attitudes also play a key role in determining whether health information can or should be released for research purposes. Some patients are altruistic and have no difficulty sharing all their identifiable health information if it will better serve the community. Others are much more protective of their individual information because of fears over misuse, discrimination, or social stigma. Some patients are comfortable releasing some, but not all, of their health information for research purposes. However, although this could be a means of balancing privacy interests against research interests, many researchers do not view this as an effective option because it potentially distorts the available data sources and could skew data results. Moreover, even in an electronic world, technical limitations can function as barriers to even this limited type of research access. As discussed above, many systems do not have built-in abilities to easily capture data in a format useful for research purposes. Additionally, many systems lack the functionality that would be necessary to allow a patient to partially opt out of disclosures for research purposes (e.g., portions related to mental health or substance abuse).

Unfortunately, most healthcare providers have no cost-effective way of protecting just limited portions of the patient record, even when an individual feels comfortable that the rest of his or her file could be used for research purposes. Eventually, we may get to a point where we can make such distinctions, but for now such requests put us, as a Covered Entity, in the untenable position of having to assume the burden and cost of pulling records or reports, reviewing eligibility criteria, and spending the necessary time to compile all of this information for research use. Under such circumstances, many administrators legitimately question what benefit these burdens provide to our patients and to our institutions. If researchers are willing to expend the time, effort, and cost necessary to enhance these systems to better meet these needs, they can go a long way toward increasing institutional support for research disclosures.
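
In principle, the missing functionality is simple to describe: tag each portion of the record with a category and filter on the patient’s choices, as in the hypothetical sketch below. In practice, reliably categorizing clinical content across highly integrated systems is the hard and costly part.

```python
# A sketch of section-level filtering for a partial opt-out of the kind most
# systems cannot yet support. Section names and record structure are invented.
def filter_for_research(record: dict, withheld: set) -> dict:
    """Drop the sections a patient has withheld from research use."""
    return {section: data for section, data in record.items()
            if section not in withheld}

record = {
    "demographics":  {"birth_year": 1971},
    "cardiology":    ["echo 2008-01-10"],
    "mental_health": ["visit 2008-02-02"],
}
# The patient permits research use of everything except mental health notes.
print(filter_for_research(record, withheld={"mental_health"}))
# {'demographics': {'birth_year': 1971}, 'cardiology': ['echo 2008-01-10']}
```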

All of this is not to suggest that healthcare providers do not have any commitment to research at all. Healthcare providers recognize the value and the public good of research. They are committed to research, especially when it is consistent with their own mission or values or when there is a direct benefit to them. Obviously, however, they do not want it to interfere with patient care. They do not want it to be overly burdensome or costly, thereby detracting from the resources available for activities more directly related to patient care. They do not want it to interfere with their relationships with their physicians or the relationships between the patients and the physicians. In addition, of course, all healthcare providers have to be concerned about legal risks and compliance with applicable laws.

Recognizing the importance of research in furthering the practice of health care and in improving society as a whole, MedStar has undertaken a number of efforts to accommodate researchers in a fashion that balances our privacy concerns against the administrative burdens associated with research-related requests. One thing we have done is agree in some cases to perform screening and recruiting activities on behalf of researchers. This includes screening participants—assuming there are no objections from patients or physicians—in order to obtain authorization on behalf of the researcher, or simply providing the subject with information about the research project and letting him or her contact the researcher directly. In theory, this avoids the accounting obligation and can be more sensitive to some of our patients, but it requires time, training, and effort from our staff. In some cases, we have asked researchers for compensation to offset some of those costs. This is obviously not preferred by all researchers, but it is a step toward closer engagement between us, as a healthcare provider, and the research community as we work to foster coordinated evidence development among EHR user organizations.

Another approach we have tried, with limited success, is to engage the researcher effectively as a business associate to handle the screening, recruitment, and internal administrative processes we have in place. This allows the researcher to recruit patients directly while avoiding the accounting-of-disclosures obligations, and it shifts the cost burden to the researcher. Depending on the HIPAA mechanism the researcher is using, the PHI may need to remain within our custody as a Covered Entity, and if the researcher does not obtain an authorization, the PHI would need to be returned or destroyed. This solution, unfortunately, is not appropriate or viable in all situations and, again, is not always palatable to researchers themselves.

Other data access options include Limited Dataset/Data Use Agreements. This option would generally permit researchers to have a limited set of identifiable health information, without a patient authorization and without the accounting of disclosures responsibility, but it still requires resources of the Covered Entity to create the Limited Dataset and to negotiate the Data Use Agreement with the researcher. Our experience has shown that this option currently has limited effectiveness for the majority of research conducted at our facilities because most of our research requests are for more complete, identifiable datasets. As a result, the Limited Dataset will not be a truly useful or viable option for us absent systems that can cost-effectively produce the data and absent an amendment to the Privacy Rule that greatly expands the number of identifiers available through this vehicle.
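
For illustration, the sketch below shows the shape of the Limited Dataset transformation: direct identifiers come out, while, unlike under safe harbor deidentification, full dates and some geographic detail may remain. The field names are hypothetical and the excluded list is abbreviated rather than the regulation’s full enumeration.

```python
# A sketch of producing a limited data set from a fuller record.
# Hypothetical field names; abbreviated list of excluded direct identifiers.
LDS_EXCLUDED = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "medical_record_number", "health_plan_id", "account_number",
    "license_number", "vehicle_id", "device_id", "url", "ip_address",
    "biometric_id", "full_face_photo",
}

def limited_dataset(record: dict) -> dict:
    """Strip direct identifiers; dates and ZIP-level geography may stay."""
    return {k: v for k, v in record.items() if k not in LDS_EXCLUDED}

patient = {
    "name": "Jane Doe", "birth_date": "1971-06-04", "zip": "21201",
    "admit_date": "2008-03-02", "diagnosis_code": "E11.9",
}
print(limited_dataset(patient))
# {'birth_date': '1971-06-04', 'zip': '21201',
#  'admit_date': '2008-03-02', 'diagnosis_code': 'E11.9'}
```

As the text notes, even this vehicle presupposes systems that can produce such extracts cost-effectively, and its usefulness for screening and recruitment depends on which identifiers the rule permits it to retain.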

Going forward, we see several potential avenues for progress. We would like to see HIPAA amended to accommodate the needs of researchers while minimizing the burdens on Covered Entities. Eliminating the accounting disclosure obligations would go a long way toward reducing our costs and burdens. Expansion of the Limited Dataset concept could potentially assist both researchers and Covered Entities if the Covered Entity has systems that can cost-effectively produce data and the Limited Dataset vehicle is greatly expanded to include identifiers that would permit screening and recruitment activities.

In addition, as vendors and suppliers of our data systems and electronic medical records systems become more sophisticated in the potential applications of this information, the design of operational databases and electronic records will allow us to more generally protect the patient information that needs to be protected due to applicable laws or commitments to our patients, while making available information that can and should be available for research purposes. With respect to the physician–patient relationship, continued work is necessary to build communication and trust between all parties, and opportunities exist to further educate treating physicians about research opportunities. With respect to technology, interoperable data exchange may ease some of the technological burdens we face and could result in greater access to health information by researchers, but the details and potential barriers associated with access to data exchanges remain uncertain and may require further legal clarifications. Perhaps most importantly, an increased awareness and sensitivity on the part of researchers to the requirements, burdens, and costs associated with healthcare providers’ provision of information, and a willingness to share in those costs and burdens, can greatly aid in overcoming the obstacles that currently impede research efforts.

REFERENCES

  1. CED (Committee for Economic Development). Harnessing openness to transform American health care. 2008. [accessed August 21, 2008]. http://www.ced.org/docs/report/report_healthcare2007dcc.pdf.
  2. Westin AE. How the public views privacy and health research: Results of a national survey commissioned by the Institute of Medicine Committee on Health Research and the Privacy of Health Information: The HIPAA Privacy Rule. 2008. [accessed August 20, 2008]. http://www.patientprivacyrights.org/site/DocServer/Westin_IOM_Srvy_Rept_2008_.doc?docID=3181.

Footnotes

1. This survey was commissioned by the Institute of Medicine as part of the work of the IOM Committee on Health Research and the Privacy of Health Information.
2. 67 Fed. Reg. 14776, 14793 (Mar. 27, 2002).
3. 45 C.F.R. § 101.
4. 45 C.F.R. § 164.514(e).
5. 45 C.F.R. § 164.508(c)(1).
6. 45 C.F.R. § 164.514(a).
7. 45 C.F.R. § 164.508.
8. 45 C.F.R. § 164.522(a)(1)(v).

Copyright © 2010, National Academy of Sciences.