TRANSCRIPT, Meeting 9, Session 6

Date: May 17, 2012

Location: Washington, DC

Presenters

Madison Powers, J.D., Ph.D. 

Professor, Department of Philosophy, Senior Research Scholar, Kennedy Institute of Ethics, Georgetown University

Amy McGuire, J.D., Ph.D.

Associate Professor of Medicine and Medical Ethics, Associate Director of Research, Center for Medical Ethics and Health Policy, Baylor College of Medicine

Transcript

                        DR. WAGNER:  We're switching gears to an entirely different project.  And that is the work we've been doing on privacy and access to whole genome sequence data.  We began this last year, had another effort, continued our deliberations in San Francisco in February, and today we're going to revisit the topic of privacy and start a discussion about the right to privacy in both theory and practice.

                        Dr. Madison Powers is Professor of Philosophy and Senior Research Scholar at the Kennedy Institute of Ethics at Georgetown.  Dr. Powers' primary research interests include political philosophy, practical ethics, and the intersection of bioethics and political morality.  Dr. Powers served for many years as a member and Chair of the National Advisory Committee of the Robert Wood Johnson Foundation, overseeing the Investigator Awards Program, and he has participated in many private and governmental advisory bodies, including the Recombinant DNA Advisory Committee at the NIH.

                        Dr. Powers, thank you for being here.

 

                        DR. POWERS:  Thank you for inviting me.  I should probably say a word of pity for myself.  I'm charged with the task of bringing coals to Newcastle; so that's a bit of a difficult task when I see Anita Allen sitting here.

                        I was asked to make a few remarks about how philosophers understand privacy rights.  On the one hand, that's a pretty large charge for ten minutes.  And given the fact that philosophers don't understand anything in the same way exactly, I'm going to make some generalizations that might be a little bit controversial.

                        But the first thing to note is this, I think: there's a great deal of overlap in the ways in which philosophers and legal theorists conceive of rights in general and privacy rights in particular.  The two disciplines are, to say the least, in constant conversation around these kinds of subjects.

                        While both camps emphasize the importance of distinguishing between legal rights and moral rights, there is an extraordinary degree of theoretical cross-fertilization.  So that's natural.  Lawyers and philosophers share a common normative vocabulary.  They focus on the correlative relation between rights and associated duties imposed on others and both disciplines are concerned with questions of ultimate justification, the normative grounds, or values if you will, upon which rights claims are plausibly asserted.

                        Many in both camps follow the broad path, very broad path, set in place by J.S. Mill.  Mill analyzed moral rights as moral claims of special urgency.  They are to be understood as claims against other individuals, but also ultimately against the state, for a strong measure of legally enforceable protection against threats to vital human interests.

                        Now what counts as a vital human interest is obviously contestable.  But at the heart of such disputes is the issue of whether some interest is so important to a person that the prospects for a decent human life are compromised greatly by the absence of a secure and effective social mechanism for its protection.  Both lawyers and philosophers are concerned as well with the grounds of defeasibility, or the reasons that are sufficient to justify overriding the very substantial threshold weight that rights claims necessarily have over other moral considerations.

                        While some argue that some rights are of absolute or near‑absolute weight, including privacy rights, and thus indefeasible, that probably represents what I take to be a rather minority view in both camps these days.  In terms of privacy rights, most philosophers tend to be privacy pragmatists, as Alan Westin famously labeled them, while privacy fundamentalists, those who insist on their absolute character are, I think, increasingly rare.

                        That said, even privacy pragmatists tend to be impressed fairly heavily by the range of vital human interests that, under certain conditions, justify a range of privacy rights, and which justify their having substantial threshold weight when balanced against a wide range of competing concerns, including societal interests.  That a vital human interest is at stake is but the first step in any argument about the existence of rights.

                        We need more information regarding the threat side of the equation.  An almost canonical short-hand expression of how rights claims function within moral discourse is Henry Shue's conception of rights as justified claims that everyone has against everyone else, and in particular, following Mill, against the state, for a reasonably secure system of protection against what he calls standard threats to our most vital interests.  Standard threats are ones that arise in the ordinary course of life within a particular set of social and economic arrangements.  They're highly contextual.

                        Moreover, they include only threats that cannot be addressed adequately without recourse to legally enforceable rights schemes.  And the kinds of threats that are the subject of rights protection include only those for which protection can be implemented feasibly and without great loss to the morally comparable interests of others.  So there are a number of criteria that narrow the range of standard threats for which a right would be an eligible response.

                        Now one implication of this formulation is that some of the moral rights we have are not timeless.  They're not moral claims of comparable weight at all times and all places and under all circumstances.  Human vulnerabilities with respect to their most vital interests can be greater under some forms of social organization than others.  And the upshot is that the set of justified privacy rights might expand or contract as circumstances change.

                        The proper response to threats to vital human interests might be to increase the legally enforceable privacy protections for individuals, or alternatively to alter the institutional context that makes certain kinds of harmful uses of information more likely.

                        Broadly restrictive privacy rights in genetics and medicine, for example, might have special value under economic arrangements in which one's livelihood or access to healthcare, for example, is put at grave risk by a disclosure of genetic information.  A change in institutional rules that would reduce the adverse consequences of certain disclosures might lessen or even eliminate the primary moral rationale for some of the more sweeping privacy rights that would be necessary under less hospitable economic and social arrangements.

                        Institutional changes, however, may or may not address all the concerns that privacy advocates have in mind.  There is wide consensus that any privacy right claim must rest on one or more of any number of distinct justificatory grounds.  There is not one single type of interest to which justification of the entire set of privacy rights can be reduced.

                        Often the strongest case for recognition of a privacy right claim occurs when a multitude of important interests are threatened, and no other institutional solution, apart from a regime of legally enforceable rights, will be feasible or efficient enough in securing the degree of protection those interests warrant.

                        So consider some of the interests most commonly asserted as candidate grounds for justification of moral rights to privacy.  Some privacy rights are necessary to preserve a domain of unforced deliberation and decision about matters having a profound and pervasive effect on life prospects.  Some privacy rights are necessary to avoid harsh consequences of unjust social stigma and the loss of esteem and respect of family members or society in general.

                        The ability to control personal information, especially information commonly believed to bear on future health prospects, or information likely to reveal something about one's most intimate conduct, is often essential to the prospects for a decent human life.  Some measure of control over the selective disclosure of personal information is also central to the formation of intimate relations and associations.  No one, at least no one with any complex emotional life, can live as an open book entirely, subject to inspection by anyone for any purpose, especially in the context of social and economic relationships marked by gross asymmetries in power.

                        Rights in general, and privacy rights in particular, perhaps have the greatest plausibility when they are the indispensable means for preventing the subjection or subordination of the weak and the vulnerable to the strong and overreaching.  The theory of privacy rights then must figure within a more encompassing theory of political morality in which the fair distribution of benefits and burdens, and hence the fair allocation of control over highly consequential information is articulated.  Privacy rights debates are at their core quite often proxies in a larger set of philosophical debates about social justice.

                        If the interests under threat are important enough, we might then expand privacy protection or, alternatively, simply take the sting out of its loss, or some combination of strategies might be needed given the plurality of underlying interests that are often at stake.

                        And with that, I'll close and we can go on from there.

                        DR. WAGNER:  Thank you.

                        And our next speaker is Dr. Amy McGuire, Associate Professor of Medicine and Medical Ethics, and Associate Director of Research for the Center for Medical Ethics and Health Policy at Baylor College of Medicine.  Her research focuses on legal and ethical issues in genomics.  She's currently studying participant attitudes toward genomic data sharing, investigators' practices and perspectives on the return of genetic research results, and ethical and policy issues related to the clinical integration of genomics, all germane to what we are talking about.

                        Thank you so much for being here, Dr. McGuire.

                        DR. McGUIRE:  Thank you very much.

                        And I'm in a medical school, so I do have slides, because that's what we deal with.  But ‑‑

                        DR. WAGNER:  But they're only one ‑‑

                        DR. McGUIRE:  Huh?

                        DR. WAGNER:  ‑‑ only one screen?

                        DR. McGUIRE:  That's right.

                        So thank you for inviting me to be here.  It's a real pleasure to be participating in these important deliberations.  And I was asked to come here today to talk to you a little bit about some of the research that we’ve done looking at research participants' perspectives, both on genomic data sharing and genomic privacy.

                        And I just wanted to start by pointing out that, in genomic research, there has really been a culture since the beginning of the field of very open and broad data sharing and access to data.  And this was largely because when the field got started with the Human Genome Project, it was extremely expensive; it was extremely time consuming to generate genomic information.

                        And the idea was that the field wanted to make it as widely available as possible to other researchers around the globe so that they could advance research as quickly as possible.  So this value of advancing science and making data available has really been embedded within the field of genomics.

                        But of course, any time you share genomic information, particularly in the public domain, you create privacy risks.  Most of the early data sharing policies called for the public release of all generated sequence data, and so individuals' genomic information was put out into databases that were available on the internet, available to anybody who could access them and figure out what to do with them.  And those policies really have to balance the advancement of science and the utility of making the information available against the privacy risks created for the individuals from whom the genomic information came.

                        So that's why we're interested in privacy.  That's the main consideration, I think, when we talk about the risks of this type of research.  And just to be not quite as philosophical or legal: we do recognize that there is a privacy right that has been recognized by the U.S. Supreme Court as a fundamental liberty deserving of protection.  But I just want to point out that that is not an absolute right, and people waive their right to privacy all the time in everyday life.

                        And I have up here all these different share buttons from the social networking sites that we now have available, just to illustrate the fact that, particularly in today's day and age, people are continually, on a daily basis, making decisions to waive their privacy rights, and are doing so because they perceive the benefits they accrue from sharing their information to outweigh the risks to their privacy.

                        And so the debate in genomics has really focused on what are the risks and what are the benefits, although I would argue there has been much less focus on the actual research benefits of putting data out there in the public domain.  The focus has really been on the risks of making that data available.  And the primary risk here is an informational risk: the risk of somebody being identified on the basis of their genomic information, that information being traced back to them, and people knowing who they are.

                        And so there have been these seminal studies, which I have up here, that have been published showing just how uniquely identifying somebody's DNA data is.

                        So the first study here, published in 2004 out of Russ Altman's lab, basically looked at: if you have a person's reference sample and you try to match it to a database that has their genomic information in it, could you identify that they were in that database and which set of data belonged to them?  And they basically showed that, yes, you could do it, and you could do it with access to relatively small amounts of genomic information.
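
To make the kind of linkage matching described above concrete, here is a minimal, hypothetical sketch; it is not code from the study, and the database, subject IDs, SNP count, and 0/1/2 genotype coding are all illustrative assumptions.

```python
# Hypothetical sketch of a linkage match: given a reference sample known to
# come from one person, scan a database of individual-level genotype records
# for the record that matches it.  All data here are simulated.
import random

NUM_SNPS = 75  # illustrative; the point is that relatively few independent SNPs suffice

def random_genotype(num_snps):
    # Genotypes coded as 0, 1, or 2 copies of the minor allele at each SNP.
    return tuple(random.choice((0, 1, 2)) for _ in range(num_snps))

# A toy research database of individual-level genotype records.
database = {f"subject_{i}": random_genotype(NUM_SNPS) for i in range(10000)}

# The attacker's reference sample, known to belong to one target person.
reference_sample = database["subject_42"]

# Scan the database for matching records.
matches = [sid for sid, geno in database.items() if geno == reference_sample]
print(matches)  # with enough independent SNPs, almost surely just ['subject_42']
```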

                        A couple of years ago, there was another study published out of David Craig's lab where they showed that, not only could you match a reference sample to individual-level data in a database, you could also match it to aggregate data that was put out there in the public domain, and identify that an individual was part of that group of aggregated data.
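
In the same spirit, here is a minimal, hypothetical sketch of an aggregate-level membership test.  The statistic follows the general idea reported out of Craig's lab (comparing a person's genotype to published pool frequencies versus reference-population frequencies), but all data, sizes, and parameters below are simulated assumptions, not the study's actual method or numbers.

```python
# Hypothetical sketch: even when only aggregate allele frequencies are
# published, a simple distance statistic can suggest whether a known
# person's genotype contributed to the aggregate.  All data are simulated.
import random

NUM_SNPS, POOL_SIZE = 5000, 100  # illustrative sizes only

def genotype(freqs):
    # Draw 0/1/2 minor-allele counts per SNP from per-SNP allele frequencies.
    return [sum(random.random() < f for _ in range(2)) for f in freqs]

pop_freqs = [random.uniform(0.05, 0.5) for _ in range(NUM_SNPS)]  # reference population
pool = [genotype(pop_freqs) for _ in range(POOL_SIZE)]            # the study cohort

# The only published data: per-SNP allele frequencies within the pool.
pool_freqs = [sum(g[j] for g in pool) / (2.0 * POOL_SIZE) for j in range(NUM_SNPS)]

def membership_score(person):
    # Is this person's genotype closer to the pool's frequencies than to the
    # reference population's?  Higher scores suggest pool membership.
    return sum(abs(person[j] / 2 - pop_freqs[j]) - abs(person[j] / 2 - pool_freqs[j])
               for j in range(NUM_SNPS))

member, non_member = pool[0], genotype(pop_freqs)
print(membership_score(member) > membership_score(non_member))  # typically True
```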

                        So these studies have really led to a shift in policy away from a sole emphasis on making genetic data publicly available toward more restricted data sharing policies and the establishment of what we call controlled access to scientific databases.  So this is like the NIH's database of Genotypes and Phenotypes (dbGaP).  And the idea here is that, in order to get access to data that goes into this database, you have to go through a data access committee.  It creates an extra layer of protection.  The individuals who want to access the data have to demonstrate that they're bona fide researchers with a legitimate research question before they can access those data.

                        So some people have argued that this is ‑‑ this sort of policy change has been an overreaction to the data showing that DNA is particularly identifiable.  And the reason that they make that argument is because they say, well yeah, it's in theory identifiable.  But what is the actual risk that somebody will go through the effort to try to identify somebody?  And even if they do, what is the potential harm that could accrue from that?  And I don't think we really know that.  I think a lot of the concern around this is the future harm, the potential for harm.

                        People have pointed to the fact that, although there are studies showing that a lot of people in the American public fear the potential harms of genetic discrimination that might result from unauthorized access to their genetic information, there are very few actual documented cases of genetic discrimination.  And other people will now argue that we have this federal law, the Genetic Information Nondiscrimination Act, that at least in theory protects people against some potential harms in the health insurance and employment arenas.

                        So I'm going to skip to my conclusions and then I'll present the data that support these conclusions.  But essentially, I would argue that the emphasis we've put on trying to identify and quantify the risks associated with making data widely available in the public domain or through controlled access databases is very important, because we need to be able to accurately describe the risks and the benefits.  And we should put more emphasis on trying to identify the benefits of making that data available, so that research participants and patients can make informed decisions about how they want their data shared.

                        I would also say that I think the perceived risks and benefits of sharing data are much more important than the actual risks and benefits.  And so understanding how people perceive these risks and benefits, and making sure their perceptions are based on good data, is very important.

                        And my final point, which I think is probably the most important: the current structure has really put almost exclusive emphasis on this risk/benefit calculation and the idea of protecting human subjects and protecting patients from potential harm.  And what we've learned from our research, and what I'm increasingly convinced of, is that, although this is very important and people can care a lot about it, they also care, maybe more, about feeling respected in the role that they're in ‑‑ feeling respected as a research participant, feeling respected as a patient.  And how we can do that, I think we need to think more about.

                        So in terms of our study, we were interested in looking at how research participants make decisions about data sharing, what they see as the risks and benefits, how involved they want to be in the data sharing decision, and whether their level of involvement actually influences how broadly they want their data to be shared.

                        We initiated this study when the policies were changing from an exclusive emphasis on public data release to these more controlled access databases, and so we were really interested in how broadly people are willing to share their data, either in the public domain or through these controlled access databases.

                        So what we did was we conducted a randomized study of three different types of consent, where the only way the consents varied was in how much control they gave individual participants over the decision about sharing their data.  We randomized participants ‑‑ people being enrolled into genomic studies in the Houston area ‑‑ to receive one of the consents.  And then we followed up: we debriefed them, showed them all the consent options, and invited them to participate in an interview and a survey with us.

                        So you've received some background reading on some of our findings.  I'm not going to go into great detail about our study or what we found, but I do want to point out that this was a very particular, potentially biased group of individuals.  Everybody we were recruiting was either a patient or the parent of a patient.  They had already agreed to enroll in a genetic or genomic study.  They were highly motivated.

                        They were usually being recruited by their own physician or by a physician at the hospital where they were being treated, so they had a very high level of trust, and they were also highly motivated to advance research, particularly for the disease that they or their child was experiencing.  So these results may not be generalizable to other segments of the population, particularly some groups that we know have very low levels of trust in research.

                        So this was our basic study design.  We recruited from six genomic studies at Baylor.  These were our three consent forms that we randomized people to, and then we debriefed them and followed up with the interview.  I'm not going to go into details about the consent forms, but essentially the top consent form, the traditional, gave them no options.  It basically said, if you want to participate in this research study, we're going to release your data, it's going to go into the public domain, and you have to agree to that.

                        The binary consent gave them an opt out, which basically said, if you want to participate in this research study, everything's going in the public domain unless you opt out.  And tiered consent gave them all three options.  So the most important thing about our findings, I think, and what I was most surprised about actually, was that after we spoke to everybody and showed them all their options, the majority of our participants still decided to release their data into the public domain.  However, a very appreciable minority did not feel comfortable with this and wanted their data released in a more restricted way.

                        The main point here is how they were weighing the risks and benefits of this.  When we asked them how much they care about their privacy and how much they care about advancing research, they felt very strongly about both.  But when we forced them to choose, the majority of the participants were more interested in advancing research than in protecting their privacy.  So I think this is the risk/benefit tradeoff that we see.

                        And my last point here is that, although there is individual variation, what people really cared about was being involved in this decision, because that made them feel like they were being respected as a research participant.  And I think that's an essential piece of the puzzle here.  So I'll end there.

                        DR. WAGNER:  Very good.  Thank you both for your presentations.

                        I might ask a quick clarification on the conclusions slide that you put in the middle of your presentation.  I'm not sure I understood what you meant, that it was more important ‑‑ perceived risks were more important than actual risk?  I'm not sure what that ‑‑ you mean it's more important for us to address those because people act on their perceptions as opposed to ‑‑ okay.  So it's important to pay attention to it, it's not that the risks themselves are more important.

                        DR. McGUIRE:  No.  I think it's important to pay attention to them because that's the basis on which people are making decisions, and so that's what needs to be addressed.

                        DR. WAGNER:  Got it, got it, got it.  And it's also important that they perceive they are respected and they perceive ‑‑

                        DR. McGUIRE:  Yes.

                        DR. WAGNER:  So perception is a big part of your message.

                        DR. McGUIRE:  Uh‑huh.

                        DR. GUTMANN:  Could you just go back to the ‑‑ because it's the case that on the "strongly agree," more people strongly agree who care about protecting privacy.  So it's only when you add "strongly agree" and "agree" that you get more who care about advancing research?

                        DR. McGUIRE:  So it's ‑‑ on the far left, you see that the majority of them strongly agree both in ‑‑ it's important to them that they advance research and that they protect their privacy.  On the far right we said, okay, what do you care about more?  Advancing research or protecting privacy?  And that's where you see that they're ultimately, in their decision making, kind of making this forced choice, and the majority are saying, I care more that I advance research.

                        DR. WAGNER:  So that's what the privacy utility determination is?

                        DR. McGUIRE:  Yes.  That's the tradeoff.

                        DR. GUTMANN:  The most striking thing about that chart, even though you're absolutely right when you get to the far right, is how balanced ‑‑ I mean, the strongly ‑‑ how people care about both, really.

                        DR. McGUIRE:  And that's why I think it's hard to make ‑‑

                        DR. GUTMANN:  It at least shows you that the problem that we're tackling and that you're tackling is a real issue in people's minds.  It's very striking.

                        DR. WAGNER:  And as you told us, you imagined that these would be ‑‑ might be different in a population that was less informed, less engaged.

                        DR. GUTMANN:  And less trusting.

                        DR. McGUIRE:  You might find that people in other populations care less about advancing research because it's not directly impacting them.  As I said, these are patients.  And if that's true, then their balance, their privacy utility calculation, would be different.  So, yeah.

                        DR. WAGNER:  Raju?

                        DR. KUCHERLAPATI:  So with regard to privacy, each of you addressed two separate sets of issues.  One issue is the social issue that you talked about: you know, if somebody else has the kind of information that you have, you might be taken advantage of socially or looked down upon.  The other issue that Amy talked about is discrimination at work and other sorts of things; she talked about GINA and the protections it would be able to provide.

                        So I want to understand a little bit more from Madison.  Are you concerned more in terms of social discrimination, rather than professional or job discrimination, which could be protected against by GINA?  And maybe you could expound on that a little bit.

                        DR. POWERS:  No.  I think it would be very difficult to sort of say which of the two I'm more concerned about.  I think these are distinguishable concerns, they often co‑travel.  Sometimes your social status is diminished and your job prospects and your insurance prospects decline with that.

                        But even when your insurance and job prospects don’t diminish, sometimes a disclosure of some medical information, including genetic information, really disrupts your community life, disrupts your home life, disrupts your family life, disrupts the way people think about you.  And some populations have different kinds of stigma attached to different conditions.

                        A fundamental message out of all of that is simply that medical information, as such, is not distinguishable from genetic information ‑‑ so‑called genetic exceptionalism ‑‑ in terms of its sensitivity to either the stigma concern or the loss of consequential social benefits.  Nor is all genetic information on a moral par.  In some communities, some bits of information reveal things that compromise social standing more than others.  And sometimes it's not the ones you think.  There are some populations, for example, that find more stigma attached to a cancer predisposition, while others are more concerned about HIV and medical care as compared to anything genetic.

                        And so there are a lot of reasons why people might worry about genetic information that reflects a perhaps spurious scientific consensus as to what it might portend; that's a special category as well.  That is, you know, scientific knowledge as it moves along may settle for the moment on beliefs about what genetic information has the capacity to show when it, in fact, shows no such thing.

                        DR. GUTMANN:  I just wanted to underline, Madison, you gave a list which I think is important for our Commission, which is one of the reasons I thought it was important for somebody who deals with privacy, as you and others do, to do this.  You gave a list of different values that the desire for privacy may reflect.  And I just think it's important for us to recognize that privacy is a bundle.  I just want to read some of the things you gave, and I'll add one.  It doesn't skew how you want to compare the benefits, but I think it's important for us to have in mind.

                        The values of privacy include unforced deliberation.  If someone knows something about you they can talk to you about it, and you may not want that.

                        Social stigma, even if you're not discriminated against, it may have a stigma to it.  So cancer at one time did.  It no longer does, so the context ‑‑ the other important point you made is that context can matter as to whether these concerns about privacy actually manifest themselves.  But that's a second.

                        The third is the control over private information.

                        The fourth is intimacy associations.  There are some things you share with the meter maid.  There are some things you want to reserve to share with your spouse and friends and not with the general public.

                        The fifth is avoiding discrimination, which is different than just stigma, avoiding real material discrimination.

                        And the sixth, which I just added, is the misuse of information by people with conflicting interests.  And that's not necessarily hierarchical.  You can have a competitor out there for a job, you can be absolute equals.  But if there's some information about you, known about ‑‑ even about your genome, I mean, that may skew an employer in ways that you'll never be able to recognize.

                        So those are just ‑‑ I think it's important to get out there.  And I probably missed some.  But there's a whole package, a bundle of privacy interests that ‑‑

                        DR. WAGNER: Privacy values.

                        DR. GUTMANN: Yes, privacy values.

                        DR. WAGNER:  Christine?

                        DR. GRADY:  Thank you both very much.

                        I wanted to follow up.  Amy, you said perception was a very important thing here.  And you also started by showing how, you know, we live in a world of social media where people share their private information all the time.

                        So do you have a sense, when people answered this kind of question, what they had in mind when they said that protecting their privacy was really important?  What were they thinking?  Or do you know?

                        DR. McGUIRE:  I don't know what they were thinking specifically when we asked them, is it important for you to protect your privacy.  I mean, the entire context of the interview was around, obviously, genomic privacy and sharing their data.  So I'm assuming that it was within that context.  And I think we actually asked the question specifically within that context.

                        I can tell you that, you know, we did ask them about what they see as the main benefits and risks of data sharing.  And they were very clear that the benefit was a social benefit of advancing research generally.  They often focused on their own condition, but it was not a personal benefit that they expected from it.

                        The risks, I think they were a little bit more split on.  About 30 percent of the people in this study felt the biggest risk was just being identified ‑‑ somebody knowing who I am.  Thirty percent of them felt the risk was the future uncertainty of all of this: the biggest risk to me is, I just don't know what they're going to be able to do with this in the future; I'm not so concerned about it now.  And thirty percent of them felt the biggest risk was the potential for insurance and employment discrimination.

                        DR. WAGNER:  Yes, Nita?

                        DR. FARAHANY:  A little bit on this question about what kind of privacy people actually care about, because that's something that I've been struggling with a lot in this area with respect to genetic information, particularly putting it in the broader context of all the other information that's out there about us right now.

                        So everything I type into Google is instantly sent to Google before I can push "send."  And every email that I send has a detailed header which includes every router that it goes through.  And my cellphone, because I carry an iPhone around, can never be turned off and I am tracked always with my GPS location.  And all of that information can be obtained by third parties, and is obtained by third parties, I have no property interest in any of that.  And so it's obtained and sold to other people.

                        And for me, personally, that information in aggregate says a lot more, I think, about me and about my life than does my genetic information.  And so there's a couple of different kinds of interests that I want to articulate that are a little bit ‑‑ slightly separate from the list that Amy just provided, which is a difference between a right to exclude others from my body, like a seclusion interest in privacy versus a secrecy interest in the information itself.

                        And we have traditionally protected relatively well a seclusion interest, right, a right to exclude other people from my body through things like informed consent or a right to keep people out of my home.  But a secrecy interest in information is not something that we preserve particularly well, and all of this data sharing that's happening with my, you know, GPS and everything else seems to underscore that.

                        Given that ‑‑ and I think one more thing is, I think the thing that people are really interested in in their genetic information when they say “privacy” is really secrecy of information, when you give the list of things like being identified and uncertainty about how it's going to be used.  And I think that might just be a losing battle in a way, too, because information gets free.  And so the thing I'm really worried about is actually trying to deal with the uncertainties, and figure out like what are the nefarious uses to which information could be put and how do we put into place, you know, ways to control that?

                        So this is kind of a comment and a question.  Which is, would you agree with that characterization, that the kind of privacy people are really interested in is the secrecy of their information?  Is that information really different, should we treat it with exceptionalism compared to my email or anything else?  And should our focus really be on the flow of information or should it, instead, be on the purposes to which it could be put?

                        DR. POWERS:  That's quite a lot there.

                        DR. FARAHANY:  Sorry.

                        DR. POWERS:  The two papers I provided you track some similar thoughts.  So yes, I'm focusing entirely on informational privacy, as it's sometimes called, not the kind of privacy of access to the body or decisional privacy, which is the domain in which you can make decisions unobstructed, though they overlap, clearly.  They overlap in their consequences; they co‑travel frequently.

                        I think two things.  One, I suspect that privacy rights in the world we live in, whether for genetic information, medical information, or a host of other information collected about us, are not going to be sufficient on their own ‑‑ legally enforceable privacy rights alone won't protect a lot of the interests that are at stake.  There's got to be some kind of combination with institutional design that affects the uses to which information can be put.

                        And perhaps, as much of the internet discussion about privacy these days shows, the real issue, as I said in one of the earlier papers that I handed you, is the capacity to aggregate data now of all sorts and all dimensions.  Stopping aggregation may be less of a losing battle, and still regulatorily possible, as compared to the losing battle of keeping information from getting out there.  I mean, that may be the place to work.

                        DR. GUTMANN:  The question of whether it's a losing battle is a very different question than the question of how people feel and perceive this.  And I just want to say that I don't accept the premise that people feel ‑‑ just because information is out there on the internet, a lot of people, if you ask them, don't know that it's being used that way.  When they find out, they're very uncomfortable.  We have students who do things on Facebook and people find out and they're aghast at it, and they should be aghast because it's information that has nothing to do with protecting their body.  And it harms them in terrible ways.  So the question of whether it's a losing battle is different.

                        I also agree with Nita that there's nothing exceptional about genomics ‑‑ I mean, nothing categorically exceptional.  There are some things that you can find out through genomic testing that are very hard to find out other ways.  It's not a categorically different thing.

                        That said, it's a big area that people have concerns about the same way that people have concerns about the internet.  The internet may be a more losing battle because of the enormous financial interests out there, but that's a different question than their values that people do care about and are being eroded in that way.

                        DR. FARAHANY:  I agree.  I mean, I used that for illustration to say it seems to me like we're shifting in informational privacy areas to really caring about secrecy.  And those examples show we are not doing a good job of protecting secrecy.  Once people start to become aware of it, we need mechanisms by which we can protect it.  But until now, it has been a losing battle.  People have lost each instance to date of trying to protect that information, and I'm not suggesting they should.  Just that that's where ‑‑

                        DR. GUTMANN:  The lesson there is having institution ‑‑ it's not enough just to have the marketplace deal with this.

                        DR. SULMASY:  We had something of this conversation at our last meeting, too, about the kind of information that genomics is.  In a sense, it's hybrid information, it's embodied information, so that in some ways it seems to me that it spans both the sort of invasion of the body and person at the same time that it's dissociable from the person and can be treated like other information.

                        So the ‑‑ there may not be anything else we can do about that information consequentially, in terms of laws and protecting where it goes.  But there is a sense in which, if somebody knows, you know, who my parents are, that's one thing.  But if they see my genome, in some sense they know something about me and my parents, and how my parents' genes came together to constitute me.  That in some ways, to know my genome is not just information about me, but sort of information of me, because it partially constitutes who I am.

                        And so I wonder whether in ‑‑ particularly in any of the empirical studies that have been done, that kind of concern surfaces, or whether legal scholars or philosophical scholars thinking about genomic information have thought about it in this kind of hybrid way that I ‑‑ I haven't really heard much about.

                        DR. POWERS:  Yeah, about me, is me, and is also potentially predictive of who I will be, which is different from the information that's ‑‑

                        DR. SULMASY:  Avoiding genetic determinism, because it's not completely ‑‑

                        DR. POWERS:  Oh, you want to avoid ‑‑

                        DR. SULMASY:  Yeah.  But to a certain extent, partially predictive, yeah.

                        DR. McGUIRE:  I mean, I don't know of any real studies that have looked directly at that.  But my sense, from what we've done, is that people tend to see their germline genetic information, their own human DNA, as them.  We've done similar studies with individuals in the context of human microbiome research, where data is also being released into these databases, and there are more and more studies showing that that might be unique to you as well.  And the participants in those studies so far don't see that as them.

                        But I think as we start showing that you can, you know, uniquely identify somebody on the basis of that information, they might see that more as part of them.

                        DR. SULMASY:  We three, me and my parasites, that's right.

                        DR. McGUIRE:  But you also raised a really important question about sort of family members and what this information says about family members and whether they should have any veto rights over an individual's autonomy based rights to put their information out there in the public domain, right?

                        DR. WAGNER:  Raju?

                        DR. KUCHERLAPATI:  I just want to raise the issue, taking Amy's list of concerns and so on, of whether we're over‑blowing the issues about genomic information.  I mean, in terms of information, you know, look at me: I'm short, I'm bald, I'm a brown ‑‑ you know, brown‑skinned person.  All of these could be considered to be, you know, traits that could be used to discriminate, or that people make fun of me about all the time ‑‑ you know, that Raju is short, right?  It happens all the time, right?

                        And if you go to the internet and type my name, you'd be able to get a lot of information about all of the places that I've been, right?  And people can get a lot of information.  Amy talked about some of this.  I don't, but other people put all of their information on Facebook, and you know, people are blogging every day, saying what they're doing every minute of every hour of every day, right?

                        So there's a lot of information out there.  So what is so special about genetic information?  If I gave, you know, the complete sequence of my genome to any of the Commission members here, I can tell you that none of them would be able to get an ounce more information about me than what they can already see from looking at me.

                        So what is the real concern?  What is it that we're worried about?  I'm still trying to get at that.  How specifically would one be able to use that information?  To make fun of you, Madison, like you talked about?  Or to socially discriminate against you in some other way?

                        DR. McGUIRE:  I think we have to embed this whole conversation in a historical ‑‑ from a historical perspective.  Because there is a very long history of genetics being used in very socially perverse ways that I think has engendered a lot of concern and fear among individuals.

                        And so I think part of it is that we've seen how bad this can go, and do we really want to go back there from a social perspective?  There have also been some arguments about why genomic information is different: because, you know, it doesn't change over time, and it tells you your risk of certain things in the future, and it tells you certain things about your family.  But I'm less convinced by those things.  I think a lot of medical information is just as sensitive as genomic information, as you mentioned.

                        But I do think that the historical piece is a very important piece that we shouldn't lose sight of, because it wasn't that long ago.

                        DR. KUCHERLAPATI:  But we shouldn't be prisoners of history.

                        DR. McGUIRE:  I don't think we should be prisoners of it, but I think we should respect the lessons learned and make sure that we're making socially responsible decisions so we don't repeat history, I guess is what I would say.

                        DR. POWERS:  I would have to agree.  I mean, I think that it's not only the capacity for true predictions, but it's the capacity for drawing false inferences.  You know, medical science is a moving target, and people will reach stronger views about genetic determinism than some of us in this room would, and they'll make judgments on the basis of a false scientific world view, or a shift in consensus on the science and what it's predictive of.  And that can have all sorts of collateral damage.  So those are some of my worries, as well as Amy's worries, which I share.

                        DR. ALLEN:  Okay.  So a question for Madison.  I agree with virtually everything you said about privacy and privacy rights and so forth.  I have my own sort of, you know, list of the kinds of practices and behaviors and actions that raise privacy concerns.  It would include creating genetic data through testing and sequencing; sharing genetic data; using genetic data, including using it in ways that are discriminatory; and acquiring one's own genetic data.

                        And then number five is what I wanted to ask you about, because I haven't heard any discussion about it today.  And I don't hear it very often, and I just wonder whether you think it's an issue at all.  And that is the data retention issue: destroying genetic data.

                        What are your thoughts about whether we should view destroying genetic data as a kind of privacy interest that individuals have that might, you know, cause us to want to shape policies and practices around a kind of non‑retention default?

                        DR. POWERS:  Let me say that you're right, a lot has actually been written about that, particularly from a legal point of view, about storage of tissue samples and what you can do with them later.  And there are lots of things, including with old Guthrie cards, that you can do now that you couldn't do a long, long time ago, right?

                        So one of the facts of life is that we give out bits of information now that we think reveal one thing, and in the future they're going to reveal something else.  Sometimes it may be important for that to be revealed in the future, because it might provide someone with a possibility for look‑back in medical care.

                        So I'm cautious about a default position of destruction.  I'm more comfortable with the idea of reconsent ‑‑ using stored information for the purposes for which it was consented, and not having broad consents that allow researchers to do just whatever might be imaginable in the future.  None of us are that good a forecaster of things.

                        And so I'm all for science being able to store and retrieve and utilize in the future, but with a very cautious attitude toward reconsenting, as onerous as that might sometimes be.

                        DR. WAGNER:  (Inaudible.)

                        DR. ALLEN: Data retention.

                        DR. McGUIRE:  No.  I mean, I think it's a big issue.  In terms of reconsent, I would disagree about the feasibility of, and justification for, reconsenting people for every particular use.  I think people don't have the capacity to anticipate what's going to be done, but they do have the capacity to agree to a certain level of uncertainty.  And I think we agree to that.

                        And I think that comes back to your earlier point about social networking which is, people are uncomfortable about what's being done on Facebook, but not uncomfortable enough to get off, usually.  So they are sort of still, you know, making these decisions and nobody's required to use their cell phone or get on Facebook, and nobody's required ‑‑ you know, they shouldn't be required to release their data. But if they choose to do so, then ‑‑

                        DR. GUTMANN:  Some of the concerns have led Facebook to institute certain possible safeguards so ‑‑

                        DR. McGUIRE:  Because ‑‑ yeah, it's good business, right?

                        DR. GUTMANN:  ‑‑ it's ‑‑ what?

                        DR. McGUIRE:  I mean, it's a good business decision and I think we have some social obligation to try to protect people as much as we can.  But ultimately, you know I think people have to make a decision if they're going to take the risks that are ‑‑ as long as they're informed of what those risks are and can understand them, which are all stipulations.

                        DR. WAGNER:  Dr. Powers and Dr. McGuire, we thank you so much for helping us.

                        (Applause.)

                        DR. WAGNER:  We've got about ‑‑ oh, let's call it ‑‑ let's round it up to ten minutes.  Try to get back here as soon after 3:30 as possible.