TRANSCRIPT: Meeting 4, Session 3

Ethics of Emerging Diagnostic and Predictive Tools

Date: February 28, 2011

Location: Washington, D.C.

Presenters

 

Martha Farah, Ph.D.
Walter H. Annenberg Professor in Natural Sciences; Professor of Psychology; Director, Center for Cognitive Neuroscience; Director, Center for Neuroscience and Society; Senior Fellow, Center for Bioethics; University of Pennsylvania

Hank Greely, J.D.
Deane F. and Kate Edelman Johnson Professor of Law, Stanford Law School
Professor (by courtesy) of Genetics, Stanford Medical School
Director, Center for Law and the Biosciences
Director, Stanford Interdisciplinary Group on Neuroscience and Society and its Program in Neuroethics, Stanford Law School
Chair, Steering Committee of the Center for Biomedical Ethics


Transcript

 

DR. GUTMANN:
In the interest of time I’m going to get started and other people will come on in.
 
It’s my pleasure to introduce our two presenters for this session, Martha Farah and Hank Greedy — Greely. Excuse me.
 
DR. GREELY:
It happens all the time.
 
DR. GUTMANN:
My apologies. Martha Farah is the Walter H. Annenberg Professor of Natural Sciences at the University of Pennsylvania. She is founding director of the Penn Center for Cognitive Neuroscience and the current director of the Penn Center for Neuroscience and Society.
 
Dr. Farah is a fellow of the American Academy of Arts and Sciences, a Guggenheim fellow, and the recipient of honors including the National Academy of Sciences’ Troland Research Award and the Association for Psychological Science’s Lifetime Achievement Award.
 
She spent her career exploring many topics within cognitive neuroscience. Her current work focuses on neuroethics, the ethical, legal, and social implications of neuroscience.
 
Martha, we’re delighted that you could join us today. Welcome. Why don’t you start and then I will introduce the next speaker with his correct full name.
 
DR. FARAH:
All right. Thank you very much. I’m very honored to be here and very excited to know that the Commission is starting to think about neuroimaging. It’s an important topic and I hope I can help.
 
In overview what I want to do is discuss some of the ways in which neuroimaging raises new issues and poses new problems relative to earlier technologies including genetics. Then quickly go through some of the main ethical and societal concerns about imaging and sort of triage them for you, put them in different bins that I think —
 
DR. GUTMANN:
Can people hear in the back? You can hear? Okay, good.
 
DR. FARAH:
Okay, great. Thanks. After a talk I gave once, a group came up to me very excitedly. I thought they were going to tell me how much they liked my talk and they said, “We’re speech pathologists. You need professional help.” I manage to be inaudible even when I’m mic’ed. I’ll try to speak clearly.
 
I’ll go through some of the well-known and less well-known concerns about imaging. I’ll give you a taste of what’s out there and what’s coming by presenting two examples of, I think, currently problematic applications of brain imaging and finally some concluding thoughts.
 
What’s new? What is societally and ethically new about imaging relative to other technologies, most notably genetics? Well, in behavioral genetics what we do is we relate people’s genes to their behavior. The field has been successful and found some correlations.
 
We have biomarkers for various kinds of disease states versus health, for personality traits, and for intelligence, as was already mentioned. However, this approach leaves out the effect of lived experience on behavior, which is substantial.
 
By focusing on the brain here in the middle we are looking at essentially what integrates the effects of genes and experience which, in fact, interact in much more complex ways than are shown by just two separate arrows there. Through their interactions they build a brain and the brain drives behavior.
 
By looking at the brain you can see in its position here on the graph that we are looking at an organ that encompasses all of the antecedent causes of behavior and, interestingly, which is one causal step closer to behavior. That suggests that we might actually get some additional predictive power out of brain imaging versus genetic analyses. That turns out to be the case. We already heard mention of the fact from a previous speaker that personality traits do to a certain extent correlate with specific genes. The correlations are small.
 
Usually less than five percent of the variance in a trait is accounted for by a single gene, usually more like two or three percent. In the most garden-variety fMRI studies it’s more like 25 to 35 percent of the variance in personality traits that can be accounted for by patterns of brain activation. So that’s an order of magnitude more.
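
A back-of-the-envelope way to see the “order of magnitude” point is to square the correlations, since variance explained is the square of the correlation coefficient. Below is a minimal Python sketch; the correlation values are assumptions chosen only to match the percentages quoted above, not figures from any particular study.

    # Variance explained is the square of the correlation coefficient (R^2 = r^2).
    # The correlation values below are illustrative assumptions only.

    def variance_explained(r: float) -> float:
        return r ** 2

    single_gene_r = 0.15    # assumed gene-trait correlation
    fmri_pattern_r = 0.55   # assumed activation-pattern-trait correlation

    gene_var = variance_explained(single_gene_r)    # about 0.02, i.e. ~2 percent
    fmri_var = variance_explained(fmri_pattern_r)   # about 0.30, i.e. ~30 percent

    print(f"single gene:  {gene_var:.1%} of trait variance")
    print(f"fMRI pattern: {fmri_var:.1%} of trait variance")
    print(f"ratio: {fmri_var / gene_var:.0f}x, roughly an order of magnitude")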
 
In addition, another thing that we should take out of this diagram is that unlike genetic information or overall life experience, by looking at the brain we can read and understand mental states as well as mental traits.
 
Not just long-term personality, intelligence, whatever, but also current mood, intention, am I lying, do I want to buy this thing. That all comes from brain imaging. It’s in principle available and in practice available as well.
 
I want to just talk about some of the upshots of this diagram. What are the distinctive characteristics in terms of societal implications relative to genetic testing and genetic research? Brain imaging gives us a more sensitive measure of the causes of behavior, that order of magnitude. It reflects environmental determinants of behavior, which are important determinants. As a result it captures learned psychological traits, a trait like attitudes towards other races. There’s been quite a bit of research done on this in cognitive and affective neuroscience. It is a learned trait. It is unconscious, or a component of it is unconscious, and it correlates with patterns of brain activation.
 
It captures psychological states as well as traits. Again, this is what lie detection or neuromarketing is based on. I didn’t mention this before but it suggests a broader range of targets for intervention to not just say, “Okay, now we see how this person’s brain is working,” but now we can figure out how to change the way it’s working pharmacologically, with transcranial brain stimulation, with deep brain stimulation, etc.
 
Relative to psychological testing and research, because that is a technology, if you will, that addresses some similar questions, it is often more sensitive. I won’t talk about that now but we can talk later about some sort of head-to-head comparisons there.
 
Finally, the information sought in a brain imaging protocol for something like personality or racial attitudes may not be evident to the subject who is being scanned. In both of those cases, the protocols that have been used just involve looking at pictures of people’s faces so there is nothing in the task itself that says they are looking to measure your personality or your racial attitudes.
 
So what are some of the problematic characteristics? Again, problematic in their societal applications. I’m going to sort of do a triage here: familiar concerns, more novel concerns, and within the more novel concerns, some that we’ve heard a lot about already and that are maybe a little exaggerated, and some that I think we really need to pay attention to.
 
First of all, the familiar concerns, which I want to emphasize are not unimportant. It’s just that the good news is we have grappled with them in other fields, including with genetics and, as the previous speaker was saying, with other kinds of biomarkers.
 
The first one is the validity of any given measure across ages, levels of socioeconomic status, cultures, etc. Almost all the brain imaging research that’s done is done with college undergraduates. There is reason to believe that in some cases the results may not generalize. The precision of prediction may not generalize.
 
Incidental findings, another very important issue. It certainly acquires new wrinkles in the context of brain imaging but basically there is plenty of bioethical precedent available to help us grapple with this. We’ll hear more about this later I know.
 
Similarly privacy of records. Brain imaging, like genetics, gives you information. When you’re acquiring the image you may be acquiring it for one reason but it contains information that can then be later used for other reasons. Again, we’ve been around this block. It isn’t to say we’ve solved all the problems, but they’re not totally new.
 
There are some problems that are specific to brain imaging that have been discussed a fair amount in the neuroethics literature that have a kernel of truth to them.
 
I don’t want to cross them off our list of things to think about, but I think they have in some cases been a bit overdrawn. One is the idea that this is going to lead to mind reading. This gets back to the question that was asked earlier, “Okay, you can see that somebody is looking at the letter M.” That’s not really mind reading.
 
Can we put Hank in a scanner and find out that he’s thinking, “Hmm, every time I’ve spoken with Martha she’s run overtime. I wonder if she’s going to go overtime again and then I’ll have to cheer her up afterwards because she’ll be feeling bad.”
 
We are not in our lifetime going to be able to do that. I rarely make definitive statements like that but I’m willing to make that one. Nevertheless, there is some nontrivial, personal, cognitive, affective information that can be read from brain imaging beyond just what letter you’re looking at. This is largely driven by some of the very exciting new pattern classification statistical techniques that have been used. We can talk about those more later, too.
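
The pattern classification techniques mentioned here, often called multivoxel pattern analysis or decoding, train a classifier on the pattern of activity across many voxels rather than on a single region’s average signal. Here is a minimal sketch on simulated data, assuming hypothetical trial counts, voxel counts, and effect sizes; nothing in it comes from a real study.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Simulated data: 200 trials by 500 voxels, two experimental conditions.
    n_trials, n_voxels = 200, 500
    labels = np.repeat([0, 1], n_trials // 2)
    X = rng.normal(size=(n_trials, n_voxels))
    # Add a weak, spatially distributed signal to a subset of voxels in one
    # condition; the effect size is an arbitrary illustrative choice.
    X[labels == 1, :50] += 0.3

    # Cross-validated decoding: can the whole voxel pattern predict the condition?
    clf = LinearSVC()
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")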
 
A sort of familiar refrain is, “Oh, they are not even imaging the brain. They’re looking at blood flow or oxygenation.” I mean, yes, but the point is it’s a measure that is not perfectly but quite well correlated with brain activity.
 
Astronomers when they want to know the chemical composition of a star don’t fly out there and bring some stuff back in a test tube. They look at absorption spectra. The tool doesn’t have to be transparently related to the subject of study.
 
Statistical voodoo. There is a lot of statistics that intervenes between the acquisition of the data on a scanner and the scientific inferences.
 
I think Bruce didn’t have time to go through this but it’s really a lot of processing. There’s so much processing that they divide the processing into preprocessing and processing but the preprocessing is processing. There’s a ton of statistics. They are done for good reasons. They are done to more accurately and validly make inferences.
 
All that statistics arouses suspicion in some cases and people sometimes make mistakes. The paper that was so helpfully named “Voodoo Statistics” a couple of years ago, criticizing a certain group of papers in the social neuroscience field, sort of underlined that fact. The point is we know how to do statistics. We are getting better and it’s not an inherent problem with the method.
 
Finally, there is the worry that images are inordinately persuasive, that you can tack a brain image onto any crazy claim and everybody will believe it. I think that is overdrawn as well.
 
Some new challenges. First, and this is kind of related to the last one, there is limited public and policymaker understanding of brain imaging. The images look like pictures but they are inferential. They are not photographic. What that means is that if you don’t know what task was being used, what contrasts were being analyzed, and so forth, you don’t really know what the image means. It sort of invites misunderstanding.
 
There’s also a very common reaction that if it’s in the brain, if it’s sort of biological, it must be innate, and if it’s innate, it must be immutable. Of course, taxpayers are not so happy about investing their money and resources in trying to change or improve things that science has shown are immutable. We have the whole bell curve mess to refer to there. But brain does not mean genetic. Remember that first picture. Enough said.
 
Finally, brain images are no more real than behavioral evidence. For some purposes, legal purposes for example, you may prefer behavioral evidence. There was a big brouhaha recently about a PET study that showed that cell phone use activates the part of the brain near the ear.
 
Well, yes. I mean, I suppose that might make you think again and look more carefully, but the relevant data are the epidemiological data and the animal experiment data. If people aren’t, in fact, developing brain diseases, then who cares whether the PET scan shows activation?
 
DR. GUTMANN:
Martha, you have one-and-a-half minutes.
 
DR. FARAH:
Okay.
 
DR. GUTMANN:
Thanks.
 
DR. FARAH:
I’m going to skip ahead here and just say a word about lie detection now. In my opinion, its validity has not been demonstrated. It has many potential uses including national security, law, personnel screening, and so forth. There have been two attempts to use it in court.
 
I was at the Daubert hearing in Memphis. It was much closer than I ever would have dreamed. Then there is diagnostic imaging in psychiatry. Clearly imaging plays a vital role in psychiatry research, and there is a broad consensus that it does not yet have a role to play in diagnosis. Yet we have Daniel Amen’s clinics and several others saying, “My testimonial. They imaged my brain and now I am better.”
 
So to summarize, what we have here is a combination of things, starting with wishful thinking. And I want to emphasize that there really is great good that can come from this. There are a lot of problems in society that come down to needing to better understand human behavior, and brain imaging is helping us to do that.
 
In terms of the question I got from some of the staffers, you know, what’s the time frame, right now brain imaging is delivering useful information to neuromarketers. I would be surprised if in 10 or 20 years it was not delivering useful information for psychiatric diagnosis.
 
And I would be surprised if in 10 or 20 years, assuming there is research being carried out during that time, we don’t have a pretty clear answer of whether or not it is useful for lie detection. Right now we don’t have the science to back these things up. All we have is the allure of science. Oh, it’s scientific, it’s objective, and so forth.
 
You combine that with the profit motive and we start pushing these technologies out in the marketplace before they’re ready. We don’t know enough about the validity and accuracy and so there’s great potential for harm.
 
Can I give you one more slide? Okay. So concluding thoughts. I think it’s important to view the challenges and the promise of fMRI in the context of something broader and that is the final coming of age of cognitive and affective neuroscience. Not just that we can image the brain but we can understand it and we can control it and improve it.
 
In addition, this growing tendency of laypeople from all over to think of themselves increasingly as their brains. I think it’s worth bearing that in mind —
 
DR. GUTMANN:
Martha, if it’s one slide, you’ve got to go quickly. Thank you.
 
DR. FARAH:
Promising new technology. Again, I sort of focused on problems. Bioethics tends to bring that out in people but there’s a lot of good, a lot of problems that can be solved. It’s the premature uses that are bogus and potentially harmful.
 
For policy I want to point out that policy does not equal regulation. I think the thing to do here is shine a nice strong light on these applications. It can be very sanitizing. There’s little here that couldn’t be corrected by sort of saying, “Look, here are the validation studies for lie detection.”
 
This would also hasten the day that we have more socially beneficial applications of neuroimaging. I want to say that if we don’t pursue this stuff, if in an effort to protect our citizens we try to regulate it away, other nations will develop these tools. I think the U.S. should be helping lead in the science and also in the ethics. Thank you.
 
DR. GUTMANN:
Thank you very much.
 
Hank Greely is the Deane F. and Kate Edelman Johnson Professor of Law at Stanford Law School and a Professor of Genetics at Stanford University School of Medicine. He was previously co-director of the Law and Neuroscience Project Center at the University of California.
 
Dr. Greely is the current chair of California’s Human Stem Cell Research Advisory Committee. He also chairs the Steering Committee for the Stanford Center for Biomedical Ethics and directs Stanford’s Center for Law and the Biosciences and the Stanford Interdisciplinary Group on Neuroscience and Society. Many hats.
 
Dr. Greely’s research focuses on the implications of new biomedical technologies, especially those related to neuroscience, genetics, and stem cell research.
 
Welcome, Hank. We are looking forward to hearing your views. It’s clear that you bridge the broad areas that we’re discussing today so thank you.
 
DR. GREELY:
Thank you. I also don’t use slides so I’m just a talking head here. Thanks for the opportunity to do this. It seems to me that trying to figure out how best to aim the public resource that is this Commission, its staff, and the public attention and respect that its findings will get is really intellectually a very fascinating and difficult problem for me.
 
For you I’m sure it’s a fascinating, difficult, and very practical problem. I hope to try to help you with that a little bit. Three caveats. First, there are 3.4 billion base pairs in a haploid human genome. There are probably 3.4 billion ethical, legal, and social issues. I’ve got now 14 minutes and 17 seconds. I’m only going to talk about three of them.
 
That doesn’t mean that there aren’t a lot of other good issues. In particular I am sorry that I’m not going to focus on the forensic and other nonmedical uses which I think might be a very good field for you to explore.
 
Second caveat. Much of what I’m going to say will be things that have been able to be done before but they haven’t been done before to the same extent. I think this is one of those areas where differences in degree become differences in kind.
 
It’s like automobiles. In 1900 there were 20,000 autos in the world and they were curiosities. By 2000 there were half a billion autos in the world and they changed the planet. I think a lot of what we’re doing here, a lot of what we’re talking about similarly is change in degree leads to change in kind.
 
Let me go straight down to the three things I want to talk about. Noninvasive prenatal genetic diagnosis, clinical use of whole genome sequencing. Those are two types of predictive tests. Then bridging and combining those along with some of the other things we’ll hear both in genetics and neuroscience, issues of how to deal with the data derived from those kinds and other kinds of tests.
 
Noninvasive prenatal genetic diagnosis. We’ve been able to do prenatal genetic diagnosis for 40 years now. It is a technology that has been around for a long time. This past year in the United States somewhere between one and two percent of pregnancies involved prenatal genetic testing through amniocentesis or chorionic villus sampling.
 
So not very many, one to two percent. The rate limiting factor has been the difficulty of getting DNA samples from fetuses. Fetuses are carefully protected from all sorts of outside influences and it’s not easy to get their DNA. Amnio and CVS are not fun, pleasant, cheap, or entirely safe procedures.
 
My wife had one of each and she didn’t like either of them. It turns out, though, that there is a lot of fetal DNA to be had in the bloodstream of the pregnant woman. The DNA is chopped into very small pieces, but by the fifth week of pregnancy five to 10 percent of this cell-free DNA in a woman’s blood serum is from the fetus.
 
The rest of it is, of course, her own DNA. It is now possible to do clinical testing of the fetus with that DNA based on not an invasive procedure, but a simple blood draw, 10 milliliters of blood. It looks like it can be done as early as the fifth week of pregnancy.
 
There is no risk of miscarriage. The procedure itself is simple and cheap. In fact, it probably doesn’t even require another phlebotomy because pregnant women getting prenatal care have their blood drawn a lot. It’s just one more tube of blood drawn.
 
This is currently in clinical use in Europe for Rh factor determination. If the woman is Rh negative, you’re interested in whether the fetus is Rh positive. It’s in very limited controlled clinical use in Great Britain for sex determination when the sex would make a medical difference.
 
Typically when there’s an X-linked disease so you are worried about a male fetus. Two recent large trials were published within the last two months of its use to detect aneuploidies, particularly trisomy 21 which leads to Down Syndrome. Both showed accuracies in the 98, 99, 100 percent range for sensitivity and specificity.
 
All that is interesting. All of it is being driven by increased sequencing ability, by whole genome sequencing approaches that shotgun sequence all these little bits of DNA and then put them back together.
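
One commonly described way the aneuploidy tests mentioned above work is to shotgun sequence the cell-free fragments, count what fraction of reads map to chromosome 21, and compare that fraction against the distribution seen in known euploid pregnancies using a z-score. The minimal Python sketch below uses invented numbers; real assays include corrections (GC content, fetal fraction, and so on) that are omitted here, so it illustrates only the counting logic.

    import statistics

    # Fraction of sequencing reads mapping to chromosome 21 in a set of
    # reference (euploid) pregnancies. All numbers are invented for illustration.
    reference_chr21_fractions = [0.0135, 0.0136, 0.0134, 0.0137,
                                 0.0135, 0.0136, 0.0134, 0.0135]
    mu = statistics.mean(reference_chr21_fractions)
    sigma = statistics.stdev(reference_chr21_fractions)

    # In a trisomy 21 pregnancy the fetal contribution nudges the chromosome 21
    # read fraction slightly above the euploid reference distribution.
    test_chr21_fraction = 0.0142

    z = (test_chr21_fraction - mu) / sigma
    print(f"z-score = {z:.1f}")
    print("flagged as consistent with trisomy 21" if z > 3
          else "within the euploid reference range")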
 
The real payoff is, I think, two to five years away when people start being able to do this for single gene diseases and look at a space of 50, 80, 100 different Mendelian traits, disease traits or non-disease traits, and make predictions about the fetus based on that in a noninvasive, inexpensive, nonrisky way.
 
I believe that when that happens the percentage of pregnancies in the United States that get tested will go from under two percent to over 50 percent and maybe over 80 percent. Issues will follow.
 
The big controversies will revolve around abortion, around eugenics, and around disability rights and disabled communities.
 
There will be hard questions about what kinds of tests one should allow: for serious diseases, for nonserious diseases, for sex, for nondisease, nonsex traits, and for selection of fetuses, ultimately babies, who will have some disabilities.
 
That last one is maybe the oddest, but it has been discussed occasionally in the context of certain disability communities like the deaf. Legislatures will try to decide when this should be allowed. That could be a good issue for this Commission to look into. There are also some less cosmic but important questions.
 
Right now informed consent for amniocentesis is pretty real in the sense that typically a woman has some time to think about it. Usually this is the result of a screening test for Down’s or a family history.
 
If you’re going in for amnio and the big needle, you know you’re going in for something serious. It is unclear whether women signing a form saying they are getting genetic testing based on a blood draw will know what they are getting and will appreciate the answers that come back on 50, 60, 80, 100 different Mendelian disorders. There will be interesting issues of access and insurance payment.
 
One of the big issues will involve Medicaid. Medicaid pays for 40 percent of the births in this country, an astounding figure. Medicaid is a state and federal joint program. Some states will undoubtedly pay for this if it becomes clinically acceptable. Other states may not.
 
There will be access differences from state to state and based on income level. Those are ethical questions. Then, of course, there will be the regulatory and ethical questions of the safety and efficacy of the tests themselves.
 
Second issue, whole genome sequencing for clinical purposes. In May of 2009 my Stanford colleague Steve Quake sequenced himself. He actually had his postdocs and grad students sequence him for $48,000, which counted their time at zero.
 
But he sequenced himself and about a year later some of us thought, “Gee, it would be really interesting to see what we could learn medically about Steve.” We published that in The Lancet. There were some really interesting things.
 
There was a bunch of stuff that wasn’t very powerful and wasn’t very interesting, but he learned he is at probably some heightened risk for sudden cardiac death. When we went through this we realized there were about 90 to 100 things we thought he should know about. At three minutes per thing that’s five hours of genetic counseling.
 
I can talk for five hours, although you won’t let me today, but most people can’t. Who is going to do that counseling? Who is going to listen for five hours? Who is going to pay for five hours of counseling? Whole genome sequencing will raise big questions of accuracy, both the accuracy of the sequencing machines and test methods and the accuracy of the analysis and interpretation.
 
Who decides which variations mean what level of risk? That’s going to be an enormous problem and very difficult for us to figure out a useful medical and social way to deal with. There will be huge questions of the physician’s role. Will this have to go through physicians or other healthcare professionals?
 
Personally I think it should, although we’ve already heard something about the conflict between paternalism and libertarianism with respect to the genome. What will those physicians potentially be liable for? I’m a lawyer, so I think about liability.
 
Will they have a professional duty to talk about everything that’s in the genome or only the thing they’re looking for? I worry that at some point this will get cheap enough that if you’re interested in, say, whether someone is susceptible to Lynch syndrome, a high colorectal cancer risk syndrome, it will just be cheaper to get the whole genome.
 
If you get the whole genome do you tell them about just the Lynch syndrome? Do you tell them about the seven autosomal recessive diseases they are carriers for? Do you tell them about long QT syndrome or their APOE status? What do you tell them about and how often do you tell them?
 
Their genome won’t change but our interpretation of their genome will change every week. Is there an obligation to rerun the material on some regular basis? How do we inform and educate physicians so they can talk to their patients?
 
Physicians who, for the most part, got almost no genetics in medical school and don’t actually have a lot of spare time to learn new things. How — and I think this is going to be the hardest part — how in the world do we educate the patients?
 
How do we convey to them what all this means in a way that doesn’t lead them to make a mistake in either potentially dangerous direction? One mistake I am terrified about is a woman is told she doesn’t have a high risk of breast cancer so she decides she doesn’t need mammograms.
 
That could be a fatal error. Her risk may not be 80 percent but it’s gone from the population-wide 12 percent to about 11.98 percent. Alternatively someone is told their risk of Alzheimer’s is two times normal and not told that’s about 20 to 30 percent.
 
Most of the time, most people with one APOE4 allele are not going to get Alzheimer’s disease. That person overreacts, decides not to go to medical school or law school, becomes a beach bum instead, pulls out all his retirement and savings, commits suicide, does something else foolish.
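
Here is a minimal sketch of the relative-versus-absolute-risk arithmetic behind these two examples, treating the figures quoted above purely as illustrative inputs; the baseline lifetime risks are assumptions for illustration, not clinical values.

    # Relative risk only means something against a baseline absolute risk.
    # All numbers below are the illustrative figures quoted in the talk.

    baseline_alzheimers_risk = 0.12          # assumed lifetime risk, for illustration
    doubled_risk = 2.0 * baseline_alzheimers_risk
    print(f"'two times normal' here means about {doubled_risk:.0%} absolute risk, "
          f"so roughly a {1 - doubled_risk:.0%} chance of never developing the disease")

    population_breast_cancer_risk = 0.12
    her_reported_risk = 0.1198
    change = population_breast_cancer_risk - her_reported_risk
    print(f"the breast cancer result lowers risk by only {change:.2%} in absolute terms")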
 
How are we going to make sure — how are we going to help people deal with this enormous flood of information in a way that is comprehensible to them? There are other interesting questions.
 
What do we do about kids, where, as you’ve heard, the general view is we don’t tell them or their parents things that aren’t actionable during their minority? What happens when we’ve got the whole genome sequence? Who learns? Who knows?
 
Point three, data. What I’ve talked about so far is the noninvasive prenatal genetic testing and the whole genome sequencing, whole genome or other broad genome sequencing; I’m not sure it will ultimately be whole genome.
 
It’s really only as important as the things we can tell people about. If we can tell people about a lot of important disease risks or a lot of interesting traits to parents, they will use it a lot. If we can’t tell them about very much, they won’t use it a lot.
 
But the way we find out what it means is by looking at vast numbers of people and vast numbers of gene sequences and genotypes and vast amounts of clinical data. Well, how do we do that? First there are questions of privacy, of who all can have access to this.
 
In a way this overlaps with the forensic discussion, because you don’t need a 300 million person CODIS database if everybody’s sequence, including their CODIS markers, is in their health records and the FBI or the local police or a civil litigant arguably can subpoena those from the health center. Things like that have happened already.
 
Second, people talk about anonymity. We can’t keep anonymous records for clinical purposes; that defeats the purpose. Maybe you say we’ll strip the identifiers out for purposes of research, but that is a false hope. Any rich database, any database with health records, can be re-identified. Almost anybody in it can be.
 
For lots of people in that database, someone should be able to say, “Aha, based on that information that’s got to be James W. Wagner. Only person it could be,” even if there’s no name, social security number, etc., in there. We are pretending this is not a huge problem. Re-identification is a huge problem.
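
A minimal sketch of why stripping names is not enough: a handful of quasi-identifiers such as birth date, sex, and ZIP code often single out one record. The table below is entirely fabricated for illustration; only the counting logic matters.

    from collections import Counter

    # A tiny, made-up "de-identified" table: one tuple of quasi-identifiers
    # (birth date, sex, ZIP code) per record, with names and SSNs already stripped.
    records = [
        ("1955-08-07", "M", "30322"),
        ("1955-08-07", "F", "30322"),
        ("1962-01-15", "M", "19104"),
        ("1962-01-15", "M", "19104"),
        ("1948-11-30", "F", "94305"),
    ]

    counts = Counter(records)
    unique = [combo for combo, n in counts.items() if n == 1]
    print(f"{len(unique)} of {len(records)} records are unique on (birth date, sex, ZIP)")
    # Anyone who knows those three facts about a person can point to that person's
    # record, even though no name or social security number appears in the table.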
 
Third big problem here, I think, is the issue of incidental findings. It’s been talked about. I’ll just say it applies here as well. It applies both in the direct clinical sense. It applies in other research senses.
 
I think the fourth, though, the last, is maybe the most important. Might be something that the Commission can do something about, although it might be too big an issue for it. What do we do about people’s consent? If we’re really interested in getting all this sequence data and health data, electronic medical record data, useful, incredibly useful potential resource, do we have to ask everybody about each study we do? Do we have to ask them whether they want to be researched on at all? Do we say this is part of your duty as a good citizen, as a beneficiary of the healthcare system to participate in this controlled work? What we are doing now I think is in some ways the worst of all worlds. We are using data broadly.
 
When people don’t understand that it is being used broadly, they give it for what they think is a narrow purpose, with a little bit of language tucked in at page 17 of the consent form saying, “And may be shared for other research or other medical purposes.” So people who participate in a Parkinson’s disease genetic study in Rhode Island find out, or they haven’t found out yet, that their genotypes and phenotypes are in dbGaP, an NIH database which is accessible to researchers all over the world for research on anything those researchers want to study. That question of control is a tricky one. What should we require, and should the culture change in terms of what we require?
 
I think that is a potential land mine right now because I think there are a lot of people whose DNA and materials and data are being used for research who don’t know what it’s being used for and will be annoyed and unhappy when they find out. Those are my three quick issues. There’s a lot more I could talk about but with four seconds left I’ll end.
 
DR. GUTMANN:
Thank you very, very much. We have opened up here not only what the science does but what the potential implications are for our role as a Commission in giving advice on the ethics and social responsibility of new technology.
 
Let me just ask you — let me start with Martha and ask Martha a question because this is the new science. You said one of the red herrings out there is mind reading, that neuroscience is not capable of mind reading, nor is it in the foreseeable future capable of it.
 
Could you say a little, because there is a very big gap in neuroscience and it’s often misunderstood: the brain isn’t the same as the mind. Let me just give you an example. There is a big issue about causality and how much we can predict and, therefore, control behavior.
 
I was the keeper of the clock here. Let’s just say I had told you to stop after 10 minutes. I said, “Martha, stop.” And you said, “No, you gave me 15 minutes.” I said, “Okay. Go on.”
 
The reason you gave me, that it was 15 not 10, actually caused me to change my behavior, but the cause was a reason. Right? The cause was nothing out there. It was a reason that caused me to change my mind. How does neuroscience understand how reasons are causes?
 
They are causes. I’m not saying that the reasons came from some spooky place, but reasons cause us all the time to change our behavior. Isn’t that a limit on what neuroscience right now can do predictively?
 
DR. FARAH:
Oh, boy. They didn’t tell me there was going to be metaphysics at this thing.
 
DR. GUTMANN:
It’s not metaphysics. There’s nothing metaphysical about what I just said.
 
DR. FARAH:
No, but I feel that there is a sense in which a good — I mean, a good answer to that question does somewhat get us into metaphysics. Let me sort of —
 
DR. GUTMANN:
You’re treading on my ground.
 
DR. FARAH:
Here’s the thing. That hypothetical exchange about the time definitely has an explanation in terms of reasons. It also has an explanation in terms of patterns of neural firing.
 
DR. GUTMANN:
Sure.
 
DR. FARAH:
The metaphysics is basically the incredible mystery of why that physical story comports so well with the story in terms of persons and reasons and goals and desires and all those kinds of things.
 
For some purposes the right — for some purposes the useful analysis, and you might even say right analysis, is the analysis in terms of people’s reasons for doing things. But for many socially important problems, a very useful kind of level of description is in terms of chains of physical events.
 
In some sense it’s an empirical question whether you can do better predicting behavior with a kind of a reasons-based account or with a neurons-based account.
 
DR. GUTMANN:
In this case there’s no doubt that you could do better with the reasons than the neurons. Nobody has a clue as to what neurons caused you to tell me accurately that it was 15 minutes.
 
DR. FARAH:
By the way, somebody told me 17 but I won’t out the staffers. Anyway, here’s the thing. Yes, this would be a case where, if the question was whether neuroscience is going to help us with this kind of situation, managing it better, improving communication or whatever, the answer would clearly be no. I want to —
 
DR. GUTMANN:
That’s all I wanted.
 
DR. FARAH:
But I want to dig in my heels and say that there are many important situations from marketing to education to security screening and so forth where those neurons tell a lot.
 
DR. GUTMANN:
I agree.
 
DR. FARAH:
Okay.
 
DR. GUTMANN:
So it’s not — but what I think we’ve just established is it would be a mistake and futile to think of it as an either/or.
 
DR. FARAH:
Absolutely.
 
DR. GUTMANN:
That is, if you don’t integrate the causality of reasons along with the causality of external environment on the brain and the mind, you don’t have the full story. That’s all.
 
DR. FARAH:
Agreed. Agreed.
 
DR. GUTMANN:
Okay.
 
Yes, Nita.
 
DR. FARAHANY:
I wanted to focus on your third category there, the identification category. I wanted to look at it from the perspective of both neuroimaging and genetic information. We have the Human Genome Project. There is also the Human Brainome Project and attempts to try to do as much neuroimaging as possible to try to see the commonalities across different individuals and also differences and do a lot of the same types of population statistics that we have done on the genome side with the brainome side.
 
Obviously there’s different information that the two fields can yield but I wonder if you could speak to some of the ethical issues that you raised to see if you think there are meaningful differences for data collection from neuroimaging than genetic collection.
 
DR. GREELY:
I think there are differences. I’m not sure I think they are meaningful. Almost everything I said with respect to the questions of genetic data could be applied to neuroscientifically acquired data, and it also applies to old-fashioned health data: these questions of incidental findings, of consent, of privacy, and of re-identification and lack of anonymity all apply. This could be a good cross-cutting topic for the Commission.
 
I also think, getting back to the issue of forensics, forensics writ broad as encompassing not just courtroom use, and not even necessarily criminal or legal system use, but nonmedical uses of both genetic technologies and neuroscience technologies, that there are similarities between some of the marketing issues whether it’s done based on someone’s genome or based on somebody’s MRI scan.
 
Certainly on many of the criminal issues, the criminal defense issues which you’ve been following very closely. Defendants are saying, “My brain made me do it.” They are also saying, “My DNA made me do it.” I think that might be another overlapping area.
 
Whether as a strategic matter you’re better off looking for an area of overlap that deals with both of these or focusing on just one is, I think, a hard question. There are pluses and minuses to both of them and I’m glad I don’t have to make the decision.
 
DR. GUTMANN:
Raju.
 
DR. KUCHERLAPATI:
Thank you for your presentations. They were terrific. I want to address a question to Hank. This morning several speakers talked about this issue of what we should tell the individual when whole genome sequencing is done and whether we should inform them only about those findings that are actionable versus those findings that may not be immediately actionable.
 
Can this problem be solved by just asking the individual initially what it is that they would like to have? Would that solve the problem, or are there other types of issues that are very difficult?
 
DR. GREELY:
I think that could help the problem. I would be reluctant to say that it could solve the problem. For one thing, I think it’s a different problem in different contexts. It is a somewhat different problem if you’re a researcher who stumbles on something in your research as opposed to the treating clinician.
 
This is your patient who you’ve ordered this genome test for clinical purposes. I think that has somewhat different obligations and liabilities. I do think exploring in advance before either the research testing is done or the clinical testing is done what kinds of information the patient or subject would like can be a very useful step.
 
I worry a little bit, though, that the subject or the patient doesn’t — it is one thing to answer in the abstract, “Yes, I think I would want everything. No, I don’t think I would want everything.” It may be a very different thing if the patient is then confronted with a very specific kind of disease or very specific kind of question.
 
Getting some information in advance I think can be helpful. It may not always be determinative because the patient’s abstract answer might be quite different from the patient’s actual wishes or the research subject’s actual wishes in a concrete case. Let me give you an example.
 
A subject may say, “I don’t really care about getting predictive stuff. It doesn’t matter to me.” But it turns out that if you can predict, say, Alzheimer’s disease, and this particular patient or subject’s parent died of Alzheimer’s disease, then Alzheimer’s is something this person really, really cares about and would, even if there is no intervention, like to know about. When she answered the question up front in the abstract, she wasn’t thinking about Alzheimer’s disease in particular.
 
Maybe if you’ve got Alzheimer’s findings, there should be some re-exploration of the specific kinds of findings you might be able to provide. I do think it’s the sort of thing — this is, of course, one of the huge reasons why I think direct-to-consumer information is so risky.
 
If it’s only SNP chips with the kinds of fairly weak associations that they have right now, there is not all that much power there, but if you get to whole genome sequence and there are very powerful associations, I think almost everybody is going to need a qualified, skilled professional to help them understand what this means.
 
DR. GUTMANN:
Thank you.
 
Anybody in the audience have a question before we take a break? I see a hand back there. Yes. Please introduce yourself. Just tell us who you are. Thank you.
 
LISA:
— and I’m just here because I’m interested. My question is for Dr. Farah. You mentioned that we’re a long way from mind reading. Yet, a couple of years ago I saw on the History Channel they showed a mute paraplegic where they had implanted a chip in his throat.
 
His thoughts were going onto a computer screen. Maybe I’m confused as to the difference between mind reading and being able to capture human thoughts onto the screen. I’ll let you respond.
 
DR. GUTMANN:
Great question. Thanks.
 
DR. FARAH:
Okay. Well, I’m thinking there are two possible technologies that you — again, it’s very strange to talk to somebody with your back to them. Forgive me, Lisa. There are two possible technologies that you could be talking about.
 
One does sense microscopic movements in the vocal apparatus and basically looks at subvocalization. It’s basically a way of just using speech that you can’t hear. I think it would be a cheat to call that mind reading.
 
There have also been some early human trials of chips implanted in the brain on parts of motor cortex that allow a paralyzed person to move a robotic arm or move a cursor on a screen or type a message on a computer screen using thought alone by basically learning to kind of think about movements and direct the cursor or the robotic arm accordingly.
 
There is a sense in which that’s mind reading, but consider this. It’s basically translating brain activity into movement in at most three-dimensional space. Nontrivial but, you know, to read a thought like Hank is thinking now, “Oh, poor Martha. She blew it again. She went over and the Chairman had to tell her to end early.”
 
That kind of thought, if you think of it as a spatial analogy, is a point or a path in an extremely high dimensional space, with all the different dimensions of meaning and intention that a person could conceive of. That is what we are a million light years away from doing.
 
DR. GREELY:
Hank is actually just thinking it’s too bad for Martha that the Chair of the Commission is the President of Martha’s university.
 
DR. GUTMANN:
Not only do I admire Martha but she has tenure as well.
 
Thank you all. We’re going to reconvene. This opens up a host of issues and gives us a lot to think about, whether our minds can be read or not. Also, you don’t need to read our minds to know that we will reconvene at 1:15 promptly, so thank you all very much. Thank you again for wonderful presentations.