TRANSCRIPT: Meeting 4, Session 2

State of the Science

Date: February 28, 2011

Location: Washington, D.C.

Presenters

Bruce R. Rosen, M.D., Ph.D.
Professor of Radiology, Harvard Medical School
Director, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital

James P. Evans, M.D., Ph.D.
Clinical Professor and Bryson Distinguished Professor of Genetics and Medicine, Department of Genetics, University of North Carolina, School of Medicine

Transcript

DR. WAGNER:
Let me ask Professors Rosen and Evans to come to the table. Welcome, Bruce.

I did not see James Evans. There he is. Professor Evans, I didn’t get to greet you this morning. Are you James, Jim?

DR. EVANS:
Jim is fine.

DR. WAGNER:
Thank you. Welcome. We are pleased to have you. In fact, Professor Evans is the Bryson Distinguished Professor of Genetics and Medicine at UNC Chapel Hill, and former chair of the Secretary’s Advisory Committee on Genetics, Health, and Society, an advisory committee of the Department of Health and Human Services.

His research focuses on cancer genetics, pharmacogenomics, the use of next generation genomic analytical technologies, and broad issues of how genetic information is used and perceived.

It is a delight to have you here.

Our other guest is Bruce Rosen, Professor of Radiology and Health Sciences and Technology at Harvard Medical School and Mass General Hospital. His research includes functional magnetic resonance imaging (fMRI), quantification of physiological parameters with multi-modal imaging approaches, and molecular imaging.

Bruce, again, welcome and thanks. It’s good to have you here. I think we’ve got your slides cued up first. We are already slipping a little behind schedule so we’ll forgive you if you speak swiftly and allow us more time for a few questions.

DR. ROSEN:
Great. It’s a pleasure to be here. Thanks very much for the invitation. I’ll try to quickly work through a broad overview of the technology, knowing that several of the talks to come will say more about the applications of these tools.

Given the context of today’s meeting I pulled up an old slide — not that old, about a decade ago — a cover of The Economist, “The Future of Mind Control.” I thought that the headline of the article was an especially nice one to start with. It started with a quote.

In case you can’t read it, it says, “Genetics may yet threaten privacy, kill autonomy, make society homogeneous, and gut the concept of human nature.” In other words, destroy Western civilization, but neuroscience could do all of these things first.

DR. GUTMANN:
And sell more copies of The Economist.

DR. ROSEN:
Exactly. That’s right. But, of course, what did they pick on when it came to neuroscience? It was fMRI screening for lie detection. They ultimately go on to say that these raise questions about brain science that go right to the heart of what it is to be human.

As important as the genome is, we know that ultimately it is expressed in the phenome. That correspondence is a tricky one, and the ability to see what the brain is doing may get us closer to certain elements of exactly this kind of human behavior.

So why imaging? Why is it important? We spent a lot of time, and will, talking about the genetic events that happen within the brain. But of course, the brain acts at many, many levels, from the DNA, to what is happening in single cells, to how groups of neurons act together, to distributed groups of neurons. And, I suppose, at the sociological level, to how distributed brains interact with each other.

Just take a problem, say, like substance abuse. Substance abuse involves abnormalities within the dopaminergic system. It can be studied at the level of neurotransmitters and neuroreceptors, and of genetics and epigenetic effects, but it is also a sociological phenomenon.

To get a complete handle on such a disorder really requires us to span these different levels. I think that is one of the really powerful applications of neuroimaging: it allows us to look at molecular events, at what is happening at a cellular level and with groups of cells, but also to do so at a systems level. In other words, it forms a bridge between what is happening across the entire brain and what is happening at the molecular level.

What I’ll do today is go through some of the seminal technologies to set the stage for the talks that are coming. There are a variety of different tools that neuroscientists are using to study the brain at a systems level, starting with anatomical MRI and functional MRI, and then other technologies such as magnetoencephalography (MEG) and EEG, positron emission tomography (PET scanning), and optical imaging.

Then, of course, there are the ways those interface with techniques that directly perturb the brain, such as transcranial magnetic stimulation, deep brain stimulation, or cortical stimulation. All of these tools are used separately and sometimes all together. I’ll admit that I grew up in the world of MRI, magnetic resonance imaging, so I have a somewhat biased view of this, and perhaps I’ll start with MRI.

These are just some pictures to give you a sense of the quality of data that we’re looking at these days. These are from our highest field strength magnets, but images of this quality will become widespread and soon ubiquitous. That’s been the inevitable progress.

As you can see, we are getting closer and closer to exquisite detail in our ability to visualize the brain, and with that, of course, comes an increased prevalence of incidental findings when you are able to see the brain at this level of detail.

We can also, of course, see substructures within the brain that are critical for behavioral function, such as in this detailed look at the hippocampus here, obviously of great importance for normal memory function and important when memory goes awry in diseases such as Alzheimer’s disease.

Here we can see the clear difference between the healthy plump hippocampus and the atrophied hippocampus in this patient with Alzheimer’s disease. These tools give us an increasingly clear way to quantify these changes and, as a result, do give us an early indication of these critical disease states. And I’ll use the example of Alzheimer’s disease just as one of many throughout the talk.

Another element of this improved image quality is our ability to apply improved computational analysis to the data, not something that radiologists do routinely today but certainly something that we will in the future.

And in the same way our blood tests give us values and normal ranges and little stars next to them when patients are outside those normal ranges, I think it’s going to be clear in the future that imaging data will also provide us quantitative data on different brain structures, their shapes, their volumes.

And we’ll have automatic readouts, so that when somebody goes in for a test we’ll get hippocampal volumes matched against age-matched controls, and then statistical stars next to those that fall outside that range: a range of quantitative data, some of which, perhaps in the setting of memory disorders and the hippocampus, we know what to do with. Others, for example abnormalities or small volume changes in the prefrontal cortex, we may not entirely know what to do with.
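
A minimal sketch in Python of the kind of automated readout described here; the volumes, norms, and threshold are hypothetical placeholders, not real reference data:

```python
# Sketch: flag a structure's volume against hypothetical age-matched norms,
# in the spirit of the "statistical stars" on a blood test report.
def flag_volume(volume_mm3, norm_mean, norm_sd, z_threshold=2.0):
    z = (volume_mm3 - norm_mean) / norm_sd       # standard score vs. norms
    star = " *" if abs(z) > z_threshold else ""  # star values outside the range
    return f"hippocampal volume: {volume_mm3:.0f} mm^3 (z = {z:+.2f}){star}"

# Illustrative norms only: mean 3500 mm^3, SD 300 mm^3 for this age group.
print(flag_volume(3400.0, norm_mean=3500.0, norm_sd=300.0))  # within range
print(flag_volume(2500.0, norm_mean=3500.0, norm_sd=300.0))  # flagged with a star
```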

These tools are already being used today to study normal developmental changes and aging, as well as a wide variety of diseases: dementia, schizophrenia, Huntington’s disease, even things like animal phobias. It’s almost the article du jour: which new disease is associated with changes in the morphometry of the brain, its basic shape and anatomy.

The other element of anatomy that is now emerging in this field is so-called connectional anatomy, the anatomy that tells us how the brain is basically wired. We may look at pictures like this from a textbook that is now several decades old and think that this is a solved problem, but, in fact, in humans there is precious little that is actually known about the wiring of the human brain.

In animal studies, of course, there are ways to trace out the viable connections from one part of the brain to the other. There are no ways we’ve been able to encourage our grad students to let us apply those techniques in them, so we have to turn to noninvasive tools, and a number have been developed over the years.

The most common one is so-called MR diffusion tractography. It’s basically a measure of how water molecules wiggle in the brain. It turns out that they preferentially tend to wiggle along the direction of the white matter fibers. As a result we can produce pictures like this one that give us some sense of the wiring diagram of the human brain.
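
A toy illustration of that idea, with a made-up diffusion tensor: at each voxel, the direction in which water wiggles most freely, the tensor’s principal eigenvector, is taken as the local fiber orientation, and tractography chains those directions together from voxel to voxel:

```python
import numpy as np

# Hypothetical diffusion tensor for one voxel: diffusion is much freer
# along x than along y or z, as it would be inside a fiber running along x.
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.3]])

eigvals, eigvecs = np.linalg.eigh(D)              # eigenvalues in ascending order
fiber_direction = eigvecs[:, np.argmax(eigvals)]  # principal diffusion direction
print(fiber_direction)                            # ~[1, 0, 0]: the inferred fiber axis
```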

People are now beginning to look at these data in great detail, using those automatic segmentation tools that I described just previously, to ask how any given part of the brain connects to all other parts of the brain, and then to build up pictures of this whole-brain connectional network, the emerging field of human connectomics.

Now, there are many very important questions that technology doesn’t answer. For example, how this develops over time. We can see tremendous changes in animal models of this developmental connectivity.

If somebody can click the mouse — click on this, you might be able to get this movie to go. I don’t have that ability here. Just do the mouse and mouse over it and click it should work. No? Okay.

This is supposed to be the development of the gyrification over time in the newborn. The short answer is we do not know — you can try that and see if that goes. Give it a click. No? Okay. That’s the wonders of Microsoft and the world of Macs and PCs.

In any case, the fact is we just don’t understand how the connections of the normal human brain develop. What we do know, however, as this movie would have shown, is that most of the gyrification of the human brain occurs before birth. The movie started at nine weeks before birth. If we are going to learn this, we are probably going to need to learn it prenatally. That’s a great technological challenge.

The next thing I would like to talk about very quickly is fMRI. Functional brain imaging really started two decades ago with this work from Jack Belliveau published in Science, but, of course, it has really evolved since that time. I’ll give you just one example to give you a sense of the state of the current technology.

Let’s say that we know there is a direct relationship between your primary visual cortex, the part of your brain that first processes visual images, and what your retina, at the back of your eye, is seeing.

Let’s say we wanted to imprint this letter M on the back of your cortex. We would have to show your retina an M. It would be slightly distorted, because there is a kind of warping between the retina and the brain, but there is this simple mapping. We actually did the experiment: we showed subjects a figure that looked like this, and we wanted to see what we would see in the brain.

When you first look at the brain with fMRI you typically see pictures like this, grayscale anatomy. The bright spots are the part of the brain that turned on. It doesn’t really look like much of anything. If you look at it on the folded surface of the brain, again you almost get the sense that there may be something there of interest.

But hopefully this next slide will show what happens when you actually look at it on the surface, when you blow up the brain. Basically, take the raisin and turn it back into the grape. There’s our M, right on the back of the brain. The point of this is that we have a direct way to visualize what you are seeing. Of course, because we know that the visual cortex activates when we imagine visual scenes, and we know it activates when we’re dreaming, this raises the possibility that we can actually see what others are thinking or dreaming. We’re not there, but we’re moving in those directions.

Another element of functional imaging that is extremely interesting, and I think relevant for some of the discussions, is this notion of resting state correlations. There was a great discovery made a little more than a decade ago by Bharat Biswal and Jim Hyde. If you image the brain at rest, with no task, nothing being seen, nothing being listened to, no cognitive test being performed, you can take the kind of noisy signal you get from any part of the brain you are interested in and ask what other parts of the brain have noise that looks the same. It turns out that a whole series of areas show connections based on this correlation between their signals.
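
A minimal sketch of that seed-correlation idea, on synthetic data; the “regions” and their shared fluctuation are fabricated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200
shared = rng.standard_normal(n_timepoints)        # a common resting fluctuation
signals = rng.standard_normal((5, n_timepoints))  # five "regions" of noise
signals[0] += 2 * shared                          # region 0: the seed
signals[3] += 2 * shared                          # region 3 co-fluctuates with the seed

# Correlate the seed's resting time course with every other region's.
for i in range(1, 5):
    r = np.corrcoef(signals[0], signals[i])[0, 1]
    print(f"region {i}: r = {r:+.2f}")            # only region 3 correlates strongly
```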

The result is that we can actually see, even when the brain is at rest, multiple networks of the brain. In other words, parts of the brain that are talking to each other all the time. The motor system was shown in that original study, now actually 15 years old. Memory circuits were shown.

This is another way that we can think about connectivity: not direct wiring diagrams, but which parts of the brain are communicating with each other, whether it’s the mnemonic networks that Randy Buckner showed or the so-called default mode. That’s what your brain is doing when you say you’re not doing anything, when you’re just kind of resting or spacing out, which may be the case with a number of people in the room at this point. It turns out these parts of the brain are actively communicating with each other as we’re daydreaming. We can now see those networks, and this, I think, has implications, because we no longer need cooperation from our subjects to perform particular tasks. The same networks are also seen in diminished cognitive states like general anesthesia, and so this has important implications for the state-of-consciousness issues that Dr. Collins talked about.

We’ve talked a little bit about identifying the nodes of these connections with diffusion tractography, and about identifying how these connections work, whether these nodes are functionally connected; we can do that with resting state and functional MRI data. The challenge and the next domain is the directionality of the connections.

Can we understand how information flows within the brain? Of course, thoughts are fleeting and these things happen over millisecond time frames and for this we use other technologies and bring them to bear together with magnetic resonance imaging.

For this question of timing, the ones that we tend to use are things like magnetoencephalography (MEG), kind of the more expensive cousin of EEG, which may be more familiar to you.

These are ways that we can sense what is happening in the brain in the time frame of milliseconds but we don’t have the spatial resolution that we have with tools like magnetic resonance imaging to know where exactly in the brain it’s coming from.

However, technologically the power comes from combining the information from all of these tools: the electromagnetic signals with high temporal resolution from MEG and EEG, and the high spatial and functional resolution from anatomical and functional MRI. The result is that we can actually generate real-time movies of brain activity. For example, in a patient with epilepsy we can now watch an epileptic spike moving across the brain in time frames of milliseconds, and then relate that to what we see in the wiring diagram of the brain, to understand how a spike in one area can disseminate across the brain. This is where the technology is going.

Again, if there is any chance to kind of click on this and get that going we’ll give that a try and see if the movie is linked. No such luck so we won’t be able to see this movie in real time.

The point of this was that there is a kind of complex orchestration of activity across the brain that all occurs in time frames of tens to hundreds of milliseconds, much of it more than we currently understand. These are very powerful tools, but very complex. Even a simple task like looking at a single word shows this complex dance of interactions.

So bringing these tools together, functional imaging, diffusion tractography, and tools in the time domain, basically gives us three different looks at this human connectome, and, indeed, Dr. Collins and his colleagues at the NIH have now funded a large initiative to try to pull all these together. We’ll have this data from thousands of normal volunteers within the next few years. This then sets the stage where we can compare it to what we’ll see in abnormal brain circuits, and we’ll have this data in a way we never have before.

Just very quickly, two other technologies. Molecular imaging, the way to see the neurochemistry of the brain, is typically done through positron emission tomography. Here are kind of two different views of your brain on drugs.

Nora Volkow, director of NIDA, looking at dopamine transporters using positron emission tomography; Hans Breiter and colleagues looking at the action of a dopaminergic drug, in this case cocaine, on the brain of a cocaine addict. These are very complementary technologies.

The most important difference between them, and the one relevant here, is that PET imaging allows us to look at tracer-level doses, very low concentrations of chemicals, which gives us a much wider range of neurochemicals that we can interrogate in the brain, a list that I won’t bother to go through, but essentially any organic molecule that has a carbon or oxygen or nitrogen in it is a potential target for a PET scan.

Here is a recent New York Times article; Dr. Collins alluded to this. A new PET tracer detects the buildup of amyloid in the brain and is expected to become a diagnostic tool for Alzheimer’s disease. Scientifically, we can look at how the development of amyloid buildup relates to the functional networks in the brain that we can see with MR.

Of course, it poses this very interesting question: when we see amyloid buildup, it might well be in a patient with memory deficits, and then we would call that Alzheimer’s disease. It might be in somebody with early cognitive changes.

We might see the same pattern in what we call mild cognitive impairment, and that suggests that this patient will go on to develop Alzheimer’s. Or, we now know, we may see this in people who are cognitively normal. In fact, the data clearly show that in a fairly significant portion of the population amyloid does build up. This is probably predictive to some extent, but not, of course, 100 percent predictive, of the future development of Alzheimer’s, and it may happen a decade or more before the development of the disease. Dr. Collins alluded to the ethical issues there.

This is essentially the issue of the therapeutic gap that Ronald Green talked about in the briefing article he shared with us: the scanning technologies will improve our ability to detect pathology, but they will probably advance more quickly than the means to cure the disease.

Finally, just one final technology, optical imaging. Light does get through your head the same way you can see the red light through my finger here. Hopefully you don’t see too much when I do this but it’s getting through and we can use that as another way to image the brain.

Where I think that is extremely important is in these little guys, subjects that won’t sit still through MR scans but will happily sit on mom’s lap and play some games. Here is a great example of that: the development of so-called object permanence, the ability of children not only to focus on an object but to stay focused on it when it’s put out of their field of view.

Up until about nine months of age, children will engage with an object but lose that engagement when it’s taken away from them. At around nine months of age, when you play this same game, they try to bat away the screen; they remember the object is there. Indeed, it turns out that, using optical imaging, people were able to show that when children develop object permanence, the frontal cortex shows sustained increased activity. Look at these same children a month before, just a four-week period earlier, and you see a completely different picture.

We will be able to do developmental neurobiology in a way we haven’t before with these tools. This also raises the issue of potentially remote sensing. We are not there yet with infrared technology, but conceivably we could be. So the future holds higher field magnets and even more detailed resolution, which will certainly push things further.

The potential for portable imaging systems. These are just our detector systems that were shown off the other day. There are people actually building imaging systems that may not look much bigger than this.

The ability to integrate tools like PET scanning and MR scanning to allow us in a single session to be able to look at molecular markers, anatomical markers, functional markers all in a single diagnostic setting. These are all the tools that are kind of on our path.

Finally, just to wrap up, this notion of biology and psychology, whether they are the same or different. I always thought of psychology as a branch of biology but perhaps that may not be everyone’s perspective.

I’ll leave with this thought. Similar quotes came up in our scoping and our briefing material, and that is that the means for interpreting the information still lag far behind our ability to acquire the images. We’re getting a lot of data; we don’t always know what it means. As with genetics, we are going to be in a period of a decade or more, probably many decades, of trying to sort it all out.

With that, thank you very much.

DR. WAGNER:
Thank you.

DR. GUTMANN:
Thank you.

DR. WAGNER:
We’re going to group questions together so, Jim, as soon as they get your stuff mounted, we’ll let you go.

DR. EVANS:
Okay. So Bruce and I will have to talk offline about whose field has more potential to fundamentally undermine Western civilization.

It’s a real privilege to be here. Thank you very much. Before I launch into DNA sequencing and what it’s good for, I think it’s probably appropriate to get everybody up to speed on really what DNA is and why sequencing might be of potential interest.

DNA is deceptively simple. It really has only two jobs in the living organism: it serves as a store of information, ensuring that our information is passed to each new cell upon division and likewise to the next generation; and it directs the synthesis of proteins, which are vital to carrying out all of the functions of a living organism.

DNA structure, the famous double helix, really contains the explanations for how it accomplishes both of those jobs. As a store of information, it can accomplish that because of the particular pairing that occurs. You can think of the DNA double helix as simply two strands wrapped around one another and it’s held together by specific bonding between the base pairs, those little compounds that hang off of each strand. What we know is that A always binds to T, adenine to thymine. They fit together like a lock and a key. In the same way, any time you find a C on one of those strands, you find a G on the other strand.

Really it’s this fact, the fact that each DNA molecule contains a mirror copy of itself, that enables it to be a store of information. You can simply separate those two strands; one goes to each cell, and the cell can automatically recapitulate the other strand, because it knows, if it sees a G there, to make a C there, and if it sees a T there, to make an A there.
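
As a small sketch of that rule, per-position pairing only (the real complementary strand is also antiparallel, which this toy version ignores):

```python
# Base-pairing rules: A always pairs with T, C always pairs with G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def recapitulate_strand(strand):
    """Rebuild the missing strand from its partner, base by base."""
    return "".join(COMPLEMENT[base] for base in strand)

print(recapitulate_strand("ATGCC"))  # -> TACGG
```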

The DNA structure also shows us how it directs the synthesis of proteins, and that is simply carried out along the other axis of the DNA. It’s the order of these bases that provides the code, the instructions, for protein synthesis. Your DNA molecule really is a code. That’s not a metaphor. It’s a digital code, and it’s the order of those bases that provides the instructions for making all of these different proteins that are necessary for our bodies. One stretch of DNA directs the synthesis of one type of protein. We call such a stretch of DNA a gene.

For example, if we walk along the DNA strand here, what we can see is that we’ll find a gene, in this case, which encodes collagen. Collagen is simply a structural protein that is very important in knitting our bodies together. For example, in giving our skin the integrity it requires.

We walk along a little farther and we come to another gene. This is the globin gene which binds with oxygen and allows our red blood cells to carry oxygen to our body. In one particularly remarkable example, we walk along farther and we get to a gene that encodes a protein called rhodopsin which is in our retinas.

The way you see me right now is that photons are bouncing off of me. They are impinging on your retina and that particular protein, when it’s struck by a single photon of light, changes its shape and, thereby, through other proteins sends signals that allow you to see.

So DNA sequencing then is simply the elucidation of the order of bases in an organism’s DNA. You have about three billion bases arrayed in a unique order and about 20,000 genes within that DNA. It’s the unique order of your bases that greatly influences your health. For example, what diseases you are more or less prone to, how you might react to certain medications.

A human genome, the DNA that encodes you, along with all the rest of the probably-junk DNA and the DNA we don’t understand, is about six feet long and is coiled up in each of your trillions of cells.

Here’s an example, a little bit blurry on this view, of a human DNA sequence. Each of those is an A, T, G, or C and this represents one millionth of the human genome. If we ordered a pizza and I went through such a slide every second, we would still be here on March 11 marching through one single human genome. You can begin to imagine the scale of data that is contained in each of your cells.
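
The arithmetic behind that back-of-the-envelope claim checks out:

```python
# One slide holds one millionth of the genome, shown at one slide per second.
genome_bases = 3_000_000_000
bases_per_slide = genome_bases // 1_000_000      # ~3,000 bases per slide
seconds_needed = genome_bases / bases_per_slide  # one million seconds
print(seconds_needed / 86_400)                   # ~11.6 days: Feb 28 to ~March 11
```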

Our DNA, our genome, is interspersed with genes. I have highlighted snippets of genes here. Importantly there are polymorphisms which differ between people. You are identical in most of your DNA with the person sitting next to you. About every thousand base pairs there is a polymorphic site.

That simply means that you might have a G there while your neighbor has a T. Some of those are totally innocuous, but some of them are important, influencing, for example, traits. It’s such a polymorphism that gives us blue eyes or brown eyes. And sometimes, because of those polymorphisms, if I have an A here and you have a T there, we may be at different risks for various diseases.

So sequencing DNA was of great interest to biologists, and early techniques were developed in the 1970s. There are now a variety of approaches, but I’ll show you one of the most conceptually straightforward, and it’s really elegant. It’s called pyrosequencing.

What it takes advantage of is that, under the right circumstances, one flash of light, a single photon, will be released when a DNA base is incorporated into a growing chain. In this example, if we throw in A, and this chain is trying to synthesize going forward, nothing is going to happen, because there are Gs here and we know it has to be a C that comes next.

We wash that out and we throw in C, and suddenly we get two flashes of light, detected by a very sensitive detector. We therefore know that there are two Cs that come next. When we wash that out and throw in Gs, nothing will happen, because we are not going to get any synthesis. But when we throw in Ts, we’ll get two more flashes of light. Thereby we know that the sequence we’ve just read goes CCTT, and so on.
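
A toy simulation of those wash cycles, with a hypothetical template strand; real pyrosequencing chemistry is of course more involved:

```python
def pyrosequence(template, wash_order="ACGT"):
    """Wash in one base at a time; each incorporation emits one flash."""
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    read, pos = [], 0
    while pos < len(template):
        incorporated = False
        for base in wash_order:
            flashes = 0
            # the washed-in base keeps incorporating while it pairs with the template
            while pos < len(template) and complement[template[pos]] == base:
                flashes += 1
                pos += 1
            read.append(base * flashes)
            incorporated = incorporated or flashes > 0
        if not incorporated:
            break  # guard against characters that pair with nothing
    return "".join(read)

print(pyrosequence("GGAA"))  # -> CCTT, matching the example above
```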

There are techniques now that are seeking to actually physically inspect DNA molecules not relying on synthesis that would simply be able to read out A, T, G, C along that strand. The biggest limitation to all of these sequencing technologies is that the genome is huge.

Carrying out these reactions for an entire genome, going A, T, G, C, is unbelievably slow and expensive. That is where the real revolution in sequencing technology has occurred, with next generation sequencing. What it takes advantage of is massive miniaturization to carry out massively parallel analysis, a million-fold parallelism.

What you are essentially doing is instead of carrying out that reaction in a single test tube you’ve got a grid here with about 10 million tiny wells and you have those sequencing reactions going on in each well at the same time. Thereby, you can, using very sophisticated computer analysis, assemble huge amounts of DNA information.

This accelerating technology and the plummeting cost have only one corollary that I’m aware of, and that’s consumer electronics. The geeks in the room will recognize this as one of the very first PCs, brought out in 1977: the Commodore PET 2001. It cost about $3,000 in those days. Of course, our smart phones now have about eight million times the memory of that original PC. Gene sequencing has gone in the same direction. You saw a similar plot that Francis showed you. In the last few years it has even been beating Moore’s Law.

So what is the long-term promise then of genomics? Well, we’ve seen this from Francis. There’s been an avalanche of genome-wide association studies. Now whole genome sequencing is a practical reality.

These kinds of studies will undoubtedly shed light on the genetic underpinnings of every disease imaginable and they will ultimately transform medical science. It’s extremely important as we think about application to remember that medical science is not the same thing as medical practice.

The fundamental challenges that we face as we begin to apply it oftentimes boil down to this simple fact: medical science certainly is the indispensable foundation of medical practice, but medical practice is far more complex. There are far more variables. Individual values matter, and they differ between people.

Theory alone is insufficient to guide practice. We can’t just think something is a good idea and implement it in our patients. We need evidence. The timeline for translation is long and successful translation into practice is not guaranteed by scientific understanding. This is most heartbreakingly demonstrated in my mind by the case of sickle cell anemia.

We have understood the molecular basis of sickle cell anemia since before the elucidation of the structure of the double helix and, yet, when patients come into the emergency room at UNC and we treat them for a sickle cell crisis, we treat them in much the same way as we did 40 years ago.

It’s also far more expensive and the stakes are much higher in medical practice because we can do active harm to people. Where does the clinical promise of genomics lie? I would say that it’s not in simply assessing risk.

Common diseases have many contributing factors, of which genetics is only one. Thus, the bulk of any individual’s genetic risk for any common disease is by definition modest. Therefore, assessing genetic risk only nudges our estimate slightly.

Moreover, there are very few data to suggest that knowledge of one’s genomic status for common disease is useful or effective in changing behavior so I’m not a big optimist on that front.

When one starts to look at a powerful technology like whole genome sequencing and ask what its uses are, we oftentimes hear the adage, sometimes justifiably derided, that when you have a hammer, everything looks like a nail. That is certainly true.

We shouldn’t indiscriminately wield this hammer of next generation sequencing. The proper question then is what’s the right nail for sequencing technology? I think there are two nails that are ready to be hit by this technology.

One is a diagnostic tool in enigmatic patients, and another is as a public health tool to identify those at dramatically increased risk of preventable disease. Not 20 percent increased risk of prostate cancer or heart disease but dramatic risk. I want to give you two examples.

Here it is as a diagnostic tool. We have a 47-year-old woman who suddenly collapses with cardiac arrest. She is fortunately resuscitated successfully, but her EKG reveals what’s called long QT syndrome, a propensity, on either no stimulus or a minor stimulus like an alarm clock going off, to be triggered into a lethal dysrhythmia.

There are dozens of genes implicated in this disorder. With the old technologies we were unable to find the mutations responsible for it. But by applying next generation sequencing, we can simply look at the dozens of genes, identify her mutation, guide her therapy, and guide the therapy of her family members as well because, of course, these individuals are now all at risk for sudden death and you would really like to know that in order to prevent it.

In addition, next generation sequencing, if properly implemented, could be a potent public health tool. About 0.25 percent of U.S. women, that is, almost 400,000 U.S. women, carry a mutation in BRCA1 or BRCA2, placing them at extremely high risk of breast and ovarian cancer: an 85 percent lifetime risk of breast cancer and a 25 to 50 percent lifetime risk of ovarian cancer.

Critically, knowledge of this risk allows preventative measures. Currently we can only identify such women once several family members have developed cancer which is kind of too late. Right? Next generation sequencing allows population screening for high-risk preventable disorders.

When you add up such high-risk genes, a rough calculation would be that one to two percent of the population in the U.S. carry such mutations. That would be three to six million individuals in the U.S. with very preventable diseases if those disorders are identified.

So what are the challenges to harnessing this? Accuracy is one of them. Next generation sequencing is, in one sense, highly accurate: 99.99 percent sounds pretty good. But when you multiply the corresponding error rate by three billion nucleotides, that means 300,000 errors per patient, and we have to grapple with that.
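
The arithmetic is simple but sobering:

```python
# A 99.99% per-base accuracy still leaves a large absolute number of errors.
per_base_accuracy = 0.9999
genome_bases = 3_000_000_000
expected_errors = (1 - per_base_accuracy) * genome_bases
print(f"{expected_errors:,.0f}")  # ~300,000 expected errors per sequenced genome
```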

Interpreting the variants is, to put it mildly, not always straightforward. Storage and access in the medical record are critical, as are education of patients and the public, issues of consent and reporting, and education of providers.

I want to touch on one of the major challenges that Francis mentioned, and that’s incidental information. Upon whole genome sequencing there are many things that we are looking for but we will also find many things we were not looking for and can do nothing about.

Some of those are trivial or, indeed, beneficial, like variants that guide the use of certain medications. Some are problematic. Do you want to know whether you’re at a modestly increased risk for Alzheimer’s disease when there is nothing you can do about it?

More to the point, though, we will occasionally discover truly disturbing information: lethal, untreatable, late-onset conditions for which we have absolutely no intervention. Most people who know they are at risk for those things choose not to find out.

Some wish to know this information and others don’t and we have to balance that fine line between paternalism, as Francis discussed, and giving patients the autonomy they desire. We have to grapple with how to inform them of such information. We have to protect them from harm but also allow for choice.

I want to finish up with a plea to basically eschew this concept of genetic exceptionalism. I actually am optimistic that we can surmount the challenges and implement next generation sequencing in a clinically responsible way. In many ways we will do that simply by remembering that we have faced many of these problems before.

In the research arena, DNA is different in the sense that it is a uniquely identifiable substance. In the clinical arena, I would submit that there is virtually nothing exceptional about genetic information. We hear, for example, that genetic tests affect others who haven’t chosen to be tested. Well, the medical community has dealt with that for a long time in the realm of infectious disease.

That DNA provides probabilistic information to the asymptomatic is sometimes pointed out as something unique but it’s far from unique. When you get your cholesterol measured, that is exactly what’s going on. Someone is measuring something that has probabilistic information in an effort to prevent disease.

Our genome can’t be changed. Well, that is certainly true, but neither can much of what we discover medically, so that’s hardly novel. The insurance discrimination situation is actually better for genetics than for the rest of medicine. It’s a lot easier to get health insurance if you have a BRCA1 mutation than if you have breast cancer, because of GINA. This did require a solution; it was not true before GINA.

Unexpected results, false positives, false negatives are certainly the norm in genetics but I would argue that they are also the norm in all parts of clinical care. This is a mammogram. Most, by far the majority, of abnormal findings on mammograms that require further evaluation are false positives.

Finally, this issue of DNA being uniquely identifiable. While important in the research realm, I would submit that in medicine we deal with uniquely identifiable information all the time. My zip code, my date of birth, and my spouse’s first name are, I’m sure, in my medical record. That combination is uniquely identifying, just like my DNA sequence. We have to protect DNA information just as we protect other pieces of medical information.

On my last slide I just try to sum up the challenges to realizing genomic medicine. We need to create an evidence-based process to define what is significant and what is not significant in the human genome. We need health-oriented phenotypic annotation. That is, we need to know which of those variants are associated with what diseases, and in what way.

We have to somehow involve patients in this process of shared decision making, along with providers, using a range of technologies, and we have to understand the ethical dimensions, patient preferences, and differing values that are going to dictate who wants what kind of information out of their genome.

We have to maintain a sober focus on evidence so I’ll end with this quote from Hippocrates which I think sums up the idea of evidence-based medicine better than anything in the subsequent 2,000 years. “Life is short, the art is long.” That is, our opportunity is very fleeting. Medicine is a big field and our view of it is very narrow.

“Experience delusive.” That is so true in medicine. Right? We can get fooled by our experience if we are not careful, and certainly judgment is difficult. I will end there. I had a cartoon at the end but in the interest of time I’ll skip it.

DR. WAGNER:
I think we should take a question or two on this subject. Someone on the panel has one. I’m sorry. Nelson.

DR. MICHAEL:
A quick question. I think I might be in the position of agreeing with my MGH colleague that your field will bring down Western civilization quicker. The question comes from my background as a physician and also a microbiologist, and it concerns informational precision.

Given all of the issues that you both described, I guess I’m seduced by the simple language of DNA, in spite of all the imponderables that come with making three billion sequenced bases understood.

A lot of the very exciting technologies that you showed us I think are probably a bit more difficult to categorize. How do you deal with that knowing that both fields must interact with each other? They already do.

How can one find commonality between the more — I would wager, maybe this is provocative — the more precise information that comes from the identification of the linear sequence of nucleotides versus all of the wonderful functional and anatomical data that you show from neuroimaging?

DR. EVANS:
I think the only way to tackle that is through cross-disciplinary dialogue which there is not much of right now. I think we really need to foster ways of figuring out how genetics informs neuroimaging which is probably the direction it will go.

I think part of the problem is we speak different languages so it means we have to bring people together to do that. I don’t think there are easy answers.

DR. ROSEN:
No. There are certainly efforts afoot. For example, in that Human Connectome project, of course, the genetic data will also be acquired. There are now groups that are collecting upwards of 10,000 subjects with genetic data and some of this functional resting state data.

We are kind of just beginning to get to the size of populations where you can begin to look at the correlations in a meaningful way. Most studies, you’re right, to date have been on much smaller subsets and as a result are much less informative.

DR. EVANS:
And I would also mention that, of course, your DNA code is essentially static. It’s what you had at the moment of conception. Whereas obviously your neuroimaging is an extraordinarily dynamic thing.

As we begin to understand epigenetics, that is, how the environment in a dynamic way affects the genome, there will be more opportunities, I think, for this fusion of genetics and imaging.

DR. ROSEN:
And that certainly seems to be extremely relevant for the brain and behavior where the epigenetic factors may be at least as important and maybe much more so in some ways.

DR. WAGNER:
Is there a question from the audience? We need to give an opportunity for that.

Please approach the microphone.

PARTICIPANT:
I have a question for Dr. Rosen. Pardon my layman’s terms but you quickly went through slides on electromagnetic neuro — was it neuroimaging? And you mentioned remote access and covert access. Could you explain that a little bit more, please?

DR. ROSEN:
It was a comment made — excuse me for turning away to answer your question but I’ll answer it. It was a reference made in passing to the infrared technologies. We don’t today know how to do things like functional magnetic resonance imaging or EEG and MEG kind of spatiotemporal imaging remotely, much less covertly.

We don’t know how to do that with technology like infrared either. Of all the technologies, that is the one that might have the greatest potential, at some point in the future, to be done covertly. It’s certainly not a technology that we have in hand today. But when you talk about the possible, that may be possible in the not-too-distant future.

DR. WAGNER:
We are really tight. Steve, could I ask you to be very, very brief on your question?

DR. HAUSER:
Yes. Bruce, a quick follow-up also. You showed us beautifully how one can use these technologies to map a primary sensation. In your example the letter M visually mapped onto the occipital cortex. It’s a leap to go from mapping a primary sensation, I would think, to mapping more complex cognitive processes. Could you just comment on our current capability in that more difficult arena?

DR. ROSEN:
Sure. Just very briefly. Of course, that is exactly right. Those are the best-case scenarios. I show them mostly to give you a sense of what both the technology and the brain are capable of; the ability to give signals in a very precisely registered way is certainly there.

When we talk about more complex behaviors, attention, language, etc., we know that these are not single functions that are being addressed in a single area but they are distributed functions. The opportunities are much more challenging.

This is what the field of cognitive neuroscience is all about: trying to pin down how the orchestration of information across multiple areas subserves these more complex functions. It’s a lot of work, but I’m not pessimistic that we’ll continue to make progress.

DR. WAGNER:
Thank you both very much. We will need to take just the briefest of breaks and return here if we can in five minutes. Professor Evans and Professor Rosen, thank you both.