Transcript, Meeting 15 Session 1

Date

December 18, 2013

Location

Washington D.C.

Presenters

Amy Gutmann, Ph.D.
Commission Chair
 
James Wagner, Ph.D.
Commission Vice Chair
 
Mildred Z. Solomon, Ed.D.
President, The Hastings Center
Clinical Professor of Anesthesia
Harvard Medical School
 
Steven E. Hyman, M.D.
Founding President
International Neuroethics Society
Director, Stanley Center for Psychiatric Research
Broad Institute of Massachusetts Institute of Technology and Harvard University
 
Pat Levitt, Ph.D.
Chair-Elect, Neuroscience Section
American Association for the Advancement of Science
Provost Professor, Department of Pediatrics
W.M. Keck Chair in Neurogenetics
Keck School of Medicine, University of Southern California
Director, Program in Developmental Neurogenetics, Institute for the Developing Mind
Children’s Hospital Los Angeles
 
Paul Root Wolpe, Ph.D.
Director, Center for Ethics
Asa Griggs Candler Professor of Bioethics
Emory University

Transcript

DR. GUTMANN: Good morning, everybody. If I could ask everyone to take his or her seat.

I'm Amy Gutmann, I'm President of the University of Pennsylvania and I'm chair of the Presidential Commission for the Study of Bioethical Issues. I welcome you all and Happy Holidays to everybody, too.

On behalf of myself and our vice chair, President James Wagner, Jim is President of Emory University, we welcome you. This is our fifteenth meeting of the Bioethics Commission and we have a very important charge that we're moving forward today. But before we continue, I want to note the presence of our designated federal official who is our Bioethics Commission Executive Director, Lisa Lee. Lisa, would you stand up so everybody can recognize you?

I'd also like to ask the Bioethics Commission members to introduce themselves. Barbara, why don't you begin. And put on -- just so you all know, put your red light on and when you're done, tap it off because sometimes they all can't be on at the same time.

DR. ATKINSON: Hi. I'm Barbara Atkinson, I'm the retired executive vice chancellor and executive dean at the University of Kansas School of Medicine.

DR. SULMASY: Dan Sulmasy, Department of Medicine, Divinity School and MacLean Center for Clinical Medical Ethics at the University of Chicago.

DR. ALLEN: Hi, I'm Anita Allen, I'm vice president for faculty at the University of Pennsylvania where I'm also a professor of law and philosophy.

DR. ARRAS: I'm John Arras, I teach public health ethics and philosophy at the University of Virginia.

DR. GRADY: Good morning. Christine Grady, from the Department of Bioethics at the National Institutes of Health Clinical Center.

DR. KUCHERLAPATI: Raju Kucherlapati, Department of Genetics and Medicine at Harvard Medical School.

DR. FARAHANY: Nita Farahany, Professor of Law and Philosophy and Genome Science and Policy at Duke University.

DR. HAUSER: Steve Hauser, Department of Neurology, University of California, San Francisco.

DR. GUTMANN: Thank you.

Today we're continuing our work on neuroscience and related ethical issues in response to the charge we received from President Obama in July, launching the BRAIN Initiative. As we engage in discussion and deliberation of this topic, I want to emphasize that, while we received our charge as part of the President's BRAIN Initiative, our focus is actually wider than the initiative. President Obama asked us to review the considerations of neuroscience more broadly, considerations of both neuroscience research and the application of neuroscience research findings. Our overarching goal and a high expectation is that the Bioethics Commission will make ethical and practical recommendations that will inform the conduct of the BRAIN Initiative, but have a life that goes beyond the BRAIN Initiative.

It is important that everyone also recognize that our Bioethics Commission is not tasked to review institutional research protocols. I say that because we get requests to do that, and we are not supposed to do that; indeed, we're not allowed to do that. We are not a large IRB. We will consider how best to integrate ethics into neuroscience broadly and into the BRAIN Initiative specifically, and we will contribute to that. But this Bioethics Commission will not be the ultimate locus of that integration. We will recommend how, moving forward, that integration should go.

Before we get started, I want to take just a moment, as I do every meeting, to explain how we will take public comments. At the registration table out front there are comment cards. Commission staff members here have comment cards as well, and they're all wearing name badges, but I'd like them all to stand up so we can -- everyone can see.

Any comments you have, if you want to take a card, just find a Commission staff member and they will be happy to help you. Write down any comments you have on one of these cards, hand it to any Commission staff member, the staff member will give me the cards throughout the session and, time permitting, either Jim or I will read them out loud. All comments, whether read out loud here today or not, are reviewed and logged as public input. We read them all and we thank you in advance for participating in our meeting.

And now I'd like to ask Jim Wagner to say a few words.

DR. WAGNER: And just a few words. First, welcome to Commissioners, to our expert guests and to the audience. And that's such a good point; this card system is such an improvement over the old stand-up microphone system, because we never could squeeze in all the open questions during one of these sessions. But with the cards, we know that everybody's question can be reviewed and addressed even beyond the time of the meeting.

I'd like to reinforce the great opportunity that this task gives us relative to others. We've mentioned this at a prior meeting as well, but it's important. And I think we want to encourage the White House and Cabinet, which often give us assignments, to see the advantage of being anticipatory -- getting engaged, getting ethics engaged early in projects like this rather than being reactive. That allows us, rather than just distilling lessons learned, to integrate ethics from the beginning in a way that enhances the value of the science that derives from important work, in this case the BRAIN Initiative.

And of course, many imagine that neuroscience is a special, critical area in which to apply bioethics, particularly brain-focused neuroscience research, where, for many of you, being and biology appear to intersect. So it's a great project that we've been engaged with. It's a delight to be engaged early and a delight to be engaged with each of our fellow commissioners on this. Welcome to all.

DR. GUTMANN: Great. Thank you, Jim.

We're going to begin with a panel that focuses on integrating ethics and neuroscience through education. As many of you know, the Bioethics Commission is committed to improving bioethics education across disciplines and educational settings. And we have already contributed to that in other realms outside of neuroscience with pedagogical materials. There's a wide range of education that not only can happen but that we hope will happen, and we have a great panel today.

We actually have four people on the panel today, but one of them got stuck in Boston. So Steve Hyman, are you on the phone?

DR. HYMAN: I am. And I would just say for the first time, I understand why some people want to live in the sun belt.

(Laughter.)

DR. GUTMANN: Right. Even Washington and Philadelphia are better. But we are sorry you can't be with us, but we'll ask Steve to contribute remotely. You sound like the voice of God coming out of there, Steve, so you'll have a very high bar to --

DR. HYMAN: That's a moral hazard for me.

DR. GUTMANN: Right.

We'll begin by hearing from Dr. Mildred Solomon. Dr. Solomon is President of the Hastings Center and Clinical Professor of Anesthesia at Harvard Medical School. She directs the school's fellowship in medical ethics.

Prior to assuming leadership of the Hastings Center, Dr. Solomon was senior director of implementation science at the Association of American Medical Colleges, and before that she was Vice President of the Education Development Center. Dr. Solomon has founded or led many educational programs in bioethics, including a project funded by the National Institutes of Health that developed and pilot-tested a high school bioethics curriculum. She has served on committees of the National Academies, was a member of the U.S. Secretary of Health and Human Services' Advisory Committee on Organ Transplantation, and consults to numerous foundations and government agencies.

And I have to say that we are very pleased to welcome you, Dr. Solomon.

DR. SOLOMON: Thank you, very much.

I'm very appreciative of this opportunity to address the Commission. And I've taken your charge to me as addressing the question of how should more scientists be educated about the ethical aspects of their work. But I'm going to respond to that question both with respect to neuroscience and also with respect to the preparation of scientists working in other areas of national importance.

The U.S. is a leader in the preparation of scientists. We have an elaborate and high quality infrastructure to produce PhDs in both the basic and applied sciences. U.S. institutions of higher education remain a mecca for training in science and engineering. And yet, our ability to prepare scientists to consider the societal and ethical implications of their work is only in its infancy.

So far, most bioethics education has focused either on the responsible conduct of research, which aims to prevent misconduct -- thankfully a rare event -- or on research ethics. I think a continuing focus on research ethics is critical; that's a very important goal. But we've done very little in a third area: helping scientists think about the very compelling questions of how our society should be using and managing its technological prowess.

So that's what I want to focus on in my remarks, this third area. I think we have an opportunity and arguably an obligation to engage scientists in explorations of the ethical and social implications of their work, particularly given the wide array of ethical issues that neuroscience and neurotechnologies will raise. So neuroscientists, particularly, should have the capacity to anticipate the societal impact of their discoveries, and they should be socialized during their training years to see the merit in this.

I'm going to take just a second to pause over that assertion. Why? Why should we focus on scientists? Some will argue that that's not the business of scientists, that we have philosophers and bioethicists and ordinary citizens, and maybe just leaders who should do this. So I'm going to pause and propose five reasons, and I'm sure there are many more that we might want to talk about. But for me, the things that seem urgent are, first of all, that by definition scientists are a kind of professional. And by definition, professionals think about the ends of their work. They think about the purposes to which their labor is devoted, and that's what makes them professionals rather than technicians. So almost by definition, this should be a part of scientists' education.

A second reason is that, given the power of science and technology in our society, bioethics education can cultivate humility and self-reflection. And I think that's an important attitude and stance that we want to see in our scientists. That would mean basic bioethics literacy and familiarity with historical examples of when science has lost its way, like Guatemala or the worldwide eugenics movement. These can be correctives for scientific hubris.

A third reason is that scientists are best positioned to constrain hyperbole by others. They have the credibility to say what both the promise and the limits of their work are. Let's think about this in genomics, for example. I would argue that geneticists are the best people to talk about -- or try to correct -- statements that overemphasize genomic determinism and under-emphasize the social determinants of behavior. So likewise, neuroscientists are best positioned to clarify that just because there are locations in the brain that are activated during moral problem solving, that doesn't mean that education and social role modeling play no role in how humans develop the ability to reason morally.

A fourth reason is that bioethicists and scientists need one another. They need to do this together. Normative analysis cannot proceed without an accurate understanding of the research and technology under examination. It's just obvious that without good knowledge of the science and the technologies, ethical critiques can make two kinds of errors. They can overestimate the likely harms, but they can also miss problems. So there needs to be close engagement.

And the opposite is true, too. Scientists can't really do that close engagement and engage with bioethicists unless they've developed the ability to discriminate between empirical and normative questions and have cultivated the imagination to anticipate the effects of their work.

And then the fifth reason I'll put forward is that the very diversity of our nation, our pluralism, requires that we develop a shared secular way to examine moral questions. This is particularly true for two reasons. First, there's an increasing number of young postdoctoral researchers who have been foreign-trained and come with widely different religious and cultural perspectives.

And the second reason is that neuroscience itself raises fundamental questions about what it means to be human and about our concepts of responsibility, blame and intentionality. These are issues that the religions of the world have addressed for millennia, and therefore we need to have a secular way in which people coming from these other sectors can talk about these issues.

In the time remaining, I'm going to try to address a set of quick questions about, okay, these are the reasons why, but what kind of bioethics should we aim for? And I want to introduce a phrase. I'm going to argue that we want transformational learning. So what do I mean by transformational learning? Transformational learning is not just about cognitive learning; it's not just about critical analytic skills, which of course are very, very important. It's also about habits of mind, attitudes and dispositions.

Learning is transformational when it is not only about acquiring content but changes the learner in some profound way. And so in my view, we should be designing learning experiences likely to result in scientists who are more fully engaged as reflective, thoughtful, and deliberative persons -- who are affected as persons -- with the habits of mind and dispositions that we want to cultivate.

I'm also using the word "learning." Notice I'm avoiding the word "training." The habits of mind, attitudes, and dispositions that bioethics cultivates are not capacities that can be trained into people. But I do think they can be cultivated. And I've also avoided the term "education." Sometimes it crops up, but we should be more interested in designing for learning than in educating. Educating is too uni-directional; it implies that someone is doing something to someone else. It assumes an expert operating on a novice.

Learning, on the other hand, is far more active. Learning is something you do for yourself. And you'll see in the suggestions that I'm going to end with that they are all about enabling people to learn for themselves in a virtuous cycle of asking questions, seeking answers with colleagues and peers, and asking more questions.

So transformational learning is self-directed, but it is also socially constructed and shaped by peers in very unconscious ways that have to do with how early career professionals are socialized. And again, in the suggestions I'm going to make, the process of socialization and the social construction of knowledge is a theme that you'll see.

Learning for whom? I don't think all scientists need to develop the same levels of expertise. Just as we expect all scientists to understand probabilistic reasoning and statistics, we don't expect them all to be statisticians. So in my written remarks, which I know I'm going to have to refer people to, I've talked about a two-tiered approach: basic bioethics for Ph.D.s and postgraduates, and deeper engagement for those who need to go further.

And what would some strategies be? My written remarks talk about six things that we could do for deeper engagement: an ELSI program for brain science; the establishment of a learning community for all the scientists across the country involved in brain science; an annual symposium; a bioethics intensive workshop experience for scientists; a survey that can be used in multiple ways, which I'm happy to describe during questions and answers; and encouraging the BRAIN awardee institutions to build ethics capacity to address normative questions right into their grant work. I'm happy to describe those in more detail in the question and answer period.

I have 14 seconds left. Here's the summary. Brain science is an important context. Education should be challenging, and it will be challenging because there are so many disciplines. Three areas; let's not train or even educate, but rather design for transformational learning.

DR. GUTMANN: Thank you very much. And your list is going to be very helpful to us. So it will -- we will read your remarks, but also follow up on that.

Our second speaker is Dr. Steven Hyman, who is director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, and Distinguished Service Professor at Harvard. Dr. Hyman is the chair of the Institute of Medicine Forum on Neuroscience and Nervous System Disorders, a Fellow of the American Academy of Arts and Sciences and of the American Association for the Advancement of Science, and a member of the American College of Neuropsychopharmacology.

Dr. Hyman also served as director of the National Institute of Mental Health and as the first faculty director of Harvard University's Mind, Brain and Behavior Initiative. He's founding president of the International Neuroethics Society and editor of the Annual Review of Neuroscience. And I was expecting, Steve, to say thank you for joining us at this meeting physically, but thank you for joining us virtually. And we look forward to hearing your comments.

DR. HYMAN: Well, thank you very much.

And I have to say, not only did I not get to join you physically but I still have to wake up at 4:00 in the morning to try to get there. So I see no justice in this snow. But that's another matter.

I want to take off from Mildred Solomon's presentation, in fact, I was scratching out some things I was going to say so that I could highlight areas of agreement and some modest areas of difference, not really disagreement.

I think what Mildred Solomon said that I had intended to start with is that I very much agree with the need to engage scientists as a community. I find that the young scientists whom I've been teaching for decades are often so deeply engaged in the intellectual and technical aspects of their work that they do not find themselves able to lift their heads to engage in either ethical or policy reflections, or for that matter to read a novel.

Moreover, many -- not all, but many -- scientists engage in the profession as young people do with any profession, believing that everything they're working on has largely beneficent uses. Mildred used the term "transformational learning." That term seems a bit grand to me, but I do agree that teaching ethics, while it of course needs a cognitive component to give students categories within which to reflect, is certainly not enough.

Indeed, two days ago -- tragically, in many senses -- a Harvard student emailed in a false bomb threat so that he wouldn't have to take his final exams, and has now been arrested.

What I would say with respect to the cognitive aspect of ethics, knowing this kind of Harvard student, is that I'm sure he could have gotten an A-plus in any standard ethics course, but somehow he made a very problematic choice for himself and for our community. Thus we need both cognitive and emotional engagement.

And from the syllabus that I sent you from the course that I teach at Harvard, you can see there is a lot that is built around case studies. And what you can't see from that is that, in sections, we set up many debates, and I'm happy when the students are red-faced and arguing and really engaged. Because as I've already noted, cognition is important, but cold cognition is hardly enough. Because in addition to ethical frameworks, I always want to achieve first of all curiosity about ethical and policy concerns, and then habits of ethical reflection and worry.

You know, the curiosity is often overlooked. I think it's very important. We require in NIH grants, for example -- and I'm sure there are equivalents from other funding agencies -- coursework in the responsible conduct of research, which includes traditional bioethics and issues about research misconduct.

My impression, having taught in such courses and required such courses when I was an NIH institute director, and now thinking about additional requirements now that there is a data replication concern throughout all medical science, is that many students treat these as bitter medicine and don't take them as seriously as we would hope. And in part, this attitude is conveyed by some of the principal investigators in whose labs they work, because the time they spend learning about human subjects protections, for example, is time they're not doing experiments.

And so I'm not here to talk about the education of mentors and principal investigators, but I think we know that, just as ethically concerned medical interns and residents can often become negatively socialized in the hurly-burly of a large hospital -- where people are well meaning but often engaged in dark humor to protect themselves from the difficulties facing them -- the same kind of resocialization away from reflection and toward really focusing on the work happens, not universally to be sure, but in many venues, and is worth attention separately.

I do have one minor disagreement, as a practicing scientist, with what was said before, which is that scientists do not always think about the purpose of their work, at least not in the deepest sense, and certainly not basic scientists.

Indeed, many scientists have the ethic that their job is to uncover as much knowledge as possible and to take it wherever it leads -- and even, in some cases, to produce technologies that may have risky uses. And too often scientists and engineers think that it's somebody else's job, not theirs, to worry about unintended uses with possible negative consequences.

And this is precisely where we want to create both ethical reflection and curiosity about where something might lead. Not to get people to censor themselves or stop their deep exploration, which, in the end, is what science must do, but certainly to think about safeguards. Outside of neurobiology, some of the recent controversies over the publication of sequences of influenza strains -- and how they could be made more transmissible human-to-human -- engendered significant argument in the scientific community.

Some think that we just have to push the research as far as possible if we're ever going to understand how flu works. Others worry about synthetic biology tools becoming available even to people of mal-intent in their garages. And while decisions have been made about the existing publications, that debate is hardly, hardly finished.

Let me turn to neuroethics. I think it's important that this is not traditional bioethics, at least as I have tried to define neuroethics, not only for my students but through the Society. I have assumed that traditional bioethics can take care of issues of research ethics, human subjects protections and the like, when they apply, for example, to patients with dementia, with cognitive disability, and with mental illness, and other traditional, well-trodden bioethical issues. They're well-trodden, even if not always matters of absolute consensus.

In fact, I think that neuroethics, besides being necessary as we embark on a new era of discovery and technology, is also extremely interesting to students. Because beyond what is typical of most bioethics curricula, there are very deep philosophical issues raised, which the previous speaker actually alluded to.

Just to add to the list, there is the question of how memory works. Most of us intuitively think that memory is at least nearly veridical, that it is operating like a camcorder, and we often are quite certain that our memories are correct when we're confronted. And yet scientifically, we know they're not. What does this mean for our status, for example, as moral agents?

What of the issue of human identity, again a major issue for moral status, especially when it comes to our narrative identities? The last speaker touched on the issue of free will and moral agency. Now of course, there's the age-old metaphysical debate engaged long before modern science, for example, by St. Augustine. But as we start to understand actual mechanisms of cognition, emotion, and motivation, and become able to tamper with them, either through drugs or neural modulation, I think this will become a much sharper and more important argument.

To give you an example of something like this, and what happens in the classroom when you get students engaged: there are, in the course I teach, both neurobiologists and budding stem cell biologists interested in the brain. And there are a lot of experiments now going on in animals -- not yet very successful, but going on -- in which stem cell therapy might be used to repair a degenerating hippocampus and thus restore memory in neurodegenerative disorders.

And last year, among these very bright and able students, who are able to pick apart and appropriately criticize the animal models and the methodology of these papers, no one thought about how memories are stored, and how success using the technologies as we now imagine them might give your grandparent a perfectly functioning hippocampus but would overwrite all of their memories and identities, at least those that were still hippocampal-dependent. And this raises the whole question of, you know, who is it that we're saving with these therapies?

Now once they thought about this, of course, they became very, very engaged, and it became a gateway into much deeper discussions. You could imagine the same discussion about deep brain stimulation, both as it exists now and as it's being expanded from the treatment of Parkinson's disease to the treatment of obsessive-compulsive disorder, depression and other psychiatric disorders.

And there's also the project proposed by DARPA for the treatment of individuals with traumatic brain injury and post-traumatic stress disorder, but also personality disorders, potentially with deep brain stimulation or other forms of modulation, which could touch on our identities and aspects of ourselves that many people, again on reflection, don't even think have a physical basis. Of course, they do, and we're going to have that illustrated.

Let me just end by saying that you might think that an ethics course being offered to science concentrators would be preaching to a very small choir. In fact, when engaging issues of the kind that I've just touched on -- and that you can see in the syllabus -- this course at least has been very popular at Harvard. To keep it interactive, I've tried to limit the enrollment to 75, but this year in pre-registration there were 133 science concentrators who wanted to take the course. So that makes me optimistic about the ability to engage budding young scientists in hopefully emotionally as well as cognitively engaging ethical reflections.

Thank you.

DR. GUTMANN: Thank you. And I think we will come back to the question of engaging young scientists -- scientists at an early stage -- and truly engaging them in interesting and challenging questions, which actually helps get the basics in there, both cognitively and emotionally.

Next we hear from Dr. Pat Levitt, director of the Program in Developmental Neurogenetics at the Institute for the Developing Mind at Children's Hospital Los Angeles. Dr. Levitt is also Provost Professor in the Departments of Pediatrics, Neuroscience, Psychiatry, Psychology and Pharmacy, and W.M. Keck Chair in Neurogenetics at the Keck School of Medicine of the University of Southern California.

Dr. Levitt has served as a member of the National Advisory Mental Health Council for the National Institute of Mental Health. He's a member of the Dana Alliance for Brain Initiatives, a Fellow of the American Association for the Advancement of Science, Chair-Elect of the Neuroscience Section of the American Association for the Advancement of Science, and a member of the Institute of Medicine.

He served as senior editor for the Journal of Neuroscience and currently serves on the editorial boards of several journals, including Neuron and the Journal of Neurodevelopmental Disorders.

Welcome, Dr. Levitt.

DR. LEVITT: Thank you very much.

I also direct the Ph.D. program in neuroscience at USC.

DR. GUTMANN: You sleep two hours a night, which is not good.

DR. LEVITT: I have a granddaughter who has the same pattern of sleep that I've had for my life, and my son is angry about that.

(Laughter.)

DR. LEVITT: So I'm going to read my remarks because my family reminded me that I rarely speak extemporaneously for less than ten minutes. So thank you for the invitation to speak.

And it's interesting -- there's a common thread that you'll hear from me that you've already heard. I'm going to focus on one particular area, which I call knowledge transfer, and the professional responsibility, particularly in neuroscience, that you'll see in my comments. There are time-honored areas of ethics training that I'm not going to touch upon -- incidental clinical findings, use of subjects in research, et cetera. Every institution has a course in this; some are wonderful, some are boring.

But I think overall, at least from my perspective, there needs to be a greater sense of professional responsibility in conveying what we're doing -- the meaning of what we've referred to as translation of scientific information to the public and policy makers -- as a focus of ethics training.

So in essence, I think there's a problem in skill development and knowledge transfer in our profession, for scientists in general. And it's not just about using simpler words or shorter sentences, which is what I believed when I started in my public policy work ten years ago. It's what social scientists and linguists and cultural anthropologists address in the context of strategic framing: the challenge of conveying scientific findings accurately and fairly in the context, which we can't ignore, of dominant cultural and social frames that influence the interpretation and use of our information in society. People come to the table with a belief system. We have to recognize that as scientists and understand how to convey our information in a way that is accurate and fair in terms of how it will be interpreted.

And so there are three areas that I think demand greater emphasis in the training of students, research fellows and, I'll be honest with you, most faculty with whom I interact. First, there's the ethical challenge of conveying the promise of neuroscience discoveries leading to cures for diseases and disorders or even improving the human life experience.

In 1997, John Bruer, who was president of the James S. McDonnell Foundation, wrote a really interesting essay called "Education and the Brain: A Bridge Too Far." In that essay, John speaks of the mistaken attempts to use speculation from basic neuroscience findings to apply in the classroom and the education clinic, because the studies were neither designed to inform best practices nor sufficiently vetted scientifically to demonstrate application.

And there was this transition in the 1990s, the Decade of the Brain, which put "the bridge too far," I think, onto our collective plates. There was a sense among some that continued to expand throughout that decade, and now there are promises unkept. Neuroscience holds a special place, I think, because I believe that through our capacity to gather an unending amount of information, we will eventually discover the signature patterns of mental and physical states. Ultimately, the ability to use the collective data on genomic and brain activities will provide a fingerprint, perhaps, for illness.

And we put to our students all the time the following questions: If you have the entire genome sequence of an individual, would you know whether or not they have schizophrenia? If you could define every connection in the brain of an individual, would you be able to determine whether that person has major unipolar depression? And granted, trainees and scientists understand and recognize the complexity of the interplay between genes and environment, brain plasticity, and the staggering amount of information that will be extracted from the coming generations of technologies, data management and analysis, which we're going to hear more about today. But you must understand that the public will have a very different view. That's the reality. And we can't say it's their responsibility, that we just put the information out there.

They will not recognize the subtlety of how we interpret that information and draw conclusions when we insert disclaimers, which we do all the time, such as "may," "possibly," "perhaps." Can you recognize and emphasize in your training, through example, best ethical practices for conveying what we know, how we know it, and what one can and cannot do with this information?

Second, more now than ever, we're embracing the goal of leveraging both animal and human studies for practical application. Since the 1990s, translational science, particularly translational neuroscience, has been a core principle from which our science funding agencies have operated. And lately the principle has come under fire in discourse among scientists, some attacking it as the culprit of our current erosion in funding and publications, with journals driven to publish the next New York Times or Washington Post covered article, pushing the envelope, insisting on the use of terminology -- shared nouns -- to emphasize the importance of the next great discovery. I'm just laying out what the problems are; I don't actually necessarily believe that that is the culprit.

We talk to our trainees and junior faculty about paying careful attention to the benefits of what can be studied only in animals and the caveats against unwarranted generalizations and inappropriate applications to human health. For example, is it fair, or is it even ethical -- is it an ethical question -- to use the term "autistic mouse" when Kanner, in 1943, clearly described the constellation of symptoms that come together that we use to define the clinical population, and which we know does not coalesce in animals? That doesn't mean animal research is not valuable. I do it as part of my laboratory effort. But describing an autistic mouse has an interesting meaning to it.

What are the ethical considerations of describing or promoting one's research in this context? What are the ethical considerations in the context of a growing scientific culture in which, I strongly believe, we cherry-pick the clinical literature to support the relevance of an animal phenotype that may or may not be translatable to the human condition? The biology may be translatable, but the translation to direct impact on understanding human disease and disorder may not necessarily be. That doesn't make the research less valuable, but it's a different way for the public to think about it.

In essence, are we purposely misrepresenting research deliverables to trumpet discoveries that provide high science prominence -- journal impact, grant dollars, whatever it might be? I view this as an ethical issue that we need to address.

Third, there is a continued evolution of the concept that most brain diseases, and even other types of diseases, have a developmental etiology -- a topic that is rarely covered in ethics courses. From genetic discoveries that clearly demonstrate early risk, to epidemiology of large populations that shows the inter-generational impact of environmental factors that increase risk for mental and neurological disease, there are whole institutes now at NIH that have been dedicating substantial resources toward the goal of determining the origins of developmental risk, and then translating these research findings into prevention strategies in our communities. The paper that appeared last week on brain growth and SES is an example, and we're seeing more and more of this appear.

In 2007, I co-wrote an NIMH Council report with my partner, child psychiatrist John March. The report articulated short- and long-term goals to expand research efforts on the neurodevelopmental basis of mental illness. Keep in mind that this developmental emphasis was espoused in the 1970s, but it represented an occasional -- as I call it -- faucet drip of research compared to the current tidal wave of neuroscience research in this area now. Of course, I may be biased, because that's what I do. I'm a developmental neuroscientist. I think the emphasis is well placed in terms of important research investment and its potential impact on society.

However, have we considered sufficiently the ethics of the short- and long-term impact of neuroscience research on critical periods, for example, in defining the future of individuals who have experienced a type of toxic stress -- neglect, abuse, violence, parental substance abuse, poverty -- that dramatically increases risk for later mental and physical illness? Are we in a position to speak about the irreversibility of a developmental recipe for mental and neurological disease that has a developmental etiology?

From my own unofficial survey of neuroscience graduate programs, most standard ethics courses don't touch upon this topic, yet the public, service providers and policy makers want to know. I spend a fair amount of time talking about the science of child and brain development, and what these groups want to know is the following: What is the future for children who have experienced the heritable and non-heritable factors that increase risk for these disorders?

Conversation about stress responses has turned on the so-called permanence of epigenetics and on critical periods, which imply a time domain that has generated the metaphor "windows of opportunity." All are conceptual frameworks for our neuroscience research, but how do we address how data that emerge from these studies are interpreted and used by policy makers and society?

Finally, in 33 seconds, I want to emphasize that the gap between what we know and what we do still exists. Families, policy makers and service providers are desperate for information from science that will help them make decisions that will improve the lives of those who live in their communities.

Recently I was at Bowdoin College as part of a two-day symposium on the policy implications of child and brain development in Maine. I met with 25 brilliant undergraduates, and we talked about what I do and how a basic scientist got into policy work as part of my role as scientific director for the National Scientific Council on the Developing Child. I explained that when I speak to policy makers and legislators and business leaders, I talk about the science, current misconceptions due to the dominant cultural frames, and science policy gaps. But I stop short of recommending specific policy solutions, because I don't do research on interventions or policy.

And I can tell you that it frustrated a lot of students. What do you do, then? They didn't really understand what I did. If they don't pass legislation or do something, how do you know that you've been successful? And I explained, you know, I'm not a policy maker. But I've heard well-intentioned neuroscientists provide recommendations or promote activities or programs that they believe would be best for the community to incorporate.

We're smart -- neuroscientists are smart, if I might say so myself. But are we really equipped to make decisions among four intervention programs unless we're intervention researchers? And is it ethical for us to promote ourselves as decision makers in domains that all logic would say we have no business being in?

So it's about training the current and next generation to recognize when they are participating in building a bridge too far, which is an issue of personal and disciplinary ethics, and when they are being true to both the promise and the limitations of the science.

Thank you.

DR. GUTMANN: Thank you very much.

Reminding us that Socrates said the highest form of knowledge is knowing what you don't know. And without rehearsing them, the three speakers have taken us from understanding that teaching isn't the same as learning, and that it is important for scientists to learn, both cognitively and emotionally, the importance of ethics. And now, Dr. Levitt, it's important not only what scientists learn but what the larger community learns -- and we all have a responsibility for communicating in a way that we understand what people actually hear.

So thank you very much.

Our final speaker, our wrap-up hitter, is Dr. Paul Wolpe, the Asa Griggs Candler Professor of Bioethics, the Raymond F. Schinazi Distinguished Research Professor of Jewish Bioethics, and professor of medicine, pediatrics, psychiatry, neuroscience and biological behavior, and sociology. We certainly, in our institutions, don't go for short titles, as you can see. And I'm not finished yet: the director of the Center for Ethics at Emory University. Now that's brief.

He also serves as a senior bioethicist at the National Aeronautics and Space Administration. Dr. Wolpe is the author of over 125 articles, editorials and book chapters in sociology, medicine and bioethics, and has contributed to a variety of encyclopedias on bioethical issues. He's editor-in-chief of the American Journal of Bioethics Neuroscience, a past president of the American Society for Bioethics and Humanities, a Fellow of the Hastings Center and a Fellow of the College of Physicians of Philadelphia.

Welcome, Paul.

DR. WOLPE: Thanks, very much.

The danger of going last may be redundancy, but I hope that some of the redundancy is seen as reinforcing valuable ideas that other people have articulated. I believe there is no topic more important right now to think about, to educate ourselves about, and in some areas to actually control through legislation than the emerging powerful technologies of neuroscience.

I'm a sociologist trained in social psychiatry, and so I've always had an interest in the functioning of the brain. But I started my career working on genetics, and I had a kind of epiphany when I realized that the things that we were so concerned about in genetics -- genetic privacy, human enhancement, misuse of technology -- were decades away in genetics, at least in human beings. We're not going to be enhancing human beings genetically any time soon. But they were happening already in neuroscience.

We're already enhancing ourselves neurologically in a variety of ways and technologies are coming down the pike that will give us humans power in that area long before we'll be able to genetically enhance human beings.

Well, take the example of genetic privacy. We've worried a lot about genetic privacy over the years and have legislation on genetic privacy. But the real bottom line in genetic privacy is, first of all, I don't really care that much if you have my genome; there isn't really that much about me that you'll be able to tell. It's also possible that I could have an identical twin who shares my genome, so it isn't the genome that most defines who I am. The sense of my self, my memories, my personality, my personal quirks -- the things that my friends and family and colleagues think of as who I am -- reside in my brain, not in my genome, primarily.

And so I think that as we begin the enterprise of trying to understand and diagnose the function of that particular organ, it is crucially important that we think about it more deeply. That's why I moved my professional attention from genetics to neuroscience and tried to help found this field of neuroethics.

It's now been about a decade since the establishment of that field. It's growing tremendously -- we have an international society, we have two major journals, we have all kinds of meetings now -- but we've only really begun to scratch the surface. Most importantly, there's no ELSI project for neuroscience, no pot of millions of dollars to explore neuroscience research ethics, neuroscience's social impact, or any of the other things that this powerful technology is about to release to us, on us and for us.

No such funding has been seriously discussed, as far as I know, for the BRAIN Initiative, nor was it in the '90s with the proclamation of the Decade of the Brain. And many of the federal agencies have since dedicated a large amount of money to that pursuit.

So the first recommendation is that this committee recommend the reprioritization of the NIH, NSF and other federal agencies to take seriously the need to examine the ethics of neuroscience. It is at least as important as -- and I believe right now in some ways more important than -- the work of neuroscience and genetics, something I am very dedicated to. So I'm not in any sense trying to minimize the importance of that; it's a matter of balance.

There's no doubt in my mind that neuroscience policy is the next great challenge of technological policy in the United States. Technologies are coming down the pike that challenge our sense of agency and privacy; they'll present, and are already presenting, major conundrums in law and jurisprudence. And in fact, one of the only really major funded projects in the United States around neuroethics has been the MacArthur project on law and neuroscience.

It will give intelligence and security agencies new tools that can be easily abused, and it will increase the public's concern. And -- just an editorial comment here -- anybody who is in this field will know that my inbox is filled with emails from people around the country who are absolutely certain that they are being surveilled, that the FBI is beaming microwaves into their brains, that their brains have been manipulated.

And the truth of that aside, it is a powerful part of the public's consciousness and belief system that, as soon as security agencies and others are able to do these kinds of things, to have this kind of power, it will be immediately released on the American public. And I think that part of the reason that belief is so strong with the public is because we haven't done a good job. I am constantly telling people, we can't do these things yet.

But what about when we can? What do you say to them then?

So there's not only the appropriate conversation; there's also the concern, and in some cases paranoia, I think, around these things that we need to address. There are also those questions about state-sponsored intrusion into public life in other areas right now in our country. It will fascinate the military -- it already has; they've given a handful of lie detection devices to troops in Iraq and Afghanistan in their attempt to build technological fighters. And it will present new questions of the boundary between privacy and commercial activity in stores, and on and on and on.

So first of all, I want to say something about that challenge in relation to neuroscience education. We teach responsible conduct of research, as we've said, but there's a more powerful area, and Mildred brought this up: the exploration of what the impact is of the science people do on the greater society. We are now rewriting the book Responsible Science at the National Academies -- I'm on that committee -- and one of the things we're doing differently from the 1992 version is talking about science aspirationally, not just regulatorily. The new book should say something about what science should be, what science should aspire to.

We need to encourage scientists to be part of that larger conversation. I think in all graduate science programs, not just neuroscience, we have to challenge our students to confront big questions. There are certain questions mentors and educators of science need to revisit periodically and seriously with their charges. What is science for? What are the values I bring to my scientific work? Why did I become a scientist and why am I one now? What are the moral motivations and inclinations and principles at the heart of my scientific pursuit? How do I advance the cause of scientific progress in a way that's helpful and productive? Whom does my research serve? What are the potential impacts of my science on my field and on society as a whole? And how can my science be misused, and how can I be an advocate for its correct use?

The Dutch and European science foundations are both adopting a so-called socially responsible innovation research framework, under which researchers are expected to think about and forecast what sorts of ethical issues their science and/or technology might create. They're also charged to build an ethical dimension into their projects, to explore ideas about how to ensure that those ethical challenges are properly accommodated. Socially responsible innovation programs should be part of the recommendations of this committee, part of President Obama's BRAIN Initiative and, in fact, part of the entire science initiative of our country.

There are models of this type of program in other countries that we could look to for best practices and innovative ideas for assuring the highest level of thinking as we develop these powerful ideas and technologies.

Neuroscience has been experimenting in areas that have profound ethical implications. Brain imaging researchers are inching closer and closer to the ability to apprehend subjective thoughts, something we've written a lot about. For all of human history, without exception, any information we got from another human being we got through the peripheral nervous system, whether it was expression or language or galvanic skin response, whatever it was. And now, for the first time in human history, we can get information directly from the central nervous system, directly from the brain. This is a powerful technology.

I keep thinking we're going to hit a wall and be able to go no further, and neuroscience keeps pushing through that wall, to the point where now we can actually look into people's brains and know what word they're thinking of, what card they've chosen, what their intention is in the next few minutes. We are also developing lie detection brain imaging.

AUDIENCE PARTICIPANT: Excuse me, I have to say something.

I think that is playing God and I disagree with that. That is wrong. You cannot know what somebody is thinking and you cannot know somebody's intention. That is playing God.

DR. WOLPE: In another area of neuroscience, scientists have for the first time now linked two brains together -- two rat brains first, and then actually a human and a rat brain -- so that the activity of one brain affected the behavior and the knowledge of the second brain.

Now these are very preliminary experiments. They're not experiments that are going to be affecting the general population any time soon in any profound way. But now is the time to think about the implications, not once these kinds of technologies have reached the kind of power that they can be of concern to us in the general population.

These brain-to-brain interfaces have opened up whole new areas of possibility in communication, and we now have some forms of even remote brain communication that may be able to be used coercively. I could go into other examples.

But the point here, I think, is obvious, and that is that these are issues that are not only important for neuroscientists to think about; they're for all of us to think about. And for that reason, I think the participation of neuroscientists in this conversation is crucial. They bring not only expertise, but they also bring a note of sanity as people talk about nightmare scenarios and science fiction scenarios, and I confront that everywhere I go. Every time I talk about these issues, people are concerned about these fantasies that they have about where this technology can go and the power of it. But those fantasies are fueled by real research and by real things that are happening now in the laboratory.

So to end, I think that there are three or four things that this Commission could recommend that are crucially important. The first is the reprioritization of federal agencies so that money can be spent in the way it was for the Human Genome Project's ELSI program, so that we can begin to think about these things more systematically.

Secondly, encouraging a different orientation, as Mildred talked about, as all of us talked about, of graduate students and even of undergraduate and pre-collegiate education around issues of neuroscience, because they are increasingly important.

And then the final one is to encourage scientists to speak publicly about these topics, because they are the ones who not only have the expertise but have a right to advocate for the science itself, for the goals of the science itself, and for shaping public policy around neuroscience. I think it's very important that we begin that process before neuroscience reaches a level of do-it-yourself. And by the way, there are a fair number of do-it-yourself technologies now that people are using in their own homes to try to manipulate their own brain processes. So these are things that need to be talked about now.

Thanks.

DR. GUTMANN: Thank you. Thank you all.

I'm going to open it up to the Commission for questions. But let me take the Chair's prerogative and begin with a question: I was really struck by the point of, I think, very constructive disagreement between Mildred and Steve. Which is, Mildred, you said -- and I suspect you don't actually disagree on this, but I have a question attached to it.

You said that the definition of a professional is that a professional understands that he or she has a calling, a goal that that professional expertise is seeking. And I think it's important for us to understand that there's an important social institutional construct based on that conception, which is that professionals have significant autonomy in our society. Peer review is based on that notion of professionals understanding that they have a calling, and that they will rise to that calling and expect their colleagues to.

But most professionals have never been required to systematically examine their calling. And I think that's what Steve Hyman was referring to. Our education has become sufficiently specialized at the professional level, even though, to the credit of this country, the broad liberal arts and sciences model of undergraduate education is still the dominant model that we admire and try -- those of us who are leaders try to continue to progress on.

So my question is: given that it's a fact that most professional scientists have never been required, and are not required generally speaking, to systematically examine their calling and the goal of what they're doing, how important is it -- in neuroscience, given all of the issues that it raises about being human, and raises for the sake of good science -- for us as a Commission to think about recommendations for how that examination of the calling of neuroscience might go forward?

Mildred, do you want to -- I would actually like -- I'd like Pat and Steve also to weigh in. And Paul, yes.

DR. SOLOMON: Should I respond?

DR. GUTMANN: Yes, please.

DR. SOLOMON: The reason I use the term transformational learning, which I agree is very aspirational, is that I'm making an assertion, an aspirational assertion, that we should be thinking about this for scientists. The traditional professions, like physicians and nurses, have created groups who are physician ethicists or nurse ethicists, and in a sense I'm calling for an opportunity for there to be scientist ethicists.

And another aspect of what it means to be professional is not only the autonomy that we grant, but that there's an inherent fiduciary responsibility. And that part of what it means to be properly educated in a profession is to think about who one's constituencies are, to whom one owes obligations. I don't think we've really engaged with that for scientists. And I think that Paul Wolpe's comments speak directly to that. The large set of very reflective and important questions that he had at the end of his presentation I think are critical.

So I would argue that the Commission think about the ways in which science preparation really can engage scientists in thinking about the larger social purposes of their work.

DR. GUTMANN: Pat?

DR. LEVITT: Yeah, I agree with that last point -- I mean, there's no expectation in the current training of scientists, no obligation from a professional perspective, for them to reflect on the meaning of what they're doing. And then there's the responsibility of knowledge transfer.

The knowledge transfer training is to each other. So what do we do? We have communication courses that basically develop skill sets in talking to other scientists, other professionals. We have literally no skill training in knowledge transfer to the public, or to policy makers, or to anybody else who is in a decision-making capacity and trying to use research to develop best practices.

And I think it's traditionally thought that our obligation is to do the very best science that we can do, put it out there, and however it might be used by the public, it will be used by the public.

On the other hand, we have to be a little bit self-reflective in recognizing that we are now participating in a process in which we actually enjoy being in the public eye, right? If you look at the public coverage of science now, compared to the way it was a generation ago, when it was almost absent from newspapers and other outlets, it's changed completely, and we've promoted that.

DR. GUTMANN: Yeah.

DR. LEVITT: And so if we're going to do that, then we have to recognize that there comes with that an ethical responsibility to understand how to participate in that process.

So as I said, it's not about simpler words or shorter sentences -- saying, well, I said it simply, and if they misinterpret it, it's their problem. I think that's part of the self-reflection process about what our professional responsibilities are. It's also part of the educational process of providing us, our field, with the tools to be able to do that. It's not natural to do it. We're trained to be expert and we're trained to be narrow, and we're trained to be sophisticated in how we communicate. And that has to be adjusted.

DR. GUTMANN: Yeah. It's interesting that you use the word "trained" rather than "educated."

(Laughter.)

DR. GUTMANN: No, I -- there is a public -- science is a public trust. And even though it's not directly accountable publicly, it is a public trust. And the funding that goes to it is predicated on that idea.

DR. LEVITT: I use the word "training" because, you know, I've watched eminent scientists go through this. I watched Jim Heckman, who is very much into this area of early investment in issues around children, and who has been trained to speak about this to the public in a way that will resonate with the work that he's trying to do.

DR. GUTMANN: Yes.

DR. LEVITT: For me, it's -- I'm training somebody in my laboratory to do a certain kind of molecular biology experiment; that's training.

DR. GUTMANN: Yes.

DR. LEVITT: There is an education component to it, but it's training. And I think we have to be trained in how to communicate in the context of understanding cultural frames, understanding the use of language -- things that we just don't get exposed to at all as neuroscientists.

DR. GUTMANN: Yeah.

DR. HYMAN: I'd like to focus on something a little bit different from that, not at all in disagreement but something different. Which is the tension between intellectual freedom and appropriate attention to areas of ethical and policy concern.

Maybe the shorthand we can imagine is that scientists do feel they have a responsibility, and they might imagine it as a romanticized version of Galileo, right, who was actually a much more socially aware person than that: pursuing discovery wherever it goes, despite censoring social forces. And now, you know, we look back and say, well, how terrible that the church was censoring science. But to contemporaries, you know, he was potentially upsetting human self-understanding and the social order. And we see it as appropriate, indeed heroic, that he did so.

And I think it would be a terrible mistake to tell basic scientists that they -- and I don't think anyone's said this directly, but I think there's an implication -- should be in some way fettered because we can worry about all kinds of negative uses. We just never know where science is going ultimately. And we may fail then to discover something that would be critically helpful to a person with autism or schizophrenia or Alzheimer's disease or any of the other panoply of horrors that we don't yet know how to address well.

I think that what I'm looking for, at least when I teach -- and we'll see where the Commission comes out on this -- is not a kind of ethical education that dampens scientific curiosity and, frankly, bold exploration, but rather one that makes scientists reflective. So that when they've discovered something, or perhaps more often when they're engaged in something that's applied, some sort of technology, they understand that it does have implications, and they understand that perhaps they should pause, perhaps they should raise these concerns with appropriate professional bodies or others.

The one thing we don't have is a place for people to express their concerns; their NIH project officer or NSF project officer might not quite fit that role. But ultimately, if the goal is the healthiest outcome for society, understanding that there are always going to be risks of misusing any technology, we want to, on the one hand, make sure that scientists feel able to explore without unnecessary intellectual fetters. But by the same token, they should become deeply aware of the implications of their research and initiate the right kind of conversations, especially when it comes to applied technologies.

DR. GUTMANN: It's been --

DR. HYMAN: It's been going on --

DR. GUTMANN: Steve, let me --

DR. HYMAN: -- (inaudible) -- but I always feel that, while I'm -- I very much want to argue for ethics, I don't want to argue for narrowing the scope of scientific investigation.

I think the other thing --

DR. GUTMANN: Steve, can you stop? Stop, stop, stop, please. Okay. Because we can't make eye contact here.

DR. HYMAN: Yeah, no problem.

So then the only other thing, very quickly: I think we do a miserable job in education. We don't actually exploit the wonderful four-year, often liberal arts -- not always liberal arts -- educations that produce our scientists, but also physicians and nurses. And learning ethics late, as a required course, I think is not effective. I implied this in my opening remarks. I think something like ethical instruction requires a building process, and it really should start in undergraduate life. And in fact, if you think about the required premedical curriculum for --

DR. GUTMANN: Okay, Steve, I've got to tell you, please -- I have to ask Paul to speak. We're just on a time schedule.

DR. HYMAN: All right. I'm sorry, I'll just say, once I've been --

DR. GUTMANN: No, no, you won't. Stop please. Please.

DR. HYMAN: -- premedical requirements --

DR. GUTMANN: Okay, thank you.

One thing that Steve said I think is really important -- I mean, everything was excellent, but one thing that I really want to underline, and we've said this before as a Commission, and I think we really have to drive it home: you can't begin ethics education at the professional level. It has to begin at the undergraduate level.

And nothing that we will say as a Commission states, let alone implies, a narrowing of scientific freedom and creativity. And it is interesting that raising the issue of science as a public trust raises that scare in scientists' minds, rather than being understood as what Steve and Mildred said earlier: that science is a professional and a public calling.

And in order to understand it as a calling, and I feel that it's really important for us as a Commission to make it clear, there has to be a knowledge base to it. The same way that there has to be a knowledge base in the expertise of working in the lab, there has to be a knowledge base in the expertise of what are the ethical parameters of good science.

And that's what we would like, as a Commission, to drive forward with the principle that we have underlined and added, you know, to the Belmont principles: intellectual freedom. A very important, extremely important principle of a free society. And it raised some hackles when Vice President Biden went to China and spoke about how important intellectual freedom is to the economic engine and the engine of creativity of our country. So it's very helpful.

Paul? Briefly, thank you. I have a list of commissioners.

DR. WOLPE: Well, there are three legs of responsibility that scientists need to assume. The first is ethical responsibility for the integrity of their own research, and also for their colleagues' -- their local responsibility within institutions.

The second is a responsibility to their disciplines: for the oversight and promotion of the collective activity of their fields of expertise. They have a responsibility to shepherd that.

And the third is to recognize their social responsibility to science as a public enterprise. So I think we need to impress upon scientists the need to think about all three of those. We believe in the first, are less good at the second, and poor at the third.

DR. GUTMANN: Thank you.

Jim?

DR. WAGNER: Very quickly, I hope. Paul and Steve, I heard your comments sort of on two ends of a scale. And I want to suggest a clarification and ask your response.

Paul, you talked about the concern that we had better begin thinking -- I think that was your phrase -- about possible abuses. And then Steve, you used the word "fettered" multiple times: that, gee, if we get our thinking ahead of the application, we could find ourselves constrained. And I'm so glad that Amy brought up this notion of intellectual freedom.

But do we need to be concerned about -- I don't believe that we need to be concerned about thinking about the future on some of the fundamental principles that already exist, principles that we should espouse and remind one another of: principles of respect for persons, the appropriateness of privacy and of informed consent. Rather than imagine we're helpless about the future, or fear being fettered about future science, don't we actually set ourselves free from the fetters and actually address the future by being explicit about some of these very, very important ethical principles?

I open it to all of you, but Paul?

DR. WOLPE: I'll just say briefly, the devil is in the details in ethics. It's not so much the question of informed consent or one of these principles in the abstract; it's how you apply it to any particular technology, in any particular place. You know, the issue for scientists, for me, is not so much fettering them in any way, but having them be creative and imaginative about their own work, and try to project into the future, exactly the way that the socially responsible innovation project in Europe is trying to do: to get someone to think about how this might be applied. How might it be applied wrongly? Not to stop doing it or to fetter it, but to think about advocating for correct use, to think about potential measures, to think about other things that could mitigate any misuse. I think that's where the real issue is.

DR. WAGNER: That's where your aspiration was, also.

DR. HYMAN: I agree entirely with Paul. And I'm glad you raised this question.

DR. GUTMANN: Thank you, Steve.

I'm going to ask other Commissioners to get their questions out and maybe we'll take a few questions and then ask you to choose how you want to answer them. Because I have Dan, John and Barbara on the list, and Christine. So I'll take two at a time. Dan and John will be first, and then Barbara and Christine.

DR. SULMASY: Thanks for all of your comments, they're very helpful.

All of us agree, obviously, that education is good. But doing it effectively is the hard work. And I want to just go through a list of some of the barriers I've heard, and talk about those.

You know, the first one we heard a little about is this question of whether it is the job of scientists to reflect on the ends of their work or not, whether it is a profession, et cetera.

Second, you know, we've heard that scientists are in the best position to restrain hyperbole, but we also know there are lots of incentives for scientists to engage in hyperbole, for grant money and publicity, et cetera. Scientists and ethicists need to work together, but we also hear that ethicists are perceived as the naysayers, the people who will restrain the freedom of scientists.

Science is multi-national, so we need a multi-national ethic, but then we come up against questions of cultural relativism and the moral standing to criticize one culture versus another. We need to educate not just for knowledge but also for virtue -- how do we do that? We've heard about a hidden curriculum in science, just as there is for clinicians, and about the structure of learning -- what we typically do is have an online course for scientists, which actually makes them hate ethics.

So with all of that -- I mean, it's all great, but I want to know how we get concrete answers, so that we're not just going to put something on paper without any effectiveness.

DR. GUTMANN: John -- concrete answers, which would be great, that's what we are going to be working on. So keep that in mind.

John?

DR. ARRAS: Okay, thanks.

So -- and thank all of you for a really stimulating panel.

So I want to ask a very pragmatic, tactical question about pedagogy that follows up on Dan's many questions. And this is drawn from experience teaching bioethics in a major medical school in New York City, which eventually drove me back into teaching undergraduates.

So on the one hand, we have large classes that encompass, say, the entire first-year class of medical students. The good thing about this was that it reached everybody, right? The bad thing about it was that everybody hated it, right? Both the students and the faculty. You know, it just was not an appropriate window of opportunity for real learning. It was viewed as a hindrance and a distraction from the real stuff we were supposed to be learning.

On the other hand, some of the best educational experiences I've ever participated in were these very small groups of interns, residents, nurses, social workers, in neonatology, in geriatrics, meeting biweekly and talking about cases. Students left those sessions realizing that asking ethical questions is just a part of what we do, a part of who we are. It's a part of the expert handling of a case.

The trouble with this retail method is that it's very hit-and-miss; not everybody got the opportunity to be there. So I'm asking you to think outside the box here. You know, for me the answer was just to go back to teaching undergraduates, okay -- they actually write papers for you. But are there any emerging methods or best practices for really engaging with people in a meaningful way that reaches everybody at the same time?

DR. GUTMANN: Mildred?

DR. SOLOMON: So to Dan, no one ever said it would be easy.

DR. GUTMANN: Yeah.

DR. SOLOMON: And there are even more problems. I had the privilege of leading a program for the NIH -- Amy mentioned it in the introduction -- in which we had the hard challenge of developing strategies for teaching bioethics to tenth grade biology students, through tenth grade biology teachers who had no training in bioethics. And we spent a lot of time thinking about: is this worth doing? Can we do it in a high-quality way? At the end of that process -- and we don't have time to go into all the strategies we used -- we now have a resource that, within the first half year of its release, 12,000 teachers across the country asked for. And I have no idea how many are now using it.

One of the findings from that pilot study was that it was not fettering science, not creating fetters: many of the teachers in the pilot test we did in 50 sites across the country said that this awakened students to science who had felt uninterested in it. And the pedagogical reason I think that happened is that it wasn't training, it wasn't education. It was, as Steve actually said, creating curiosity. Putting in front of them real questions where they actually realized: I don't know what the right thing to do is. And that created an impulse to figure out, what do I think about this?

At the Hastings Center, we're taking the next step: we've designed a program for secondary school students. The first one we did, with the NIH, was really focused on curriculum development for teachers to use in their classrooms. What we're doing at the Hastings Center right now, which is really moving, is research -- we're engaging high school students in doing their own research in bioethics, with mentoring, with scaffolding. They are helped to identify a real question, right now, that our country doesn't have a good way of handling, and they're responsible for developing their own arguments and presenting them in community settings, to their parents, to community members.

We have an example of this in Summit, New Jersey, where five hundred people came together to hear ten high school students talk about a range of bioethics issues. So I'm not advocating courses, necessarily. I think it has to be multi -- anything that's big and difficult is going to require work in many different ways, through a multi-factorial strategy.

DR. HYMAN: I agree entirely with that. I think the more we worry about everybody having a uniform exposure -- and I know this wasn't really the question -- the more it leads to bureaucratization, and I think we run the risk of truly, deeply educating no one.

Henry Ford wouldn't have been very good at designing effective pedagogy, and I think some of the growing pains with most of the online courses also reflect the problem of commoditization. I think that what we just heard about, beginning it at many levels, with many different paths, gives a much greater -- you know, we have to study these things, figure out what works, how we match people.

But the key is really engagement. And I just don't think engagement is likely in these large uniform settings. And indeed, when I talked about fettering science, the concerns in the community -- and they're real and they're frequent -- come, again, not from the Belmont principles but from their bureaucratization. In certain ways, I think we all know, IRBs and IACUCs can be terrific and constructive and helpful, which doesn't mean letting people get away with things. Or they can become bureaucratized, and I think the same is true in education.

DR. GUTMANN: So on -- let me just say something that I've written on and given talks on that is out of the box, but in a way that will be familiar.

Professional ethics, which has now been integrated into almost all professional schools, certainly medical schools, was created to be taught at the graduate level in medical schools -- for example, medical ethics. I for the life of me cannot find a reason why only doctors should be taught medical ethics. And at Penn now, you know, Zeke Emanuel is teaching medical ethics to undergraduates as well as to graduate students, and it could be the same course. It's a very challenging course. And others are doing it as well.

To integrate scientific ethics and professional ethics at the undergraduate level -- so John, I don't think you made a mistake, it's --

DR. ARRAS: Oh, no. No.

DR. GUTMANN: And it's not to take it away from teaching it at the professional level, but it's going to be more effective, more stimulating if it's taught to undergraduates, who are less jaded than professionals are when they get into professional school.

So I think, just to be specific about, you know, neuroethics, given -- and the ethics of neuroscience, given the intellectually challenging issues it raises, it is a prime candidate for finding ways of introducing our undergraduates to these issues, and having them reflect on them before they get into the more specialist, more pressure-cooker atmosphere of professional training.

Pat?

DR. LEVITT: I'd say one other thing -- I think people have touched upon this in a wonderful way, and the word integration has been used a lot. And I think that's really the path forward. So I work with some people in the Rossier School, which is the School of Education at USC, who developed a neuroscience curriculum for K through twelve. And part of the curriculum -- not separate, not an appendix -- are issues that revolve around ethics in neuroscience, and sometimes the students don't even know that that's what they're dealing with.

In graduate school, now, what we're doing -- so, you know, every graduate student engages in a journal club of some sort in their discipline. And I've had meetings with faculty about doing this, where there's a component of the discussion that is not just about the quality of the science or the scientific implications, because that's where you spend most of your time -- we have an original research article, you sit around and discuss it. There are ethical considerations to discuss: the meaning of this, how will this be translated? How could you translate this in a way that would be meaningful?

So if it's an appendix, if it continues to be an appendix -- oh, I have to take this two-credit course in ethics -- it's seen as a separate universe, as opposed to an integrated universe within neuroscience.

DR. GUTMANN: Yeah.

DR. LEVITT: That's, to me, the difference.

DR. GUTMANN: It's also the difference between distribution requirements and a required course. That is, you shouldn't be able to get through as a professional without having been exposed to ethics. But that doesn't mean that there ought to be a required course.

DR. LEVITT: Well, you know, in some ways we're at a bit of cross purposes. We're all talking about the same thing in terms of integration. But when you have a training program funded by the NIH, for example, they want to see it as, you know, one of the bullets, right -- as a separate, segregated component of training, as opposed to an integrated component of training.

I've talked to the program officer about, how would you feel if we had this integrated throughout our core curriculum? And they get nervous, because how do we know it's going to -- right? So they're emphasizing, in fact, the segregation when, in fact, it should be --

DR. GUTMANN: Yeah. So that's an important point.

Let me just make another point on this, and then -- Paul, are you going to hold? No, you're going to hold, because you'll have a chance. But if there isn't funding for ethics in neuroscience, it's not going to be done well -- it may be done, but it won't be done well.

Christine and Barbara, Barbara first then Christine.

DR. ATKINSON: I wanted to go back to the public discourse piece, and particularly as it relates to getting to the policy level. Both Pat and you, Paul, talked about the role of neuroscientists in leading a public discussion, and, from Pat's side, a little bit about how far a neuroscientist could go.

I'm interested, if we were going to propose a structure or a program or a forum or some way to get that public discourse to lead the policy, if you have any ideas about who should be at the table, who should be doing it, what kind of program or whatever could be done that would help actually get the discourse to happen and to get it to move to policy?

DR. GRADY: So thank you all for wonderful presentations. But I -- my question I think kind of builds on Dan's and John's a little bit.

I was thinking about -- I think that the integration of science and ethics makes sense. What I want to hear a little bit more about is: is there any reason to privilege neuroscience over other sciences?

And then a very specific question, actually, for Millie, because you had this really nice list of suggestions, one of which was a learning community. And I just want to hear: is that anything different from what has already been discussed? Or did you have something specific in mind about that?

DR. GUTMANN: Let's begin with Paul, and then Mildred.

DR. WOLPE: A couple comments.

First of all, I taught a course on medical ethics at the University of Pennsylvania in 1992, so Zeke is following me. A long and storied history.

(Laughter.)

DR. GUTMANN: No egos, you know, in this field.

DR. WOLPE: The Center for Ethics at Penn was very active in undergraduate education. But the problem is that that was a course. Zeke's course is a course. And courses are great, but a course touches a very small number of undergraduates. One of the things we're doing is we've applied for a grant to try to create an innovative way to integrate ethics education throughout the undergraduate curriculum, so that it touches every student in almost every course -- and especially in their extra- and co-curricular life and their dormitory life. Because you can't have ethics overlaid.

We learned that in medical school. In around 2000, give or take a few years on either side, medical schools around the country redesigned their curricula. The one at Penn that I was involved with was called Curriculum 2000. And part of that redesign, broadly adopted in medical schools, was to try to figure out how to put humanism and professionalism and ethics into the medical curriculum in a way that was integrated rather than overlaid. The problem that you were talking about, John, is that you can't convey these values in a medical course if they're not modeled in the clinics, if people don't model them right.

So the real challenge is not how to create better courses, or, you know, experiential moments set apart from the experience of being a physician or scientist or undergraduate. It is how to integrate ethical conversations into the very fabric of the way in which the university operates. And that's the real challenge that we have, and that's what ethicists should be trying to figure out how to do.

And that absolutely means having buy-in by faculty -- everybody from, you know, psychology and sociology and geology and basket weaving -- so that at some point or another they say, I need to ask the question: is there an ethical component to what I'm talking about? As a learning moment for everyone, so that it becomes part of the fabric of what we do.

And then one last comment, about public discourse. Scientists in whatever field need to be part of public discourse. They don't necessarily need to lead the public discourse. My concern has been that their voices have been too absent from the public discourse, because it has been an ethic in science that those scientists who represent the sciences in public discourse are looked at askance -- there's been a certain amount of opprobrium about that. I mean, the sense that the real public scientists aren't real scientists.

And I think what we need to tell scientists is that it's an absolute responsibility of being a scientist to be part of the public discourse, and to be engaged in the public discourse. And to write op-eds, not only as explainers of their science but as advocates for their science. As citizens, they have every right to be advocates, as everyone else does.

DR. HYMAN: Can I just answer this? I find the term -- you know, that neuroethics should be "privileged" -- privileged is really a pejorative term. It's just different. Maybe neuroethics is really a bad moniker for a very different field. Because what's really different and interesting are issues of memory, identity, moral agency, and even the origins of morality -- the degree to which they're evolutionarily or culturally dictated. So it's a very different field.

And of course it's coming up now because of the BRAIN project, which will actually touch on all these things that make us peculiarly human. I think what's useful is that in much of traditional bioethics, even though people disagree, many of the rubrics are intellectually far more settled, whereas those having to do with identity and so forth are still very much in flux, partly because we keep learning new things empirically. And so I think it gives us a real opportunity to engage at least a subset of students in a very active and challenging and living area that is pushing the boundaries of ethics.

So I just think it's a different set of problems.

DR. GUTMANN: Thanks, Steve.

Pat?

DR. LEVITT: So in terms of answering Barbara's question, I think we'll disagree a little bit. I think that the public and policy makers respond well -- and this is based on my own experience, and the experience of other people around the council, who have gone around to lots of different states. I've been in a lot of states talking to a lot of policy makers over the last ten years.

One of the issues that the public gets nervous about -- and policy makers too -- is advocacy. If you're advocating for what you're doing, there is the sense that there's some skewing, some component of what you're presenting that has a specific goal of extracting something from the public or from policy makers, to advance whatever it is that you want to do.

As opposed to -- and maybe it seems silly to contrast this -- being the knowledge transfer agent. I'm on a board in Canada for a Canada-wide research network on brain and child development. There is a core devoted to this issue of knowledge transfer to the public and policy makers. There's also an ethics core as well; it's integrated into every project. And it's not about advocating for a particular position; it's about transferring knowledge in a way that resonates.

And believe it or not, people who are not trained in science can understand science. There is this sense that scientists have, and I had it too, until I started to immerse myself in this -- you know, I've never been more nervous than going in front of a group of state senators in Kansas or Oklahoma to talk about this. But the reality is that people can understand science if scientists learn how to talk about it. Not advocate for it, but to say: this is important for you to know -- that's the advocacy part -- and then, here is what we know, and here is what we don't know.

Advocacy is a bit of an issue, I think, in engaging the public.

DR. GUTMANN: So thank you all. Our time is more than up, alas. We're going to take a ten-minute break, but not before we thank -- you've really been tremendous communicators and very thoughtful. Mildred, Steve, Pat and Paul, thank you so very, very much.

(Applause.)

DR. GUTMANN: We'll readjourn in ten minutes.