Should We Be Playing God?
That question was posed to a group of students, staff, alumni and members of the neighboring community at a daylong class during Ursinus’s inaugural Minerva Term, a continuing education opportunity held over Alumni Weekend.
The course was co-taught by Rebecca Lyczak, a professor of biology, and Paul Stern, a professor of politics. They debated the topic for Ursinus Magazine.
Genome editing is a technique that a scientist could use to change a person’s DNA. But should it be used to prevent human embryos from inheriting a genetic disease? There is no easy answer.
To grasp the ramifications, it’s important to first understand the process, called CRISPR. A scientist would insert into a fertilized embryo a guide RNA that matches the mutation she wants to fix, as well as a protein called Cas9. Working together, they find the mutation and cut the DNA. The scientist also adds a template piece of DNA carrying the correction for the mutation. The cell then uses that new DNA as a template to repair the DNA sequence.
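The find-cut-repair sequence described above can be sketched, purely as an analogy, in a few lines of Python. This is a toy model, not real bioinformatics: the function name, sequences, and the treatment of DNA as a text string are all invented for illustration.

```python
# Toy analogy only: models the CRISPR steps as find-cut-repair
# on a DNA string. All names here are illustrative inventions,
# not real genome-editing software.

def crispr_edit(dna: str, guide: str, template: str) -> str:
    """Locate the sequence matched by the guide, 'cut' it out,
    and let the template supply the corrected sequence."""
    cut_site = dna.find(guide)   # the guide RNA locates the mutation
    if cut_site == -1:
        return dna               # no match found: nothing to edit
    # Cas9 'cuts' at the match; the cell repairs using the template
    return dna[:cut_site] + template + dna[cut_site + len(guide):]

# A mutant sequence where "TTT" stands in for the disease mutation
# and "GGG" for the corrected sequence:
mutant = "ACGTTTACG"
print(crispr_edit(mutant, guide="TTT", template="GGG"))  # → ACGGGGACG
```

The analogy also hints at the safety worry raised below: a guide that matches more than one site in the string would edit places it was never meant to touch.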
If you were a parent worried that your child would inherit a genetic disease—like cystic fibrosis or sickle cell anemia, for example—this method could potentially eliminate it. But it’s not without controversy. Are you making only the change you intend, or are other genes being altered that could be passed on to future generations? The DNA must be changed in a one-cell embryo, before it begins to divide, to ensure that every cell of the resulting embryo carries the corrected sequence. This means any modifications could then be passed down. So even if the technology is proven safe, you’ve changed genetics for generations to come. Where do you draw the line for acceptable use?
No one thought it would be done in humans at this time, but we’ve arrived at that reality with the recent announcement that twin girls have been born after a scientist used CRISPR to edit one of their genes. This makes a discussion of acceptable use paramount. The scientific community has been debating these issues and is working to develop ethical guidelines for genome editing in humans. One possible application is to use it as a therapeutic treatment for a person who already has a genetic disorder, which would affect only the person being treated.
I do think there will be enough support, eventually, to use the technology to “cure” genetic diseases in embryos—as long as it is proven safe—focusing first on diseases that have no alternative treatments. One concern is that, once it is approved for this use, people will want to edit their embryos for enhancements, like higher IQ or physical strength.
I’m in favor of using it as a treatment or therapy for existing genetic conditions, but as a scientist, I’m concerned about manipulating DNA in single-cell embryos. There is nothing wrong with scientists admitting that there are things we can’t yet understand, such as the long-term impact of changing genomes across generations. It’s foolish to assume that fixing a mutation now won’t have some greater effect in generations to come.
When considering the politics and ethics of such new developments, it’s important to look back at the founders of modern science, authors such as Francis Bacon and René Descartes, to understand the arguments and the impetus for what we’re doing today.
These thinkers explicitly rejected the classical view that the goal of inquiry is, above all, to satisfy the human desire to know. Bacon and Descartes criticized this approach as fruitless, especially insofar as its focus was on endlessly controversial questions concerning the human good or the ultimate origin and purpose of the universe. Knowledge should instead be for the sake of utility; it should, in Bacon’s famous phrase, be directed to “the relief of man’s estate.” By this he meant that the goal of knowledge-seeking should be to address those needs that all humans have: to be free of pain and disease, to live more comfortably, and, most basically, to live longer.
These aspirations have shaped our world for several hundred years. But precisely because they have been so thoroughly fulfilled, our power to alter nature having grown exponentially, the question they pose has only become more acute: How should we use the vast power we have acquired? The events of the 20th century, when it became painfully clear that scientific progress need not go hand in hand with moral progress, have made this question even more urgent.
In some scientific circles, the feeling is simply to be bold and see what happens. Yet many express reservations about safety, while others make a deeper objection, one that persists even—or especially—once concerns about safety are answered: is it right to change ourselves in the proposed ways? Do we know what’s good for humans with sufficient clarity to make these judgments? Is it advisable to seek to be “better than human”? And it’s not at all clear that there’s a sufficiently bright line between therapy and enhancement to guide these decisions.
The more success we have in the lab, the more urgent and important these debates become. The National Academy of Sciences and the National Academy of Medicine have collaborated on recommended guidelines for how to use the technology in humans. But the guidelines themselves illustrate the problem. At one point, the goal of gene editing is put in terms of “health.” Elsewhere, the goal is said to be “human well-being.” It’s unclear where health ends and well-being takes over.
The guidelines suggest another thorny issue in their call for any regulations to be developed with a view to cultural differences. It would seem that such regulations, to be effective, would require global buy-in. Who rightly makes such a decision? Scientists should, no doubt, be involved. The decision can’t be theirs alone, if only because most scientists have no special training regarding these issues. But, finally, non-scientists must also have their say because it’s a decision that bears on all of us.