2019-09-17
Martin Rees

Rapid advances in biotechnology demand additional regulations to keep experiments safe, control the spread of potentially dangerous knowledge, and police the ethics of how new learning is applied. But effective worldwide enforcement of such rules will be virtually impossible – and that is a potentially terrifying prospect.

Biomedical advances in recent decades have been hugely beneficial – most of all for the world’s poor, whose life expectancy has increased dramatically. But the future looks more dangerous. Although continued innovation will further improve people’s lives, it will also give rise to new threats, and sharpen some ethical dilemmas concerning human life itself.

For starters, some scientists are looking into ever more extreme ways of enabling people to live longer. But although we would almost certainly welcome an extended, healthy lifespan, many of us would not want our lives prolonged once our quality of life or prognosis dipped below a certain threshold. We dread clinging on in the grip of, say, dementia, and being a drain on resources and on the sympathy of others.

Medical progress is also blurring the transition between life and death. Today, death is normally taken to mean “brain death,” when all measurable signs of brain activity cease. But now there are proposals to restart the heart artificially after “brain death,” in order to keep transplantable organs “fresh” for longer.

Such a step would add to the moral ambiguity of transplant surgery. Already, for example, unscrupulous “agents” are persuading people in less developed countries to sell organs that will then be resold at a much higher price for the benefit of wealthy potential recipients.

These ambiguities, and the shortage of organ donors, will only increase. One priority, therefore, must be to make xenotransplantation – harvesting organs from pigs or other animals for human use – routine and safe. An even better option, although further off, could be 3D printing of replacement organs, using similar techniques to those currently being developed to make artificial meat.

Advances in microbiology may also prove to be a double-edged sword. True, better diagnostics, vaccines, and antibiotics should help to sustain health, control disease, and contain pandemics. But this very progress has sparked a dangerous evolutionary counter-attack by the pathogens themselves, with bacteria becoming resistant to the antibiotics used to suppress them.

This growing resistance has already led to a resurgence in tuberculosis. Without new antibiotics, the risks posed by untreatable postoperative infections will rise back to where they were a century ago. Preventing the overuse of existing antibiotics – including in American cattle – and incentivizing the development of new treatments are thus urgent priorities, in both the short and the long term.

And yet, there are also risks associated with the race to develop improved vaccines. In 2011, researchers in the Netherlands and the United States demonstrated that it was surprisingly simple to make the H5N1 influenza virus both more virulent and more transmissible. Some argued that staying a step ahead of natural mutations would make it easier to produce vaccines in short order. But critics of the experiments pointed to the increased risk of dangerous viruses being released unintentionally, or of bioterrorists gaining access to new techniques.

Rapid innovation in biotech demands that we explore regulations to keep experiments safe, control the spread of potentially dangerous knowledge, and police the ethics of how new techniques are being applied. But effective worldwide enforcement of such rules would be virtually impossible. If something can be done, then someone, somewhere, will do it. That is a potentially terrifying prospect.

Whereas producing a nuclear weapon requires elaborate special-purpose technology, biotech involves small-scale, dual-use equipment. In fact, biohacking is an increasingly popular hobby and competitive game. Because our world has become so interconnected, the magnitude of the worst potential bio-catastrophes is greater than ever. Yet far too many people are in denial about this.

Today, a natural pandemic would have a far greater social impact than in times past. Mid-fourteenth-century Europeans, for example, were understandably a fatalistic lot, and villages continued to function even when the Black Death killed half their inhabitants. But these days, the feeling of entitlement in many developed countries is so strong that social order would collapse as soon as a pandemic overwhelmed the health-care system.

Nor is it scaremongering to highlight the human risks of bioerror or bioterror. After all, the spread of an artificially released pathogen can be neither predicted nor controlled. That fact inhibits the use of bioweapons by governments, or even by terrorist groups with specific aims. But an unbalanced loner with biotech expertise would not necessarily feel so constrained if he or she believed that there were too many humans on the planet.

Both bioerror and bioterror are possible within the next 10 to 15 years. And the risk will become even greater in the longer term once it becomes possible to design and synthesize viruses. The ultimate nightmare would be a highly lethal bioweapon that has the transmissibility of the common cold.

Yet perhaps the greatest dilemma concerns human beings themselves. At some point in the future, genetic modification and cyborg technologies could make humans mentally and physically malleable. Moreover, such evolution – a kind of secular “intelligent design” – would take only centuries, in contrast to the thousands of centuries needed for Darwinian evolution.

That really would be a game changer. Today, when we admire the literature and artifacts that have survived from antiquity, we feel an affinity across thousands of years with those ancient artists and their civilizations. “Human nature” has not changed for millennia.

But there is no reason to assume that the dominant intelligences a few centuries from now will have any emotional resonance with us, even though they may have an algorithmic understanding of how we behaved. Will they even be recognizably human? Or, will electronic entities have taken over the world by then? It is anyone’s guess.

* Martin Rees, a cosmologist and astrophysicist, has been Britain’s Astronomer Royal since 1995.
