The long read
Why we need worst-case thinking to prevent pandemics

Threats to humanity, and how we address them, define our time. Why are we still so complacent about facing up to existential risk? By Toby Ord

Fri 6 Mar 2020 06.00 GMT

The world is in the early stages of what may be the most deadly pandemic of the past 100 years. In China, thousands of people have already died; large outbreaks have begun in South Korea, Iran and Italy; and the rest of the world is bracing for impact. We do not yet know whether the final toll will be measured in thousands or hundreds of thousands. For all our advances in medicine, humanity remains much more vulnerable to pandemics than we would like to believe.

To understand our vulnerability, and to determine what steps must be taken to end it, it is useful to ask about the very worst-case scenarios. Just how bad could a pandemic be? In science fiction, we sometimes encounter the idea of a pandemic so severe that it could cause the end of civilisation, or even of humanity itself. Such a risk to humanity’s entire future is known as an existential risk. We can say with certainty that the novel coronavirus, which causes the disease known as Covid-19, does not pose such a risk. But could the next pandemic? To find out, and to put the current outbreak into greater context, let us turn to the past.

In 1347, death came to Europe. It entered through the Crimean town of Caffa, brought by the besieging Mongol army. Fleeing merchants unwittingly carried it back to Italy. From there, it spread to France, Spain and England. Then up as far as Norway and across the rest of Europe – all the way to Moscow. Within six years, the Black Death had taken the continent.

Tens of millions fell gravely ill, their bodies succumbing to the disease in different ways. Some bore swollen buboes on their necks, armpits and thighs; some had their flesh turn black from haemorrhaging beneath the skin; some coughed blood from the necrotic inflammation of their throats and lungs. All forms involved fever, exhaustion and an intolerable stench from the material that exuded from the body.

There were so many dead that mass graves needed to be dug and, even then, cemeteries ran out of room for the bodies. The Black Death devastated Europe. In those six years, between a quarter and half of all Europeans were killed. The Middle East was ravaged, too, with the plague killing about one in three Egyptians and Syrians. And it may have also laid waste to parts of central Asia, India and China. Due to the scant records of the 14th century, we will never know the true toll, but our best estimates are that somewhere between 5% and 14% of all the world’s people were killed, in what may have been the greatest catastrophe humanity has seen.

The Black Death was not the only biological disaster to scar human history. It was not even the only great bubonic plague. In AD541 the plague of Justinian struck the Byzantine empire. Over three years, it took the lives of roughly 3% of the world’s people.

When Europeans reached the Americas in 1492, the two populations exposed each other to completely novel diseases. Over thousands of years, each population had built up resistance to its own set of diseases, but was extremely susceptible to those of the other. The American peoples got by far the worse end of the exchange, through diseases such as measles, influenza and, especially, smallpox.

During the next 100 years, a combination of invasion and disease took an immense toll – one whose scale may never be known, due to great uncertainty about the size of the pre-existing population. We can’t rule out the loss of more than 90% of the population of the Americas during that century, though the number could also be much lower. And it is very difficult to tease out how much of this should be attributed to war and occupation, rather than disease. At a rough estimate, as many as 10% of the world’s people may have been killed.

Centuries later, the world had become so interconnected that a truly global pandemic was possible. Towards the end of the first world war, a devastating strain of influenza, known as the 1918 flu or Spanish flu, spread to six continents, and even remote Pacific islands. About a third of the world’s population were infected and between 3% and 6% were killed. This death toll outstripped that of the first world war.

Yet even events like these fall short of being a threat to humanity’s long-term potential. In the great bubonic plagues we saw civilisation in the affected areas falter, but recover. The regional 25%-50% death rate was not enough to precipitate a continent-wide collapse. It changed the relative fortunes of empires, and may have substantially altered the course of history, but if anything, it gives us reason to believe that human civilisation is likely to make it through future events with similar death rates, even if they were global in scale.

The Spanish flu pandemic was remarkable in having very little apparent effect on the world’s development, despite its global reach. It looks as if it was lost in the wake of the first world war, which, despite a smaller death toll, seems to have had a much larger effect on the course of history.

The full history of humanity covers at least 200,000 years. While we have scarce records for most of these 2,000 centuries, there is a key lesson we can draw from the sheer length of our past. The chance of human extinction from natural catastrophes of any kind must have been very low for most of this time – or we would not have made it so far. But could these risks have changed? Might the past provide false comfort?

Our population now is a thousand times greater than it was for most of human history, so there are vastly more opportunities for new human diseases to originate. And our farming practices have created vast numbers of animals living in unhealthy conditions in close proximity to humans. This increases the risk, as many major diseases originate in animals before crossing over to humans. Examples include HIV (chimpanzees), Ebola (bats), Sars (probably civets or bats) and influenza (usually pigs or birds). We do not yet know where Covid-19 came from, though it is very similar to coronaviruses found in bats and pangolins. Evidence suggests that diseases are crossing over from animals into human populations at an increasing rate.

Modern civilisation may also make it much easier for a pandemic to spread. The higher density of people living together in cities increases the number of people each of us may infect. Rapid long-distance transport greatly increases the distance pathogens can spread, reducing the degrees of separation between any two people. Moreover, we are no longer divided into isolated populations as we were for most of the past 10,000 years.

Together, these effects suggest that we should expect new pandemics to arise more often, to spread more quickly, and to reach a higher percentage of the world’s people.
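To make this reasoning concrete, here is a minimal epidemic sketch in Python – my illustration, not anything from the book – using the standard SIR model. The transmission and recovery rates are arbitrary assumptions; the point is only that a higher transmission rate, a crude stand-in for denser cities and faster travel, produces an outbreak that both peaks sooner and ultimately infects a larger share of the population.

    # Minimal discrete-time SIR model (one step per day).
    # All parameter values are arbitrary assumptions for illustration.

    def sir_outbreak(beta, gamma=0.2, population=1_000_000, days=730):
        """Run a simple SIR epidemic; return (peak day, fraction ever infected)."""
        s, i, r = population - 1.0, 1.0, 0.0   # susceptible, infected, recovered
        peak_day, peak_i = 0, i
        for day in range(days):
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            if i > peak_i:
                peak_day, peak_i = day, i
        return peak_day, (population - s) / population

    for beta in (0.3, 0.5):   # basic reproduction number R0 = beta/gamma: 1.5 vs 2.5
        peak, attack = sir_outbreak(beta)
        print(f"beta={beta}: epidemic peaks on day {peak}, "
              f"{attack:.0%} of the population ever infected")

With these assumed numbers, raising the transmission rate from 0.3 to 0.5 moves the peak months earlier and raises the final attack fraction from roughly 60% to roughly 90% – a toy version of the dynamic described above.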

But we have also changed the world in ways that offer protection. We have a healthier population; improved sanitation and hygiene; preventative and curative medicine; and a scientific understanding of disease. Perhaps most importantly, we have public health bodies to facilitate global communication and coordination in the face of new outbreaks. We have seen the benefits of this protection through the dramatic decline of endemic infectious disease over the past century (though we can’t be sure pandemics will obey the same trend). Finally, we have spread to a range of locations and environments unprecedented for any mammalian species. This offers special protection from extinction events, because it requires the pathogen to be able to flourish in a vast range of environments and to reach exceptionally isolated populations such as uncontacted tribes, Antarctic researchers and nuclear submarine crews.

It is hard to know whether these combined effects have increased or decreased the existential risk from pandemics. This uncertainty is ultimately bad news: we were previously sitting on a powerful argument that the risk was tiny; now we are not.

We have seen the indirect ways that our actions aid and abet the origination and spread of pandemics. But what about cases where we have a much more direct hand in the process – where we deliberately use, improve or create the pathogens?

Our understanding and control of pathogens is very recent. Just 200 years ago, we didn’t even understand the basic cause of pandemics – a leading theory in the west held that disease was produced by a kind of bad air, known as miasma. In just two centuries, we discovered that it was caused by a wide variety of microscopic agents, and we worked out how to grow them in the lab, to breed them for different traits, to sequence their genomes, to implant new genes and to create entire functional viruses from their written code.

This progress is continuing at a rapid pace. The past 10 years have seen major qualitative breakthroughs, such as the use of the gene editing tool Crispr to efficiently insert new genetic sequences into a genome, and the use of gene drives to efficiently replace populations of natural organisms in the wild with genetically modified versions.

This progress in biotechnology seems unlikely to fizzle out anytime soon: there are no insurmountable challenges looming; no fundamental laws blocking further developments. But it would be optimistic to assume that this uncharted new terrain holds only familiar dangers.

To start with, let’s set aside the risks from malicious intent, and consider only the risks that can arise from well-intentioned research. Most scientific and medical research poses a negligible risk of harm at the scale we are considering. But there is a small fraction that uses live pathogens of kinds known to threaten global harm. These include the agents that cause the Spanish flu, smallpox, Sars and H5N1 avian flu. And a small part of this research involves making strains of these pathogens that pose even more danger than the natural types, by increasing their transmissibility, lethality or resistance to vaccination or treatment.

In 2012, a Dutch virologist, Ron Fouchier, published details of an experiment on the recent H5N1 strain of bird flu. This strain was extremely deadly, killing an estimated 60% of the humans it infected – a lethality far beyond even that of the Spanish flu. Yet its inability to pass from human to human had so far prevented a pandemic. Fouchier wanted to find out whether (and how) H5N1 could naturally develop this ability. He passed the disease through a series of 10 ferrets, which are commonly used as a model for how influenza affects humans. By the time it passed to the final ferret, his strain of H5N1 had become directly transmissible between mammals.

The work caused fierce controversy. Much of this was focused on the information contained in his work. The US National Science Advisory Board for Biosecurity ruled that his paper had to be stripped of some of its technical details before publication, to limit the ability of bad actors to cause a pandemic. And the Dutch government claimed that the research broke EU law on exporting information useful for bioweapons. But it is not the possibility of misuse that concerns me here. Fouchier’s research provides a clear example of well-intentioned scientists enhancing the destructive capabilities of pathogens known to threaten global catastrophe.

Of course, such experiments are done in secure labs, with stringent safety standards. It is highly unlikely that in any particular case the enhanced pathogens would escape into the wild. But just how unlikely? Unfortunately, we don’t have good data, due to a lack of transparency about incident and escape rates. This prevents society from making well-informed decisions balancing the risks and benefits of this research, and it limits the ability of labs to learn from each other’s incidents.

Security for highly dangerous pathogens has been deeply flawed, and remains insufficient. In 2001, Britain was struck by a devastating outbreak of foot-and-mouth disease in livestock. Six million animals were killed in an attempt to halt its spread, and the economic damage totalled £8bn. Then, in 2007, there was another outbreak, which was traced to a lab working on the disease. Foot-and-mouth was considered a highest-category pathogen, and required the highest level of biosecurity. Yet the virus escaped from a badly maintained pipe, leaking into the groundwater at the facility. After an investigation, the lab’s licence was renewed – only for another leak to occur two weeks later.

In my view, this track record of escapes shows that even the highest biosafety level (BSL-4) is insufficient for working on pathogens that pose a risk of global pandemics on the scale of the Spanish flu or worse. That the last publicly acknowledged outbreak from a BSL-4 facility occurred just 13 years ago is not good enough. It doesn’t matter whether this is from insufficient standards, inspections, operations or penalties. What matters is the poor track record in the field, made worse by a lack of transparency and accountability. With current BSL-4 labs, an escape of a pandemic pathogen is only a matter of time.

One of the most exciting trends in biotechnology is its rapid democratisation – the speed at which cutting-edge techniques can be adopted by students and amateurs. When a new breakthrough is achieved, the pool of people with the talent, training, resources and patience to reproduce it rapidly expands: from a handful of the world’s top biologists, to people with PhDs in the field, to millions of people with undergraduate-level biology.

The Human Genome Project was the largest ever scientific collaboration in biology. It took 13 years and $500m to produce the full DNA sequence of the human genome. Just 15 years later, a genome can be sequenced for under $1,000, and within a single hour. The reverse process has become much easier, too: online DNA synthesis services allow anyone to upload a DNA sequence of their choice and have it constructed and shipped to their address. While still expensive, the price of synthesis has fallen by a factor of 1,000 in the past two decades, and continues to drop. The first ever uses of Crispr and gene drives were the biotechnology achievements of the decade. But within just two years, each of these technologies was used successfully by bright students participating in science competitions.
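As a rough back-of-the-envelope calculation – my arithmetic, using only the figures quoted above, and treating the Human Genome Project’s $500m project cost as if it were a per-genome cost – these numbers imply startling annual rates of price decline:

    # Implied annual price declines from the figures in the text (rough).
    import math

    def annual_decline(cost_ratio, years):
        """Factor by which price fell per year, and the implied halving time."""
        factor = cost_ratio ** (1 / years)
        halving_years = math.log(2) / math.log(factor)
        return factor, halving_years

    seq_factor, seq_halving = annual_decline(500_000_000 / 1_000, 15)   # sequencing
    syn_factor, syn_halving = annual_decline(1_000, 20)                 # synthesis
    print(f"sequencing: ~{seq_factor:.1f}x cheaper per year "
          f"(halving every ~{seq_halving:.1f} years)")
    print(f"synthesis:  ~{syn_factor:.2f}x cheaper per year "
          f"(halving every ~{syn_halving:.1f} years)")

On these assumptions, sequencing prices halved roughly every 10 months, and synthesis prices roughly every two years – both far outpacing most other technologies.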

Such democratisation promises to fuel a boom of entrepreneurial biotechnology. But since biotechnology can be misused to lethal effect, democratisation also means proliferation. As the pool of people with access to a technique grows, so does the chance it contains someone with malign intent.

People with the motivation to wreak global destruction are mercifully rare. But they exist. Perhaps the best example is the Aum Shinrikyo cult in Japan, active between 1984 and 1995, which sought to bring about the destruction of humanity. It attracted several thousand members, including people with advanced skills in chemistry and biology. And it demonstrated that its aims were more than mere misanthropic ideation: it launched multiple lethal attacks using VX and sarin gas, killing more than 20 people and injuring thousands, and it attempted, unsuccessfully, to weaponise anthrax. What happens when the circle of people able to create a global pandemic becomes wide enough to include members of such a group? Or members of a terrorist organisation or rogue state that could try to build an omnicidal weapon for the purposes of extortion or deterrence?

The main candidate for biological existential risk in the coming decades thus stems from technology – particularly the risk of misuse by states or small groups. But this is not a case in which the world is blissfully unaware of the risks. In 1955, Bertrand Russell wrote to Einstein of the danger of extinction from biowarfare. And, in 1969, the possibility was raised by the American Nobel laureate in medicine, Joshua Lederberg: “As a scientist I am profoundly concerned about the continued involvement of the United States and other nations in the development of biological warfare. This process puts the very future of human life on earth in serious peril.”

In response to such warnings, we have already begun national and international efforts to protect humanity. There is action through public health and international conventions, and self-regulation by biotechnology companies and the scientific community. Are they adequate?

National and international work in public health offers some protection from engineered pandemics, and its existing infrastructure could be adapted to better address them. Yet even for existing dangers this protection is uneven and under-provided.

Despite its importance, public health is underfunded worldwide, and poorer countries remain vulnerable to being overwhelmed by outbreaks. Biotechnology companies are working to limit the dark side of the democratisation of their field. For example, unrestricted DNA synthesis would help bad actors overcome a major hurdle in creating extremely deadly pathogens. It would allow them to get access to the DNA of controlled pathogens such as smallpox (whose genome is readily available online) and to create DNA with modifications to make the pathogen more dangerous. Therefore, many synthesis companies make voluntary efforts to manage this risk, screening their orders for dangerous sequences. But the screening methods are imperfect, and they only cover about 80% of orders. There is significant room for improving this process, and a strong case for making screening mandatory.
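To see why such screening is imperfect, here is a deliberately naive sketch in Python – my illustration, not any company’s actual system. The hazard list and sequences below are made-up placeholders, and real screening relies on fuzzy homology searches against curated databases rather than the exact matching shown here.

    # Naive DNA synthesis order screening: exact window matching only.
    # The "database" holds placeholder sequences, not real genes.

    HAZARD_SEQUENCES = {
        "example_hazard_1": "ATGCGTACGTTAGC",   # made-up placeholder sequence
    }

    def screen_order(order_seq, window=12):
        """Flag any hazard with a `window`-base fragment appearing exactly."""
        hits = []
        order_seq = order_seq.upper()
        for name, hazard in HAZARD_SEQUENCES.items():
            for start in range(len(hazard) - window + 1):
                if hazard[start:start + window] in order_seq:
                    hits.append(name)
                    break
        return hits

    # Exact matching catches a verbatim fragment of the listed hazard...
    print(screen_order("TTATGCGTACGTTAGCAA"))   # ['example_hazard_1']
    # ...but a single substituted base in the middle slips through, which is
    # one reason real screening must tolerate mismatches.
    print(screen_order("TTATGCGTAAGTTAGCAA"))   # []

The second order differs from the listed hazard by a single base yet passes unflagged – a toy version of the gap real screening must close, even before considering the roughly 20% of orders that are not screened at all.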

We might also look to the scientific community for careful management of biological risks. Many of the dangerous advances usable by states and small groups have come from open science. And we’ve seen that science produces substantial accident risk. The scientific community has tried to regulate its dangerous research, but with limited success. There are a variety of reasons why this is extremely hard, including difficulty in knowing where to draw the line, lack of central authorities to unify practice, a culture of openness and freedom to pursue whatever is of interest, and the rapid pace of science outpacing that of governance. It may be possible for the scientific community to overcome these challenges and provide strong management of global risks, but it would require a willingness to accept serious changes to its culture and governance – such as treating the security around biotechnology more like that around nuclear power. And the scientific community would need to find this willingness before catastrophe strikes.

Threats to humanity, and how we address them, define our time. The advent of nuclear weapons posed a real risk of human extinction in the 20th century. There is strong reason to believe the risk will be higher this century, and increasing with each century that technological progress continues. Because these anthropogenic risks outstrip all natural risks combined, they set the clock on how long humanity has left to pull back from the brink.

I am not claiming that extinction is the inevitable conclusion of scientific progress, or even the most likely outcome. What I am claiming is that there has been a robust trend towards increases in the power of humanity, which has reached a point where we pose a serious risk to our own existence. How we react to this risk is up to us. Nor am I arguing against technology. Technology has proved itself immensely valuable in improving the human condition.

The problem is not so much an excess of technology as a lack of wisdom. Carl Sagan put this especially well: “Many of the dangers we face indeed arise from science and technology – but, more fundamentally, because we have become powerful without becoming commensurately wise. The world-altering powers that technology has delivered into our hands now require a degree of consideration and foresight that has never before been asked of us.”

Because we cannot come back from extinction, we cannot wait until a threat strikes before acting – we must be proactive. And because gaining wisdom takes time, we need to start now.

I think that we are likely to make it through this period. Not because the challenges are small, but because we will rise to them. The very fact that these risks stem from human action shows us that human action can address them. Defeatism would be both unwarranted and counterproductive – a self-fulfilling prophecy. Instead, we must address these challenges head-on with clear and rigorous thinking, guided by a positive vision of the long-term future we are trying to protect.

This is an edited extract from The Precipice: Existential Risk and the Future of Humanity by Toby Ord, published by Bloomsbury and available at guardianbookshop.com
