Wheel of Existential Risk
Choose an existential risk.
Toby Ord estimates in The Precipice that the probability of an existential catastrophe in the next 100 years is about 1 in 6, or 16.67%.
What is existential risk?
In the 20th century, humanity entered an era in which, for the first time, we could completely destroy our own civilization. Extinction would not only be catastrophic for the people who suffer its direct consequences; it would also wipe out the possibility of the millions of future generations who would otherwise come to exist. There is a growing movement of people and communities who view safeguarding humanity’s future as the top priority for our civilization.
Many people don’t yet think that our future is at risk. Over 30% of Americans surveyed on the topic estimated the risk of human extinction to be lower than 1 in 10 million. However, researchers who study these issues put existential risks about a thousand times higher. Most of these risks are human-caused, which also means that preventing these catastrophes is largely up to us.
Read more about existential risks: https://80000hours.org/articles/existential-risks/
Spending comparison
The chart compares existential risks by their estimated likelihood and the funding currently allocated to reducing them (a rough sketch of the underlying data shape is given below).
Hover over the bubbles to see information about both high and low funding estimates.
Data source: 80,000 Hours problem profiles
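To make the chart’s contents concrete, here is a minimal sketch in Python of the kind of data such a bubble chart juxtaposes. The field names are assumptions, the probabilities are Toby Ord’s estimates as quoted on this page, and the funding fields are left as placeholders because the real low and high figures live in the 80,000 Hours problem profiles and are not reproduced here.

```python
# Illustrative sketch only: field names are assumed, probabilities are
# Toby Ord's estimates as quoted on this page, and funding values are
# placeholders to be filled from the 80,000 Hours problem profiles.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RiskBubble:
    name: str
    probability_100y: float                    # estimated chance of existential catastrophe this century
    funding_low_usd: Optional[float] = None    # low end of estimated annual spending (placeholder)
    funding_high_usd: Optional[float] = None   # high end of estimated annual spending (placeholder)


bubbles = [
    RiskBubble("Unaligned AI", 1 / 10),
    RiskBubble("Engineered pandemics", 1 / 30),
    RiskBubble("Runaway climate change", 1 / 1_000),
    RiskBubble("Nuclear war", 1 / 1_000),
]

for b in bubbles:
    print(f"{b.name}: {b.probability_100y:.2%} chance in the next 100 years")
```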
What can I do?
- Find out more about existential risks
- Give more effectively
- Follow the work of organisations working on safeguarding humanity’s future:
- Join a community of people working towards solving humanity’s most pressing problems: effectivealtruism.org
Toby Ord estimates in The Precipice that the probability of an existential catastrophe as a result of unaligned AI in the next 100 years is about 1 in 10, or 10%.
Why is this an existential risk?
Artificial intelligence (AI) today can perform narrow tasks like driving a car or recognizing faces. However, researchers are working towards creating artificial general intelligence (AGI), which would outperform humans at most, if not all, cognitive tasks. Although such a system could have a tremendously positive impact (e.g. curing diseases or helping solve complex problems), highly intelligent AI could also have large negative effects, possibly even leading to human extinction.
Two scenarios are thought to be most likely:
- In the first scenario, AI is deliberately designed to kill. Autonomous weapons, for example, could cause mass casualties and could be built to be extremely difficult to turn off, making it easy for humans to lose control of them.
- In the second scenario, the AI is designed to perform a beneficial task, but because we cannot fully align its goals with our own, it becomes destructive in its attempt to reach that goal.
Spending comparison
Despite currently being estimated as the largest existential risk, preventing potential damage from artificial intelligence is neglected, and relatively little money is spent on it. As the chart shows, even by a low estimate, 1,000 to 10,000 times more is spent on reducing climate change risks than on reducing AI risks; a toy calculation below illustrates the scale of that gap.
Hover over the bubbles to see information about both high and low funding estimates.
Data source: 80,000 Hours problem profiles
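As a purely hypothetical illustration of what such a ratio means (the round numbers below are not the real estimates, which are in the 80,000 Hours problem profiles):

```python
# Toy calculation with purely hypothetical round numbers; the real low and
# high spending estimates are in the 80,000 Hours problem profiles.
climate_spending_usd_per_year = 1_000_000_000  # hypothetical: $1 billion per year
ai_spending_usd_per_year = 1_000_000           # hypothetical: $1 million per year

ratio = climate_spending_usd_per_year / ai_spending_usd_per_year
print(f"Hypothetical ratio: {ratio:,.0f}x")  # 1,000x, the low end of the quoted range
```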
What is being done about this?
Much of what is being done, and what needs to be done, to reduce AI risk involves research. Many organisations carry out technical research as well as research on AI policy and strategy. For example, the Future of Life Institute, together with labs like DeepMind, OpenAI, and CHAI, is researching techniques for building AI systems aligned with human values. The Future of Life Institute also researches ways in which humanity can navigate the transition to AGI systems.
What can I do?
- Find out more about AI risks
- Give more effectively
- Follow the work of organisations working on safeguarding humanity’s future:
- Join a community of people working towards solving humanity’s most pressing problems: effectivealtruism.org
Toby Ord estimates in The Precipice that the probability of an existential catastrophe as a result of engineered pandemics in the next 100 years is about 1 in 30, or 3.33%.
Why is this an existential risk?
Pandemics like the bubonic plague, the 1918 influenza pandemic and, most recently, COVID-19 have all caused large-scale global catastrophes with millions of deaths. We will likely see more pandemics in the future. However, it is unlikely that a natural pandemic could threaten extinction. Rapid advances in biotechnology make it much more likely that artificial biological risks, such as engineered pandemics, pose the greater danger to humanity.
Artificially created pathogens could be modified to enhance their danger, e.g. by making them more deadly, more infectious or more resistant to vaccines.
The more available biotechnology becomes, the higher the risk that highly dangerous pathogens are produced, either by accident or on purpose.
Spending comparison
Reducing existential risk from engineered pandemics is neglected compared with the funding allocated to preventing risks from climate change and nuclear war. Yet engineered pandemics are estimated to pose a greater threat to humanity’s survival.
Hover over the bubbles to see information about both high and low funding estimates.
Data source: 80,000 Hours problem profiles
What can I do?
- Find out more about biological risks and engineered pandemics
- Give more effectively
- Follow the work of organisations working on safeguarding humanity’s future:
- Join a community of people working towards solving humanity’s most pressing problems: effectivealtruism.org
Toby Ord estimates in The Precipice that the probability of an existential catastrophe as a result of runaway climate change in the next 100 years is about 1 in 1,000, or 0.1%.
Why is this an existential risk?
Even if countries follow their commitments under the Paris Agreement to reduce emissions, there is still an estimated 50% chance of global temperatures rising more than 3.5°C by the year 2100. There are smaller, but still worrying, chances that the temperature could increase even more. The higher the temperature, the more harm it will cause. More extreme scenarios (warming of 6°C or higher) would lead to significant crop failures and large water shortages, resulting in millions of deaths and large-scale migration.
There is also a small possibility that climate change could lead to the irreversible outcome of human extinction, for example if Earth became so hot that it was uninhabitable. Climate change could also increase other existential risks, for instance by creating problems that lead to national and international conflict.
Spending comparison
Climate change is the least neglected existential risk, even though other existential risks are estimated to be more likely. As the chart shows, even by a low estimate, 1,000 to 10,000 times more is spent on reducing climate change risks than on reducing AI risks.
Hover over the bubbles to see information about both high and low funding estimates.
Data source: 80,000 Hours problem profiles
What can I do?
- Find out more about extreme risks from climate change
- Give more effectively
- Follow the work of organisations working on safeguarding humanity’s future:
- Join a community of people working towards solving humanity’s most pressing problems: effectivealtruism.org
Toby Ord estimates in The Precipice that the probability of an existential catastrophe as a result of nuclear war in the next 100 years is about 1 in 1,000, or 0.1%.
Why is this an existential risk?
Nuclear weapons have the potential to directly kill hundreds of millions of people, and possibly billions through the long-term effects of nuclear war.
There are several worrying examples from history in which powerful countries like the USA and Russia have come very close to starting a nuclear war, either by accident or deliberately.
The chance of nuclear war in the next 200 years is estimated to be between 2% and 20%. There is also a chance, albeit a small one, that a nuclear war would lead to human extinction.
Spending comparison
Existential risk from nuclear war receives up to 1,000 times more funding than more neglected but more likely risks such as unaligned AI and engineered pandemics. Nevertheless, nuclear war risk prevention is a much more neglected field than climate change risk prevention.
Hover over the bubbles to see information about both high and low funding estimates.
Data source: 80,000 Hours problem profiles
What can I do?
- Find out more about nuclear security
- Give more effectively
- Follow the work of organisations working on safeguarding humanity’s future:
- Join a community of people working towards solving humanity’s most pressing problems: effectivealtruism.org