We create something that’s more intelligent than humanity in the next 100 years

X-risk estimates
★★☆☆☆
Yes: 50% (About Even)

Question description

Actual estimate: ~50%

Basically, you can look at my [estimate that the existential risk from AI in the next 100 years is] 10% as, there’s about a 50% chance that we create something that’s more intelligent than humanity this century. And then there’s only an 80% chance that we manage to survive that transition, being in charge of our future. If you put that together, you get a 10% chance that’s the time where we lost control of the future in a negative way.

Toby Ord: With that number, I’ve spent a lot of time thinking about this. Actually, my first degree was in computer science, and I’ve been involved in artificial intelligence for a long time, although it’s not what I did my PhD on. But, if you ask the typical AI expert’s view of the chance that we develop smarter than human AGI, artificial general intelligence, this century is about 50%. If you survey the public, which has been done, it’s about 50%. So, my 50% is both based on the information I know actually about what’s going on in AI, and also is in line with all of the relevant outside views. It feels difficult to have a wildly different number on that. The onus would be on the other person.
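The decomposition in the quote above can be checked directly; the 20% in the second factor is the complement of Ord's 80% chance of surviving the transition:

```latex
P(\text{lose control})
  = P(\text{AGI this century}) \times P(\text{fail transition} \mid \text{AGI})
  = 0.5 \times 0.2
  = 0.1
```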

Indicators

Stars: ★★☆☆☆
Platform: X-risk estimates

Capture

Resizable preview of the question card: title, 50% (About Even), the opening of the estimate excerpt, stars (★★☆☆☆), and platform (X-risk estimates).

Last updated: 2026-01-17

Embed

<iframe src="https://metaforecast.org/questions/embed/xrisk-597f2accf5" height="600" width="600" frameborder="0"></iframe>