Chance that AI, through “adversarial optimization against humans only”, will cause existential catastrophe, conditional on there not being “additional intervention by longtermists” (or perhaps “no intervention from longtermists”)

Platform: X-risk estimates
Stars: ★★☆☆☆
Yes: 10% (Unlikely)

Question description

Actual estimate: ~10%

This is my interpretation of some comments by Rohin Shah that may not have been meant to be taken very literally. I think he updated this estimate to ~15% in 2020, due to pessimism about discontinuous scenarios (https://www.lesswrong.com/posts/TdwpN484eTbPSvZkm/rohin-shah-on-reasons-for-ai-optimism?commentId=n577gwGB3vRpwkBmj). Rohin also discusses his estimates here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/

Indicators

Indicator | Value
Stars | ★★☆☆☆
Platform | X-risk estimates

Capture

Resizable preview of the question card (question title, Yes: 10% "Unlikely", ★★☆☆☆, X-risk estimates, and a truncated copy of the question description). Last updated: 2025-11-21.

Embed

<iframe src="https://metaforecast.org/questions/embed/xrisk-93136e3d78" height="600" width="600" frameborder="0"></iframe>

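As a rough sketch of how the snippet above might be used, here is a minimal standalone HTML page wrapping the same embed URL in an iframe. The page title, the iframe title, and the border styling are illustrative choices, not anything Metaforecast requires; the 600x600 size simply mirrors the defaults in the snippet above.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Embedded Metaforecast question</title>
  </head>
  <body>
    <!-- Question card served by Metaforecast; the src is the embed URL given above. -->
    <!-- height/width mirror the 600x600 defaults from the snippet; adjust to fit the host page. -->
    <iframe
      src="https://metaforecast.org/questions/embed/xrisk-93136e3d78"
      height="600"
      width="600"
      style="border: 0;"
      title="Metaforecast question embed"
    ></iframe>
  </body>
</html>

Here the border is suppressed with an inline style rather than the older frameborder attribute, which gives the same visual result in current browsers.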