There has been extensive debate in the AI safety community about whether AGI will cause human extinction.* This question asks how likely that is.
(*The debate has been more nuanced than this: Some think that advanced AI that is highly capable in a narrow set of domains—i.e., not generally intelligent enough to qualify as AGI—could cause human extinction. Meanwhile, others believe that AI will only become extinction-level dangerous once it is “superintelligent”—that is, well beyond AGI-level. For simplicity, though, we ask only about AGI in this question.)
Indicator | Value |
---|---|
Stars | ★★★☆☆ |
Platform | Metaculus |
Number of forecasts | 102 |
<iframe src="https://metaforecast.org/questions/embed/metaculus-26244" height="600" width="600" frameborder="0"></iframe>