Will global-catastrophic-risk-focused evaluation of certain AI systems by accredited bodies become mandatory in the US before 2035?

Metaculus
★★★☆☆
62%
Likely
Yes

Question description

The evaluation of newly developed AI systems before deployment by organizations specializing in this task has been proposed as a strategy for mitigating catastrophic risks posed by such systems [1]. This idea has gained traction – GPT-4 was evaluated for power-seeking tendencies and potentially catastrophic capabilities by the AI risk assessment organization ARC Evals [2].

As of March 2023, evaluations of this kind are carried out on a purely voluntary basis, and, particularly under conditions of an AI race, some actors might be tempted to forgo them. However, with the passage of the Global Catastrophic Risk Management Act, the US government is showing increased interest in mitigating catastrophic risks [3], and it seems possible that such evaluations may at some point become required by US law.

Indicators

Stars: ★★★☆☆
Platform: Metaculus
Number of forecasts: 53

Capture


Last updated: 2024-10-01

Embed

<iframe src="https://metaforecast.org/questions/embed/metaculus-15615" height="600" width="600" frameborder="0"></iframe>