Will global-catastrophic-risk-focused evaluation of certain AI systems by accredited bodies become mandatory in the US before 2035?
Question description
The evaluation of newly developed AI systems before deployment by organizations specializing in this task has been proposed as a strategy for mitigating catastrophic risks posed by such systems [1]. This idea has gained traction – GPT-4 was evaluated for power-seeking tendencies and potentially catastrophic capabilities by AI risk assessment organization ARC Evals [2].
As of March 2023, evaluations of this kind are carried out on a purely voluntary basis, and, particularly under the competitive pressure of an AI race, some actors might be tempted to forgo them. However, with the passage of the Global Catastrophic Risk Management Act, the US government has shown increased interest in mitigating catastrophic risks [3], and it seems possible that such evaluations may at some point become required by US law.
Indicators
Indicator | Value |
---|---|
Stars | ★★★☆☆ |
Platform | Metaculus |
Number of forecasts | 53 |
Embed
<iframe src="https://metaforecast.org/questions/embed/metaculus-15615" height="600" width="600" frameborder="0" />