From Wikipedia: "the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators... approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control."
Here is an introductory video.
| Indicator | Value |
|---|---|
| Stars | ★★★☆☆ |
| Platform | Metaculus |
| Number of forecasts | 494 |