The AGI Control Problem was not on my list of existential risks—alongside post-modernism, nuclear war, and natural catastrophic events—until I discovered that people are willing to offer cash prizes for a theoretical approach to solving the shutdown problem in AI.