AI Safety
Most AI safety research today is of two sorts: “what if we build an angry God?” and “can we make the thing say Heil Hitler?” Neither is very important, because in the first place we’re pretty unlikely to build a God, and in the second place, who cares?