Discussion about this post

Nathan Metzger:

The good ending is hard-locked behind the bad ending, unless we are afforded the opportunity to spend a very long time working very hard to solve a series of wildly difficult problems.

LambdaSaturn:

Do you think there could still be room for people doing conceptual AI safety research (like Agent Foundations)? The ban would get harder to enforce over time, and humanity would have to face those problems eventually.

Even if we do succeed with the ban, I think we still need experts ready to work on alignment as soon as possible, and the only way to create them is to have people doing the research now.

