8 Comments
Greg G:

Does the world need to be set up like that? It's not clear to me that it does.

Connor Leahy:

I don't think it does, but if we want an alternative, we do have to actually put in the (enormous) effort to build and enforce the alternative.

Greg G:

It seems like we have to create, or recreate, a viable alternative. One already exists for kids and retired people. I think how much self-worth we derive from our jobs could, and probably should, look pretty dumb in hindsight.

Tyler:

Wait, so you’re saying I will be dead AND I won’t have the power vested in me by my assistant regional manager position? Occupy AWS.

Tomás Bjartur:

He’s back!

Kurt Pieper:

I have a feeling something flew over my head here. I don't see a big gap between "AI can do most jobs" and "AI is more powerful than all humans", and I don't think you do either?

Connor Leahy:

Yes, I said as much in the first few sentences. This is simply a fair assessment of why people might (and should) care about this topic, independent of ASI x-risk.

Ken Z:

Connor — I think your concerns are very valid.

If current trajectories hold, the risks you’re pointing to are real.

I do think there's still a path forward, but it likely requires a shift at the architectural level.

Not alignment as something we train toward, but alignment enforced as a rule of function — where systems are structurally bound to their directives and cannot drift beyond them.

At the same time, allowing bounded exploration within controlled environments — so the system can reason and learn, but never outside of governed constraints.