Hi Connor. First, I appreciate your work, and your content has significantly impacted me, so thank you.
I do have one complaint, though. As a technical guy, I thought my role in this saving-humanity project would be to do the mathematics of AI safety, work that would end up robust enough to scale to smarter-than-human AI. I knew from the beginning that governance is more important, but I didn't think I had a "comparative advantage" there (I know you hate this expression). After listening to your "ASI survival handbook" lecture, I started to doubt that position.
I don't know what else I'm supposed to do, though. I applied to the "exceptional talent" posting on ControlAI's website, but they ignored me (understandably, since they have lots of applicants). Do I have to just be "one more protester", even though I'm not a citizen of any relevant country? Should I just earn-to-give to AI governance organisations? I guess it's a bit selfish to ask for this kind of personal career advice, and the reply "just be agentic about it" would be totally fair, but still.
This is a very understandable and relatable complaint, you're not alone in having it!
The true answer is that, yeah, you're right. Saving the world is (mostly) not something that has been institutionalised into low-risk, comfortable, compensated, and legible jobs. There is no simple playbook, no list of steps to follow. I wish there was.
This is one of the great promises of representative democracy. It shouldn't be up to each citizen to deal with societal-scale issues. Citizens should have a minimum set of civic duties (such as voting, and informing and pressuring their representatives), but beyond that they shouldn't HAVE to worry about the details of these complex issues; they should be able to rest comfortably knowing that "adults are in the room", that experts and policymakers will handle it.
Of course, that's not always how things work out. (Though it is worth appreciating that for many problems our ancestors had to deal with, this did happen!)
So the first thing is: one more voice, one more protester, one more vote does matter. That's the promise of democracy.
But that's an unsatisfying answer, of course, because you want to do more.
So, very concretely speaking:
- The field of AI policy is extremely money-constrained, and most of the money that is present is controlled opposition. As such, if you have a way of pushing significant funds (>$100K) into the field, whether by donating yourself or by activating sources of funding you have access to (giving to an existing org, or starting a new one yourself), that is a very useful thing to do. Feel free to get in touch with me if you want to do that!
- Because of that lack of funding, if you can't activate your own, there are very few jobs at very few orgs that are worth doing. You should apply, but you are likely out of luck if you want to do this full-time and get paid.
- As such, you will likely want to keep a day job. From there, the minimum you can do is sign up for microcommit.io, and if you want to do more than that, check out torchbearer.community.
Good luck!
Thank you for the thoughtful reply. I do not yet have a day job; I was kinda hoping to land an AI safety job or get funding for research. And I certainly do not have access to large amounts of money, so getting a job to earn-to-give is the plan.
Good luck Connor, LFG!!!
This new role seems an excellent fit for you. I am enormously heartened by the existence of good-souled people earnestly and unashamedly prioritising human flourishing in all its glimmering, messy, non-algorithmic beauty. Thank you for everything you do!
As the saying goes, “when the facts change, I change my mind. What do you do?” From my point of view, the facts have changed. Humanity is bottlenecked far more on institutional malaise, cultural nihilism (both on the left and the right) and pervasive cowardice, than it is on technical research.
What exactly makes you think this way now? Do you think there has actually been significant progress on alignment, such that focusing on it is no longer so pressing? Or is it just that getting humans coordinated on control and regulation is a better use of your time? Or some third option?
I hope you also are able to keep on posting about more esoteric stuff. Thank you for your service!
I think there has been the opposite of progress on alignment: the problem is indeed as hard as, or harder than, I thought; almost no one is even trying to make progress on it; and most who are trying are following dead-end directions.
Three reasons come to mind for why I focus on political work now:
1. This is actually the problem we need to solve: how to build institutions and a civilization that can responsibly handle and steward powerful technology. There is simply no good future without this. The ridiculous transhumanist power fantasy of "some unilateralist nerds in a basement build an 'aligned' superintelligence that (violently) overthrows the world and makes it 'good'" (sometimes called a "pivotal act") is not just a cartoonishly infeasible plan, but a cartoonishly evil one.
2. ControlAI has demonstrated real, tangible, externally verifiable progress with extremely limited resources, and I am well suited to this kind of work. I have far more grounded, verified evidence that this approach is tractable and sufficient than I have for any other, by a mile.
3. Ultimately, we are running out of time. Technical research is what you do when you are rich in time and resources. If you are time-poor, you need things that can flip (at least in principle) from "totally impossible" to "totally won" very quickly. One of the only things with this shape is politics. Everything in politics is impossible until it very suddenly isn't (and everyone then pretends they had of course always believed it was possible). It was "impossible" to lock down during COVID until, very suddenly, the entire world took completely unprecedented action from one day to the next.
--
There is much more that could be said here, of course. The more esoteric/emotional things can hopefully be deduced from my Nostos series.
Thanks for the questions and glad you enjoy the writing :)
Thank you for sharing your story and learnings. Glad to have you continuing to work towards a good future as head of ControlAI US.
We often do what works in the present until a better opportunity arises.
Connor,
Congratulations on your new role with ControlAI and the move to Washington DC.
I read your Conjecture retrospective—your willingness to examine what worked and what didn’t, and to shift direction toward institutional approaches to AI control, stands out.
Wishing you success in this next phase.
Thanks for writing this. Regarding footnote 3, are you alluding to "The Spectre", or do you have something completely different in mind? If you have something that everyone needs to hear, I would welcome any elaboration!
That's the main thing! There are also various personal (and stupid) reasons why that cluster dislikes me, but it's mostly "they are trying to build superintelligence and I think that is bad and am willing to fight them on it."
Assuming ControlAI applied for OpenPhil funding, what was the stated reason for rejection?
Connor, this is deeply saddening news. Among those who carry the "doomer" label, you and a handful of others stand out for one simple reason: substance. I have no patience for bloggers running a self-marketing operation dressed up as alignment concern. You are not that.
And nobody with any intellectual honesty can judge you harshly for a business failure. Many serious people have crossed that same line, the point where diminishing returns start to dominate the thought process and crowd out what you actually want to do: think, build, and develop ideas that matter. You are in the early morning of what can be an extraordinary career and body of contributions.
On the larger question, I believe the alignment problem may well be solvable, at least as it applies to LLMs. But I also believe their current dominance is fragile. What replaces them may be a narrower, more decentralized ecosystem, lean enough and distributed enough to be genuinely democratic, and perhaps lean enough to sidestep the existential consequences so many fear.
Keep going. The work is not finished.
Congrats Connor!!!!