Conjecture: A Retrospective
Four years and on to a new chapter!
Four years ago, I started a company. That company’s name was Conjecture.
It was ambitious, audacious, at times crazy. It was the best four years of my life, and I have learned a lot. But now, sadly, this chapter draws to a close.
I always really appreciate when other people who have done ambitious things share their experiences: what happened, what went well or wrong, what they would have done differently, what they learned.
So, in that spirit, here is some history, along with reflections on four years of Conjecture!
History
Part 0: 2021 - EleutherAI and Founding
Conjecture grew out of my desire to build something more, something bigger, better, more ambitious, after EleutherAI. We had done a lot of impressive things, including building some of the very first fully open source LLMs.[1]
But the anarchic, volunteer-driven nature of EleutherAI was hitting its limits, and we needed more in order to do what I had always wanted to do: help build a better world with safe, controllable AI.
The stars aligned when I was approached by our first investor, Nat, and we discussed whether I had ever considered starting a company. I had, in fact, been harassed nonstop for almost 18 months by my friend Gabriel to start a company with him, and with my ambitions hitting the limits of what EleutherAI could do, the timing couldn’t have been better.
With my good friend and technical genius Sid recruited, the initial trio was complete, and along with a small group of other EleutherAI veterans, Conjecture was born.
Part 1: 2022 - Try Everything
Conjecture officially kicked off March 1st, 2022, with me and Sid moving to the UK that very day. Our goal was as nebulous as it was ambitious: Solve technical AI safety, or die trying.
And so, we took the first year to try as many different approaches as we could, as quickly as possible. We built cutting-edge LLM infrastructure, did interpretability experiments, explored the epistemology of alignment, ran a research incubator, investigated just about every alignment proposal under the sun, pioneered Simulator Theory, and much, much more.
Ultimately, our conclusion was: All of these approaches are terrible.
No one has a plan, no one is making any meaningful progress towards anything that even resembles alignment. So we went back to the drawing board.
Alignment is way too hard, maybe impossible. What is a goal that would be meaningful and achievable?
In the fall/winter of 2022 came the first breakthrough. Our target would be boundedness: The ability to know the bounds of a system, what a system cannot do. If we could predictably build AIs that we know how to bound, this would give us the fundamental primitive necessary to build a new paradigm of controllable AI.
We named our approach to achieving boundedness Cognitive Emulation (“CoEm”).
Part 2: 2023 - Rocky Transition
CoEm as a research agenda was far outside the Overton window of the time,[2] and although Gabe and I were fully bought in on the vision, the team still wanted to keep some diversification.
And so throughout 2023, we continued, bit by bit, to cull unproductive research projects, while also making our first attempts at commercial products. During this time, we also built our first state-of-the-art LLMs.
This transition wasn’t easy, and frustration grew within the research team over mounting setbacks and failures.
Conjecture came to a crossroads: All in on CoEm, or something else?
This debate culminated in significant turnover and strategic disagreement, which marked a transition point for Conjecture. It was heartbreaking to see many good people, and friends, leave, but it was the right decision for the company.
We had more ambitious plans in mind, and needed to give them all we had.
Part 3: 2024 - CoEm
2024 was the year the company finally started to focus. I would say this is without a doubt the period in which we produced our most scientifically novel and interesting work.
We were dedicated to the CoEm agenda. If we could demonstrate a novel way to build AI that was bounded yet powerful, in which we could know the capabilities of an AI system before it was built or deployed, then we would have the technical foundations not just for a killer product, but also for a “safe by construction” regulatory regime: a vastly economically beneficial form of regulation that still effectively addresses the existential risk from superintelligence.
With this approach, it would become possible to demonstrate that one’s AI cannot be superintelligent, and is therefore safe to use, while AIs for which this cannot be demonstrated could simply be banned. A win-win for everyone…except the people carelessly gambling with our lives.
Over the summer, we made several noteworthy technical breakthroughs, and in winter this culminated in the public demonstration of Tactics and our roadmap. I am personally still very proud of that roadmap; it is one of my favorite things I produced at Conjecture.
Novel scientific and engineering research is hard, and the rest of the field was advancing at breakneck speed, but it seemed we finally had a handle, a way out…
Part 4: 2025 - Product Pivot
But ultimately, it was too little, too late.
Coming into 2025, we were faced with an incredibly hard choice. We had made progress on CoEm, yes, and we had a concrete roadmap forward.
But the economics just didn’t add up. Doing research on frontier LLMs is incredibly expensive work, and we were still very far from the level of commercialization that would allow us to raise the kinds of mega-rounds other AI players were raising.
And so, we made the hard choice to pivot from research to pure-play product work.
We targeted a number of growing markets with little well-developed competition, and had our first real successes in the commercial space. We gained our first real B2C customers in the early summer, and then branched out towards B2B partnerships.
Unfortunately, in the late summer and early autumn, the competition really took off, and we quickly found ourselves overtaken by well-funded, extremely dedicated teams with more experience in our targeted niches.
So, for one last hurrah that December, we decided on one final push with the remaining team: two weeks of the most intense work we could give, to see if we could catch up, to see if we still had a way forward.
But sadly, it was not to be, and the results fell short of the benchmark we had set for ourselves.
Part 5: 2026 - The End?
Overall, we are very happy to have seriously tried technical AI safety at Conjecture, and to have developed CoEm as far as we did. It was extremely useful, and directly informs our views on ASI and xrisk to this day. In another world, it could have changed a lot.
The product work was an attempt to make Conjecture make sense after we stopped believing in technical AI safety. It didn’t pan out; such is life.
Lessons Learnt
In retrospect, what would I do differently? Many, many things! I’ve learnt so much; there were so many obvious mistakes that could have been fixed, but hindsight is always 20/20.
Things that were definitely mistakes and that I would now do differently:
Fire more. This is by far my biggest mistake and regret. I was way, way too hesitant to fire people, instead getting dragged into agonizing, month-long strategy disagreements. This cost us so much, over and over. I should have just asserted the strategy more directly and fired anyone who was unwilling to pull their weight. There are many cases where I tried to make things work even when it was clear months or even years ahead of time that things were not going to work out, to neither my nor the employee’s benefit. I would now be far more ruthless in cutting things short.
Focus more. There was way, way too much split attention. There should have been fewer projects with a smaller team putting 100% of their attention and focus into them.
(Micro-) Manage more. I put way too much trust in people to manage themselves. From early on, I should have been much stricter in requiring written reports, clear communication of what everyone was doing, which KPIs we were working towards, clear deadlines, etc.
Be more adversarial and use more authority. I extended way too much good faith and patience to people. Too often I did not treat sabotage/FUD as what it was, looking for compromise or peaceful solutions instead of just asserting authority and moving on.
Focus more on money. Money is great; it can be exchanged for goods and services. I should have been much more ruthless in optimizing for making money; this would also have had many good downstream effects, like getting people to focus on real things rather than useless academic bullshit.
Hire fewer PhDs. Hiring researchers and PhDs turned out to be consistently pretty terrible. They were chronically academia-brained and had weird status games and incentives that had ~nothing to do with actually trying to win.
Communicate more clearly to the public. We sucked at marketing and sales. We had a lot of great ideas and great people, and we failed to communicate them well. In retrospect, as CEO, I would have put more effort into this.
Burn bridges with EA earlier. There are many nice people in EA, and nice things to be said about it, but ultimately it’s a movement of transhumanist weirdo cultists who lie constantly and see no moral problem with this because they’re “utilitarian.” I severed ties pretty quickly (~end of year 1), but it should have been even earlier.[3]
Things that might or might not have been mistakes:
Not founding in the USA. I’m conflicted on this one. I think London ultimately treated us well. We got good talent, we raised significant amounts of capital, and we were insulated from the SF brainworms (EA, accelerationism, etc.) to a degree that let us develop a quite unique and grounded view of AI and AI safety. If we had been able to withstand the brainworms in SF, it probably would have been better to be there. If not, being in London was 100% the right choice.
Being more mission-aligned vs. mercenary. There was an ever-present tradeoff between hiring people who were more aligned with the mission (in particular on AI xrisk) and people who were more competent and mercenary. We ended up more on the mission-aligned end of the spectrum, and we had some good and some bad outcomes as a result. We also had some good and some bad outcomes with the more mercenary employees. Overall, it remains unclear whether we should have been more mercenary, or even more mission-aligned. It feels like we ended up in a bit of an awkward in-between.
Things that were not mistakes:
Betting on AI and safety/control. It’s pretty clear to me that we were on to something big, something that the world wanted and needed, and that no one else was going to build. My predictions for AI were very accurate and ahead of their time, and AI control remains an under-addressed problem. While we could have exploited this way more, ultimately I am happy with our prescience.
Taking a shot with Conjecture, even if it ultimately failed. Best 4 years of my life! We had a real shot, it didn’t work out this time around, but I’m ready to roll the dice again.
The Future
So what is next for Conjecture, and for its alumni?
Ultimately, I came into technology not just because I like technology (though I do), but because I wanted to make the world a better place. To build a better world, for people, for everyone.
This is still what drives me. I continue to think technology plays an important role in a better tomorrow, but it’s not the only thing, or even the most pressing thing, at least at this point in time.
As the saying goes, “when the facts change, I change my mind. What do you do?” From my point of view, the facts have changed. Humanity is bottlenecked far more on institutional malaise, cultural nihilism (both on the left and the right) and pervasive cowardice, than it is on technical research.
—
While Conjecture’s 2025 didn’t pan out the way I had hoped, something dramatic and unexpected happened next door: ControlAI pivoted in early 2025 to the DIP (Direct Institutional Plan), a new strategy of simple, straightforward, and honest civic action.
And its success has been astounding.
Over 100 lawmakers have supported ControlAI’s campaign to call for binding regulation on superintelligence. I was recently invited by the Canadian House of Commons to testify on the risk of superintelligence. All of this was unimaginable a year ago.
The problem with ASI is not that ASI is killing us. ASI doesn’t exist, yet. It’s that we are allowing unaccountable corporations to build things that threaten the lives of us and everyone we love.
Putting an end to this is a national (and international) security problem, not a technical one.
ControlAI has shown a powerful way forward on civic action, and I think I could provide a lot of value to this approach.
As such, I will be joining ControlAI as Director of ControlAI US. I will be based in Washington, DC, where I will lead efforts to openly, democratically, and honestly inform lawmakers about the risks of ASI development, and what can be done to ensure long-term human safety and flourishing.
If there is one thing that I learned from ControlAI’s open, honest advocacy in the UK, it’s that most politicians are decent people trying to do the right thing. They might not know what the right thing is, they might be being lied to, and yes, some of them really are evil bastards.
But by far most of them are decent people under enormous pressure, being shouted at from all sides, trying their best with the resources they have to figure out what the right thing to do is.
I want to help these people to find and do what is right.
—
I want to build a better world, for everyone.
There is simply no good future in which we don’t have strong, just, trustworthy institutions that can responsibly steward powerful technology, such as AI.
Historically, progress has been downstream of two major forces: Technological progress (writing, steam engines, electricity; the scientists and engineers) and institutional/social progress (better laws, democracy, free markets and regulation, the limited liability corporation, human rights; the statesmen and humanists).
We as a civilization have lately been doing a poor job of finding a way for these two forces to work together productively. We have made a lot of technological progress, but have not found a way to replicate the successes of the Founding Fathers and the Enlightenment.
What would the Founding Fathers look like if they had had our technology? What amazing new society could they have built? What does a good future look like that is sincere, hopeful, and that one would actually want to live in?
It’s sure as hell neither luddism/degrowth nor whatever anti-human dystopia the accelerationists are trying to sell.
There is a dire need for a better vision, a vision of a future that is humane (good for humans) and human (achievable by humans). And any such vision requires strong, just government and statecraft.
Conclusion
In conclusion, all I can really say, from the bottom of my heart, is thank you! To everyone who came along with me on this crazy ride.
To Nat, who believed in me before anyone else did.
To all of our other investors and sponsors, who allowed all of this to happen in the first place.
To my cofounders, without whom I could have never made it this far.
To all my employees, partners, collaborators and friends made along the way. It was an honor and a pleasure to work with you, and I hope our paths cross again.
I hope to see many of you on the next Adventure. I will be moving to the US, to Washington DC. If you’re in the area, give me a shout!
So long, Conjecture.
To another adventure, another year.
May it not be the last!
- Connor
1. Not just open weight!
2. And still is today! Though less so, as seen for example through the similar Safeguarded AI project.
3. And downstream of our not-so-great comms, it’s to this day not even super legible to everyone that me and (core) EA have been basically mortal enemies for years at this point!