Personal update: EA entrepreneurship, mental health, and what's next
tl;dr In the last 6 months I started a forecasting org, got fairly depressed and decided it was best to step down indefinitely, and am now figuring out what to do next. I note some lessons I’m taking away and my future plans.
The High
In December of last year, I decided I wanted to do something besides software engineering. I realized that I was more excited to read, write, think, etc. than to write and debug code. I also felt that the main bottleneck to progress in longtermist domains, particularly AI safety, was strategic clarity, and software didn't seem to be the main barrier to achieving that. I deliberated about it for a bit but decided I should give it a shot.
I quit my job at Ought in January and immediately began working on a proposal for a new forecasting project aimed at crushing the bottlenecks to impact from crowd forecasting that I wrote about here. I wrote a proposal with Misha Yagudin (to be part-time cofounder) and recruited a talented friend, Aaron Ho, to be technical cofounder. I also considered some research options, but ultimately decided on trying to start a new project/org. I was pretty excited about it; it seemed like a nicely ambitious way to test my fit for entrepreneurship, and we got some encouraging feedback as we iterated on the proposal.
Through mid-to-late April, everything was going pretty great for Sage (the name we decided on for the new org). I was mostly really enjoying it and felt confident we could do something great. We had:
- Wrapped up the Impactful Forecasting Prize, the first pilot of our ideas, which didn’t get as many entries as we hoped but had a few high-quality submissions.
- Made an alpha version of our forecasting platform, which we used to host the nuclear risk forecasts I helped organize and write up.
- Made substantial progress on writing AI risk forecasting questions relevant to governance for a pilot tournament, working with Matthijs Maas.
- Run a forecasting workshop at EAGx Oxford (which I led); Aaron and I also attended EAG London.
- Gotten a grant from the FTX Future Fund to create a pilot platform and run pilots of paying top forecasters to predict on longtermist-relevant questions.
The Low
I started to have consistent issues with my mental health in late April, related to losing conviction in the value of the pilots we were working on. We did hit some difficulties in the pilots, and things could have been moving faster, but nothing that warranted too strong a reaction; the important thing should have been to learn and improve, not to expect the first try to succeed perfectly. But rather than charting a course through the difficulties, I started to get anxious about the issues we were facing. This fairly quickly extended to pessimism about Sage and lower self-confidence, and I fell into feedback loops of low productivity and bad thought patterns.
I struggled with depression for much of the past few months. I was getting very little done and wasn’t having a good time. At first I thought it might go away fairly quickly, e.g. after a few good days of making progress, and that perhaps this was just fairly common among founders. But day after day I continued bad habits, both in thought patterns and in productivity / lack of focus. I tried exercising, antidepressants, and taking days off, but nothing helped much. I had trouble making even small decisions, but the one I agonized over most was what to do about myself and Sage. I felt pretty shitty but didn’t want to give up so soon after starting (failing fast given new evidence seems good, but this felt a bit too fast, and due to a fairly embarrassing form of evidence re: my mental state and stability).
After talking it over with Aaron and Misha, I eventually decided to step down indefinitely; they’re excited to keep working on Sage and on overcoming the bottlenecks we’ve encountered. I’ll continue to help as a very part-time advisor as I can.
I’m still working on figuring out and addressing the root causes of my mental health issues, but I think this was the best choice because (a) it was hard to imagine myself leading Sage to great success in anything close to the state I’d been in for 2.5 months, and (b) it seemed good for me to try reducing the forcefulness of the obligations I felt from my work[1]. And in fact, reducing obligations has helped some.
Lessons
Some personal lessons I’m taking away from this (they may or may not generalize to others; don’t take one data point too seriously):
- I probably jumped too quickly to my next thing after quitting Ought. I took pretty much no time off, and I also took little time off after college before starting an internship and then a full-time role at Ought. I may have some burnout, and more reflection might have led to a different decision.
- I’m updating fairly negatively on my fit for entrepreneurship, and very negatively on my fit for entrepreneurship when I suspect I’m not very emotionally stable (I could imagine starting another thing at some point, but probably not until I’ve had at least a year of better mental health).
- Relatedly, I tend to get overly pessimistic about projects I work on in general (I previously thought that if I had ~full autonomy over what I worked on, this wouldn’t be as much of an issue). I should watch out for this and proactively work on maintaining a healthier attitude.
- I’ve been working remotely in places where I didn’t have much of an in-person friend group. This was probably pretty bad for my mental health, and I intend to fix it.
What next?
I’m intuitively pessimistic about Tetlock-style crowd forecasting for the time being; on the other hand, I’m fascinated by the ongoing debates around AI alignment[2], and it also seems very important to make progress on them. I previously worked on using AI to improve epistemics at Ought (btw, I think it would be awesome to see more “AI-based cognitive aids”, as the Future Fund describes them; I’m particularly excited about longtermist/alignment research aids), so it’s only natural that I try working on the epistemics of AI :p
I want to further develop my views on the space, the key cruxes, and how to make progress (e.g. distilling and/or giving my takes on core disagreements in the reviews of Carlsmith’s report, or the MIRI conversations). I’ll work at a fairly chill pace and take at least a week off soon. At some point I’ll re-evaluate whether I want to apply for grants and/or jobs, try something else, return to Sage full-time, etc. I might be open ~immediately to contract work that seems especially exciting, but I’ll be hesitant to commit to anything long-term until I’m more confident about my mental health.
By the way, I’m visiting the Bay through this weekend and may be back indefinitely soonish; reach out if you’re interested in hanging out! Feel free to also reach out if you’d like to hop on a call about anything related to my experience (EA entrepreneurship, mental health, forecasting, AI epistemics, etc.). And more than ever, I’d love anonymous feedback here.
Comment on the EA Forum shortform here
Notes
From Lorien’s depression page: “By far the most powerful treatment for externally-caused depression is GETTING AWAY FROM THE DEPRESSING THING” ↩︎
Some of my favorite illustrations of the wild distribution of views on plausibly the most important topic to make progress on right now are the table here and this chart. ↩︎