I haven’t really written about my life or future plans anywhere yet, so I figured I would write a post describing what I’ve been up to and what I’m going to do in the future.
Past
In September 2022 (the day I turned 18), I started working remotely as a software engineer at a startup, where I primarily built PostgreSQL database extensions in Rust (though I ended up working on several other things as well). I worked there for three years, until December 2025, when I quit so I could focus on AI safety and on learning more about the world.
At that job, especially during my last year there, I found it really hard to care about and concentrate on what I was doing. There were a lot of things I wanted to do in the world, and I didn’t feel like I could accomplish them while working there. Writing software to manage embedding vectors in PostgreSQL is fun, but ultimately pretty low-impact. I ended up having days where I couldn’t bring myself to get anything done.
This year so far
I was in London, UK in January for ARENA (an AI safety upskilling program), where I learned a lot about various technical AI safety topics. Spending six hours pair programming every day was a new experience for me, and I enjoyed it. I ended up learning quite a bit, and even though all of the material is available online, I don’t think I would have gotten through it without the structured in-person experience.
After ARENA I spent about a month in Montreal, where I mostly handled a bunch of boring miscellaneous tasks and caught up with some people I know there. There was no particular reason I had to be in Montreal, but you can get fairly cheap temporary housing there, and I had wanted to explore the city some more ever since I visited in 2024 for RustConf.
This month (April) I’m doing Inkhaven, a writing retreat at Lighthaven (in Berkeley, CA) where you have to write a blog post every day (or you have to leave). So far, having to write something every day that’s good enough to post has been helpful for my productivity, and I don’t think I could keep it up outside an environment with extremely strong peer pressure to write (both from the other participants and from my blog readers, who will know if I fail). I plan on running some smaller AI safety experiments while I’m here and writing about them.
Future
I have pretty short AI timelines: I think I probably have at most 8 years until basically all of the useful work I can do is automated (my median guess is roughly 3.5 years, although I haven’t considered it too carefully). I’m also pretty concerned about risks from transformative AI, mostly misalignment and extreme misuse, so I think working to address harms from transformative AI would be valuable.
Some ways I’ve considered doing that:
- Working on LLM interpretability
- I think there’s a decent case that LLM interpretability helps safety by letting us understand how models actually work. Setting the safety aspect aside, AI interpretability is also really interesting; you get to dissect a novel alien mind. (I’m worried interpretability gets too much focus in the AI safety community because it’s so interesting, rather than because of its actual benefits.)
- I would prefer to do this with other people, as an employee at a company that researches AI models, but it might be difficult to get such a job, and I’m fine doing independent research by myself.
- Other alignment work
- There are lots of other kinds of interesting AI alignment work I could do; for example, I find red teaming model safeguards very fun and interesting. I’m also pretty open to doing work that indirectly supports AI alignment.
- AI governance advocacy
- I’m pretty uncertain about which AI policies would be good: I’m very uncertain how risky deploying larger AI systems is, and I think it’s possible (although unlikely) that we’re in a world where it would be best for the government to take a hands-off approach and let AI labs implement good-enough safety measures themselves, ushering in an era of ASI-enabled human flourishing.
- I think it’s good that other people are doing this, but I think I would be pretty bad at it; I’m not very good at diplomatically presenting arguments for things.
- Work on AI capabilities
- This is the riskiest of these options, and I think it’s probably not good in expectation. Steelmanning it, though: transformative AI is probably going to be built soon anyway, and I would prefer it to be built by a company I like that shares my values, so I could increase the odds of transformative AI going well by working on capabilities. I lean against this, but I don’t want to dismiss it out of hand.
Where should I live?
If I end up getting a job, this question answers itself, since I think it’s pretty unlikely I’ll end up with a full-time remote job (I would prefer to work in person). The cities I’ve considered are:
- London, UK: I stayed there for a month for ARENA; there’s lots of interesting stuff going on, especially around AI safety. There’s also a visa I can get fairly easily to live there without any sponsorship.
- SF Bay Area: Great place, but the only realistic way I could move there is through an employer-sponsored visa.
- Toronto: I lived there for three years, and it’s a nice place to live. It has some interesting stuff going on, but not as much as London or SF.
- Montreal: Very cheap housing (I think because of a combination of zoning laws and Quebec being primarily French-speaking, which suppresses demand), but less interesting stuff going on (though there is Mila, which does some AI safety work).
Fin
I don’t have any firm plans right now. I’m pretty open to suggestions and advice about what I should do, so please feel free to reach out if you have any input.