Current Work
I co-lead MATS Research, a research nonprofit that aims to solve the talent bottleneck in AI alignment, transparency, and security. We run fellowship programs that have supported 530+ researchers and produced 160+ research publications, with offices in Berkeley and London. Our alumni have founded 30+ AI safety & security organizations and teams, including Apollo Research, Timaeus, and Atla.
I co-founded the London Initiative for Safe AI (LISA), the largest AI safety hub in Europe, and advise Halcyon Futures, Catalyze Impact, Pivotal Research, AI Safety ANZ, Black in AI Safety and Ethics (BASE), and more. As a Manifund Regrantor, I have awarded $400,000 to AI safety start-ups and projects and helped seed Catalyze Impact, Timaeus, AI Safety Atlas, Sydney AI Safety Space, and many more.
My focus is on scaling AI safety through building the organizations, talent pipelines, and infrastructure needed to ensure advanced AI systems are developed safely and for the benefit of all.
Mission
I believe the development of transformative AI is one of the most important events in human history, and that we have a narrow window to get the foundations right. Transformative AI, meaning systems that can perform most human tasks and could thus (in theory) increase world economic growth rates roughly tenfold, will likely be realized within the next 2-15 years.
It seems extremely difficult to guarantee that powerful and by-default-uninterpretable AI systems will act in ways that are safe or aligned with the best interests of humans and their institutions. There are enormous incentives to deploy economically and militarily advantageous AI systems, and I think current international governance practices might fail to curtail the deployment of unsafe technology. In short, I believe humanity's power may soon dangerously outstrip its wisdom.
I am interested in solving problems related to "value-loading" and "corrigibility," building institutions for AI alignment/governance research, forecasting transformative AI, and developing robust governance norms for "black-ball" technology. I support Effective Altruism, a movement focused on solving the world's most important, neglected, and tractable problems.
Board & Advisory Positions
- Board Member — London Initiative for Safe AI (LISA)
- Regrantor — Manifund
- Advisor — Catalyze Impact
- Advisor — AI Safety ANZ
- Advisor — Pivotal Research
- Advisor — Halcyon Futures
- Advisor — Black in AI Safety and Ethics (BASE)
- Advisor — Alignment Foundation
Background
Before MATS, I completed a PhD in physics at the University of Queensland.