Current Work
I co-lead ML Alignment & Theory Scholars (MATS) Research, a research nonprofit that aims to solve the talent bottleneck in AI safety & security. We run fellowship programs that have supported 530+ researchers and produced 180+ research publications, with offices in Berkeley and London. Our alumni have founded 30+ AI safety & security organizations and teams, including Apollo Research, Timaeus, Theorem, and Workshop Labs (recently acquired by Thinking Machines).
I co-founded the London Initiative for Safe AI (LISA), the largest AI safety hub in Europe, and advise Halcyon Futures, Catalyze Impact, Pivotal Research, and others. As a Manifund Regrantor, I have awarded $400,000 to AI safety start-ups and projects, helping seed Catalyze Impact, Timaeus, AI Safety Atlas, and many more.
Mission
I believe the development of transformative AI is one of the most important events in human history, and that we have a narrow window to get the foundations right. My focus is on scaling AI safety through building the organizations, talent pipelines, and infrastructure needed to ensure advanced AI systems are developed safely and for the benefit of all.
Board & Advisory Positions
- Board Member — London Initiative for Safe AI (LISA)
- Board Member — Catalyze Impact
- Regrantor — Manifund
- Advisor — AI Safety ANZ
- Advisor — Pivotal Research
- Advisor — Halcyon Futures
- Advisor — Black in AI Safety and Ethics (BASE)
- Advisor — Alignment Foundation
Background
Before MATS, I completed a PhD in physics at the University of Queensland, studying signatures of quantum chaos in ultracold atomic systems. I grew up on the Gold Coast, Australia, and was a member of the Scout Movement for 20 years. I enjoy science fiction, Latin dancing, and powerlifting.