AI safety org typology:
1. Academic/nonprofit orgs;
2. “Alignment-as-a-service” orgs, where the product contributes to alignment;
3. “Alignment-on-the-side” orgs, where the product funds alignment research;
4. Scaling labs, where alignment is driven by the product.
I want more of types 1-2, and maybe 3. I also want the right market incentives for types 3-4 to accelerate “worst-case” alignment.