AI safety org typology:

  1. Academic/nonprofit orgs;
  2. “Alignment-as-a-service” orgs, where product contributes to alignment;
  3. “Alignment-on-the-side” orgs, where product funds alignment research;
  4. Scaling labs, where alignment is driven by product.

I want more of 1-2, and maybe 3. I also want the right market incentives in place so that 3-4 accelerate “worst-case” alignment.