Proposal: For critical EA-aligned jobs (e.g., AI safety research, policy advocacy, biosecurity), every position should have:

  1. A Primary Worker – who executes the role.
  2. An Automation Team – dedicated to making the role obsolete via tools/AI.

Why This Could Work for EA:

  • Focus on Scalability – Automating high-impact roles frees up talent for harder problems.
  • Prioritization – Start with bottleneck areas (e.g., grantmaking, literature reviews).
  • Resilience – If the primary worker leaves, the automation work fills the gap faster than a new hire could.

Challenges & Open Questions:

  • Incentives – Would workers resist automating their own jobs? Could impact-weighted salaries help?
  • Feasibility – Which EA orgs could pilot this (e.g., ops-heavy charities like GiveWell)?
  • Risks – Premature automation might reduce quality (e.g., in research).
