I want to get a sense for what kinds of things EAs — who don't spend most of their time thinking about AI stuff — find most confusing/uncertain/weird/suspect/etc. about it.
By "AI stuff", I mean anything to do with how AI relates to EA.
For example, this includes:
- What's the best argument for prioritising AI stuff?, and
- How, if at all, should I factor AI stuff into my career plans?
but doesn't include:
- How do neural networks work? (except inasmuch as it's relevant for your understanding of how AI relates to EA).
Example topics: AI alignment/safety, AI governance, AI as cause area, AI progress, the AI alignment/safety/governance communities, ...
I encourage you to have a low bar for writing an answer! Short, off-the-cuff thoughts very welcome.
Here are a few that came to mind just now.
1. How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League university? A potential famous professor at an Ivy League university? A potential Fields Medalist?
2. How hard should we expect alignment to be? Are we throwing resources at a problem we expect to at least partially solve in most worlds (which is, e.g., the superficial impression I get from biorisk), or are we attempting a Hail Mary because it might just work and the problem is important enough to be worth a try (not saying that would be bad)?
3. The big Western labs that more or less explicitly target AGI are OpenAI and DeepMind; others, e.g. Google Brain, target AGI less explicitly. Are there equivalents elsewhere, e.g. in China? Do we know whether these exist? Am I missing labs that target AGI in the West?
4. Finally, and this one's kind of obvious: how large is the risk? What's the probability of catastrophe? I'm aware of many estimates, but this is still definitely something I'm confused about.
I think there's substantial disagreement among AI safety researchers on all of these questions except (3), though I don't have a good feel for the distribution of views either.