Non-EA interests include chess and TikTok (@benthamite). Formerly @ CEA, METR + a couple now-acquired startups.
Feedback always appreciated; feel free to email/DM me or use this link if you prefer to be anonymous.
To decompose your question into several sub-questions:
the strength of this tail-wind that has driven much of AI progress since 2020 will halve
I feel confused about this point because I thought the argument you were making implies a non-constant "tailwind." E.g. for the next generation these factors will be 1/2 as important as before, then the one after that 1/4, and so on. Am I wrong?
Interesting ideas! For Guardian Angels, you say "it would probably be at least a major software project" - maybe we are imagining different things, but I feel like I have this already.
e.g. I don't need a "heated-email guard plugin" that catches me in the middle of writing a heated email and redirects me, because I don't write my own emails anyway. I would just ask an LLM to write the email, and 1) it's unlikely that the LLM would say something heated, and 2) for the kinds of mistakes that LLMs might make, it's easy enough to put something in the agents.md asking it to check for these things before finalizing the draft.
(I think software engineering might be ahead of the curve here, where a bunch of tools have explicit guardian angels. E.g. when you tell the LLM "build feature X", what actually happens is that agent 1 writes the code, then agent 2 reviews it for bugs, agent 3 reviews it for security vulns, etc.)
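The pipeline pattern above can be sketched roughly like this — a minimal, hypothetical illustration, where `call_llm` stands in for whatever LLM API a real tool would use, and the reviewer prompts are made up for the example:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real tool would call an actual LLM API here.
    return f"[response to: {prompt[:40]}]"

def build_feature(spec: str) -> str:
    # Agent 1: the "author" drafts the code.
    draft = call_llm(f"Write code implementing: {spec}")
    # Agents 2, 3, ...: specialist "guardian" reviewers each get a pass.
    for review_task in ("check for bugs", "check for security vulnerabilities"):
        draft = call_llm(f"{review_task} in the following code, and fix any issues:\n{draft}")
    return draft
```

The point of the structure is that each concern (bugs, security, style) gets a dedicated reviewer pass rather than hoping one agent catches everything.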
I'm so sorry you had to go through this, Fran. Thank you for writing about it so clearly; this should never have happened.