Hi Lizka, thanks for the article!
I'm curious whether you have an opinion about the updated OMB memos on AI adoption by the US government (M-25-21 and M-25-22). Do you think they're a step in the right direction?
My initial thoughts are:
but I've thought about this way less than you, so I'm interested in what you think.
Thanks for the write-up :) I agree with the other commenters. In particular, I'm inclined to think it's too early for the suggested actions, especially the proposed public health initiative. More research would be interesting to see, but launching a public health initiative at this stage seems premature.
You say:
One critique of mewing is that there is not enough evidence. Our simple retort to that is that mewing is at the early adopters phase of idea propagation and there hasn’t been large scale randomised studies conducted yet. Weston Price did his ethnographic research in the 1920s and drew strong conclusions about the impact of diet on health. There's a huge opportunity for more studies to be done on mewing to level it up on the Hierarchy of evidence but more work is needed.
This makes sense, but doing more studies on mewing might move it up the hierarchy of evidence while also revealing that the effect is smaller than we had hoped. Or other factors may come into play, such as low adherence (as Julia suggested). That is, being in the early adopters phase, or low on the hierarchy of evidence, isn't a justification for acting on insufficient evidence.
I think there's progress to be made here, but it's at the stage of information-gathering, not a public health initiative (yet).
Sorry for the late comment. I've recently been listening to, and enjoying, The End of the World with Josh Clark. It seems like a really solid and approachable introduction to existential risks. It starts by covering why x-risks might be worth our concern, and then talks about AI, biosecurity, and other possible threats. It includes interviews with Nick Bostrom, Toby Ord, Anders Sandberg, Robin Hanson, and others :)
Thanks James, interesting post!
A minor question: where you say the following,
do you think human researchers' access to compute and other productivity enhancements would have a significant impact on their research capacity? It's not obvious to me how bottlenecked human researchers are by these factors, whereas they seem much more critical for "AI researchers".
More generally, are there things you would like to see the EA community do differently if it placed more weight on longer AI timelines? It seems to me that even if we think short timelines are only somewhat likely, we should still put substantial resources towards things that can have an impact in the short term.