vanessa16

Thanks James, interesting post! 

A minor question: where you say the following,

MacAskill and Moorhouse argue that increases in training compute, inference compute and algorithmic efficiency have been increasing at a rate of 25 times per year, compared to the number of human researchers which increases 0.04 times per year, hence the 500x faster rate of growth. This is an inapt comparison, because in the calculation the capabilities of ‘AI researchers’ are based on their access to compute and other performance improvements, while no such adjustment is made for human researchers, who also have access to more compute and other productivity enhancements each year.

do you think human researchers' access to compute and other productivity enhancements would have a significant impact on their research capacity? It's not obvious to me how bottlenecked human researchers are by these factors, whereas they seem much more critical to "AI researchers".

More generally, are there things you would like to see the EA community do differently if it placed more weight on longer AI timelines? It seems to me that even if we think short timelines are only somewhat likely, we should probably still put quite a lot of resources towards things that can have an impact in the short term.

Hi Lizka, thanks for the article!

I'm curious if you have an opinion about the updated OMB memos that have been released about AI adoption by the US government (M-25-21, M-25-22). Do you think they're a step in the right direction?

My initial thoughts are:

  • The "Chief AI Officers" the memos require each agency to designate could be quite impactful: they may have a lot of say over whether the suggestions you've made get implemented. It seems important to have people with good judgement in these positions, particularly at powerful agencies.
  • There also seems to be an emphasis on reducing bureaucratic barriers to faster adoption of AI (though I'm inclined to think that's mostly a political talking point).

but I've thought about this way less than you so I'm interested in what you think.

Thanks for the write-up :) I agree with the other commenters. In particular, I'm inclined to think it's too early for the suggested actions, especially the proposed public health initiative: more research would be interesting, but a public health campaign is premature.

You say:

One critique of mewing is that there is not enough evidence. Our simple retort to that is that mewing is at the early adopters phase of idea propagation and there hasn’t been large scale randomised studies conducted yet. Weston Price did his ethnographic research in the 1920s and drew strong conclusions about the impact of diet on health. There's a huge opportunity for more studies to be done on mewing to level it up on the Hierarchy of evidence but more work is needed.

This makes sense, but doing more studies on mewing may move it up the hierarchy of evidence while also revealing that the effect is smaller than we might have hoped, or that other factors come into play, such as low adherence (as Julia suggested). That is, being at the early adopters phase, or low on the hierarchy of evidence, isn't a reason to act on insufficient evidence.

I think there's progress to be made here, but it's at the stage of information-gathering, not a public health initiative (yet).

Sorry for the late comment. I've recently been listening to, and enjoying, The End of the World with Josh Clark. It seems like a really solid and approachable introduction to existential risks. It starts by covering why x-risks might be things we should be concerned about, and then talks about AI, biosecurity and other possible threats. It includes interviews with Nick Bostrom, Toby Ord, Anders Sandberg, Robin Hanson and others :)