(Post 3/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
(Post 4/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
I’ve spent a bit of time over the last year trying to form better judgement. Dumping some notes here on things I tried or considered trying, for future reference.
I think this framing of the exercise might have been mentioned to me by Michael Aird.
- Find Google Docs where people (whose judgement you respect) have left comments and an overall take on the promisingness of the idea. Hide their comments and form your own take. Compare. (To make this a faster process, pick a doc/idea where you have enough background knowledge to answer without looking up loads of things)
This is a good tip! Hadn't thought of this.
An effective mental health intervention, for me, is listening to a podcast which ideally (1) discusses the thing I'm struggling with and (2) has EA, Rationality or both in the background. I gain both in-the-moment relief, and new hypotheses to test or tools to try.
Especially since it would be scalable, this makes me think that creating an EA mental health podcast would be an intervention worth testing - I wonder if anyone is considering this?
In the meantime, I'm on the lookout for good mental health podcasts in general.
This does sound like an interesting idea. And my impression is that many people found the recent mental-health-related 80k episode very useful (or at least found that it "spoke to them").
Maybe many episodes of Clearer Thinking could also help fill this role?
Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?
Though starting a podcast is pretty low-cost, so it'd be quite reasonable to just try it without doing that sort of research first.
Incidentally, that 80k episode and some from Clearer Thinking are the exact examples I had in mind!
> Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?
As a step towards this, and in case anyone else finds it independently useful, here are the episodes of Clearer Thinking that I recall finding helpful for my mental health (along with the issues they helped with).
I've been thinking about starting such an EA mental health podcast for a while now (each episode would feature a guest describing their history with EA and mental health struggles, similar to the 80k episode with Howie).
However, every EA I've asked to interview (only ~5 people so far, to be fair) was concerned that such an episode would be net negative for their career (e.g., by making them less attractive to future employers or collaborators). I think such concerns are not unreasonable, though it seems easy to overestimate them.
Generally, there seems to be a tradeoff between how personal the episode is and how likely the episode is to backfire on the interviewee.
One could mitigate such concerns by making episodes anonymous (and perhaps anonymizing the voice as well). Unfortunately, my sense is that this would make such episodes considerably less valuable.
I'm not sure how to navigate this; perhaps there are solutions I don't see. I also wonder how Howie feels about having done the 80k episode. My guess is that he's happy he did it, but if he regrets it, that would make me even more hesitant to start such a podcast.
I thought about this a bunch before releasing the episode (including considering various levels of anonymity). Not sure that I have much to say that's novel but I'd be happy to chat with you about it if it would help you decide whether to do this.[1]
The short answer is:
[1] Email me if you want to make that happen since the Forum isn't really integrated into my workflow.
(Post 1/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
According to me, these are some of the key uncertainties in AI governance field-building: questions where better answers might significantly influence decisions about how field-building should be done.
Reasons why this seems important to get clarity on:
Re: “positions with a deadline”: it seems plausible to me that there will be windows of opportunity when important positions come up, and if you haven’t built the traits you need by that time, it’s too late. E.g. having more people who are very skilled at public comms would probably have been pretty useful in Q1–Q2 2023.
Counterpoint: the strongest version of this consideration assumes a kind of “efficient market hypothesis” for people building up their own skills. If people aren’t building up their own skills efficiently, then there could still be significant gains from helping them to do so, even for positions that are currently being hired for. Still, I think this consideration carries some weight.
(Post 6/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Explicit backchaining is one way to do prioritisation. I sometimes forget that there are other useful heuristics, like:
(Post 2/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Ways of framing EA that (extremely anecdotally*) make it seem less icky to newcomers. These are all obvious/boring; I'm mostly recording them here for my own consolidation.
These frames can also apply to any specific cause area.
*like, I remember talking to a few people who became more sympathetic when I used these frames.
I like the thinking in some ways, but I think there are also some risks. For instance, emphasising that EA is diverse in its ways of doing good could make people expect it to be more diverse than it actually is, which could lead to disappointment. In some ways, it could be good to be upfront about some of the less intuitive aspects of EA.
(Post 5/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Note to self: more detailed but less structured version of these notes here.
(Post 2/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Misc things it seems useful to do/find out