Formerly titled "Write up my research ideas for someone else to tackle? Fine - you asked for it!"
Unrelatedly, thanks to Jessica McCurdy for telling me to write down some of my research ideas and questions in case someone else wants to tackle one (or a few).
The list
- Cause prio but for earning to give
- As far as I know, SBF relied on his personal knowledge and intuition when deciding to try building FTX.
- It doesn’t have to be this way! I can imagine a more systematic effort to identify and describe which earning-to-give opportunities are most promising. Is there a $100B idea with a 1% chance of working? A $1T idea with a 0.1% chance? I think we can and should find out.
- Are there cheap and easy ways to kill fish quickly?
- Right now, I estimate 250 million fish years are spent in agony each year as wild fish are killed by asphyxiation or being gutted alive, which takes a surprisingly long time to cause death. There must be a better way.
- Related: can we just raise (farm) a ton of fish ourselves, but using humane practices, with donations subsidizing the cost difference relative to standard aquaculture?
- From my red teaming project on extinction risk reduction:
- Unpacking which particular biorisk prevention activities seem robust to a set of plausible empirical and ethical assumptions and which do not; and
- Seeking to identify any AI alignment research programs that would reduce s-risks by a greater magnitude than "mainstream" x-risk-oriented alignment research.
- From my “half baked ideas comment” on the Forum:
- Figure out how to put to good use some greater proportion of the approximately 1 billion recent college grads who want to work at an "EA org"
- This might look like a collective of independent-ish researchers?
- There should be way more all-things-considered, direct comparisons between cause areas.
- So I guess the research question is: what is the most important cause area to work on and/or donate to, all things considered?
- No more “agreeing to disagree” - I want an (intellectual) fight to the death. Liberal-spending longtermists should make an affirmative case that this ethos is the best way to spend money on the margin, and objectors should argue that it isn’t.
- In particular, I don't think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I'd be very interested to see this comparison in particular.
- [Related to above] Is anyone actually arguing that neartermist, human-centric interventions are the most ethical way to spend time or money?
- That’s not a rhetorical question! The hundreds of millions of dollars being directed to AMF et al. instead of some other charity or cause area should be more seriously justified or defended, IMO.
- For anyone who does think that improving human welfare in the developing world is the best thing to do: do AMF-type charities actually increase the number of human life-years lived?
- (As I asked on Twitter) What jobs/tasks/roles are high impact (by normal EA standards) but relatively low status within EA?
- I think one of the big ways EA could screw up is by having intra-EA status incongruent (at least ordinally) with expected impact.
- What would an animal welfare movement with the ambition, epistemic quality, and enthusiasm (and maybe funding) of the longtermist movement look like?
- [I might tackle this] What can AI safety learn from human brains’ bilateral asymmetry?
- The whole “brain hemisphere difference” thing is surrounded by plenty of pop-science myths, but there really are some quite profound differences, as described in Iain McGilchrist’s The Master and His Emissary.
- What positions of power and/or influence in the world are most neglected or easiest to access, perhaps because they’re low prestige and/or low pay?
- S-risk people: what can we actually do, in the real world and the foreseeable future, to decrease s-risks?
- It seems to me most of this research is quite abstract and theoretical - which may not make sense if transformative AI is only a few years away!
- It seems like the default view is that some time in the future, the world and/or EA is going to decide that AI systems are sentient. This seems totally implausible.
- What should we do under radical uncertainty as to whether any given “thing” or process is sentient?
- What empirical observations, if any, should change our actions, plans, or ethics?
I think this is wildly overdetermined in favor of longtermism. For example, I think at the current margins, a well-spent dollar has a ~10^-13 chance of making the future go much better, with a value probably more than 10^50 happy human lives (and with a much greater expected value -- arguably infinite, but that's another conversation). So the marginal longtermist dollar is worth much more than 10^37 happy lives in expectation. (That's way more than the number of fish that have ever lived, but for the sake of having a number I think we can safely upper-bound the direct effect of the marginal animal-welfare dollar at 10^0 happy lives.) Given utilitarianism, even if you nudge my numbers quite a bit, I think longtermism blows animal welfare out of the water.
Of course, I don't think a longtermist dollar is actually ~10^40 times more effective than an animal-welfare one, because of miscellaneous side effects of animal welfare spending on the long-term future. But I think those side effects dominate the direct benefits to animals. (I have heard an EA working on animal welfare say that they think the effects of their work are dominated basically by side effects on humans' attitudes.) And presumably those side effects aren't greater than the benefits of funding longtermist projects.
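The arithmetic here is easy to check mechanically. Below is a minimal sketch in Python of the back-of-the-envelope comparison; every input (the 10^-13 probability, the 10^50-life payoff, the 10^0-life bound for animal welfare) is just the illustrative number from the paragraph above, not a real estimate.

```python
# Back-of-the-envelope expected-value comparison using the illustrative
# numbers above (not real estimates).
p_future_goes_better = 1e-13   # chance a well-spent longtermist dollar makes the future go much better
lives_if_better = 1e50         # happy human lives in that better future
animal_welfare_direct = 1e0    # generous upper bound on direct happy lives per animal-welfare dollar

longtermist_ev = p_future_goes_better * lives_if_better   # expected happy lives per longtermist dollar
naive_ratio = longtermist_ev / animal_welfare_direct

print(f"longtermist EV per dollar      ~ {longtermist_ev:.0e} happy lives")  # ~1e+37
print(f"naive ratio vs. animal welfare ~ {naive_ratio:.0e}")                 # ~1e+37
```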
10^12 might be too low. Making up some numbers: If future civilizations can create 10^50 lives, and we think there's an 0.1% chance that 0.01% of that will be spent on ancestor simulations, then that's 10^43 expected lives in ancestor simulations. If each such simulation uses 10^12 lives worth of compute, that's a 10^31 multiplier on short-term helping.
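A similarly minimal sketch of the arithmetic in that note, with the same made-up inputs:

```python
# Ancestor-simulation multiplier, using the made-up numbers in the note above.
future_lives = 1e50          # lives future civilizations could create
p_simulations = 1e-3         # 0.1% chance some of that capacity goes to ancestor simulations
fraction_simulated = 1e-4    # 0.01% of it spent on ancestor simulations, if so
lives_per_simulation = 1e12  # lives' worth of compute per simulation (the figure questioned above)

expected_simulated_lives = future_lives * p_simulations * fraction_simulated  # 1e43
expected_simulations = expected_simulated_lives / lives_per_simulation        # 1e31

# The expected number of simulations is the "multiplier on short-term helping"
# referred to in the note above.
print(f"expected lives in ancestor simulations ~ {expected_simulated_lives:.0e}")
print(f"multiplier on short-term helping       ~ {expected_simulations:.0e}")
```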