
Formerly titled "Write up my research ideas for someone else to tackle? Fine - you asked for it!"

Unrelatedly, thanks to Jessica McCurdy for telling me to write down some of my research ideas and questions in case someone else wants to tackle one (or a few).

The list

  1. Cause prio but for earning to give
    1. As far as I know, SBF relied on his personal knowledge and intuition when deciding to try building FTX.
    2. It doesn’t have to be this way! I can imagine a more systematic effort to identify and describe which earning to give opportunities are most promising. Is there a $100B idea with a 1% chance of working? A $1T idea with a 0.1% chance? I think we can and should find out.
  2. Are there cheap and easy ways to kill fish quickly?
    1. Right now, I estimate 250 million fish years are spent in agony each year as wild fish are killed by asphyxiation or being gutted alive, which takes a surprisingly long time to cause death. There must be a better way.
      1. Related: can we just raise (farm) a ton of fish ourselves, but using humane practices, with donations subsidizing the cost difference relative to standard aquaculture?
  3. From my red teaming project on extinction risk reduction:
    1. Unpacking which particular biorisk prevention activities seem robust to a set of plausible empirical and ethical assumptions and which do not; and
    2. Seeking to identify any AI alignment research programs that would reduce s-risks by a greater magnitude than "mainstream" x-risk-oriented alignment research.
  4. From my “half baked ideas comment” on the Forum:
    1. Figure out how to put to good use some greater proportion of the approximately 1 Billion recent college grads who want to work at an "EA org" 
      1. This might look like a collective of independent-ish researchers?
    2. There should be way more all-things-considered, direct comparisons between cause areas.
      1. So I guess the research question is: what is the most important cause area to work on and/or donate to, all things considered? 
        1. No more “agreeing to disagree” - I want an (intellectual) fight to the death. Liberal-spending longtermists should make an affirmative case that this ethos is the best way to spend money on the margin, and objectors should argue that it isn’t.
        2. In particular, I don't think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I'd be very interested to see this comparison in particular
  5. [Related to above] Is anyone actually arguing that neartermist, human-centric interventions are the most ethical way to spend time or money? 
    1. That’s not a rhetorical question! The hundreds of millions of dollars being directed to AMF et al. instead of some other charity or cause area should be more seriously justified or defended, IMO.
    2. For anyone who does think that improving human welfare in the developing world is the best thing to do: do AMF-type charities actually increase the number of human life-years lived?
  6. (As I asked on Twitter) What jobs/tasks/roles are high impact (by normal EA standards) but relatively low status within EA?
    1. I think one of the big ways EA could screw up is by having intra-EA status incongruent (at least ordinally) with expected impact.
  7. What would an animal welfare movement with the ambition, epistemic quality, and enthusiasm (and maybe funding) of the longtermist movement look like? 
  8. [I might tackle this] What can AI safety learn from human brains’ bilateral asymmetry?
    1. The whole “brain hemisphere difference” thing is surrounded by plenty of pop-science myths, but there really are some quite profound differences, as described in Iain McGilchrist’s The Master and His Emissary.
  9. What positions of power and/or influence in the world are most neglected or easiest to access, perhaps because they’re low prestige and/or low pay?
  10. S-risk people: what can we actually do, in the real world and the foreseeable future, to decrease s-risks? 
    1. It seems to me most of this research is quite abstract and theoretical - which may not make sense if transformative AI is only a few years away!
  11. It seems like the default view is that some time in the future, the world and/or EA is going to decide that AI systems are sentient. This seems totally implausible.
    1. What should we do under radical uncertainty as to whether any given “thing” or process is sentient?
    2. What empirical observations, if any, should change our actions, plans, or ethics?
       
Comments (23)



Hey, thanks for writing this, there are some interesting ideas here. A bit of a nitpick, but I’m not sure that your “estimate 250 million fish years are spent in agony each year as wild fish are killed by asphyxiation or being gutted alive” is quite accurate. You are extrapolating from the length of time it takes for herring, cod, whiting, sole, dab and plaice to suffocate to all wild-caught fish. But I think that all of these are rather big fish, and they likely were studied and mentioned by FishCount because it takes so long for them to suffocate. For example, 17%–65% of all wild-caught fishes are anchovies (295–908 billion fishes per year), and this video claims that “anchovies die immediately when they are out of water” (though I don’t know how reliable that video is). I tried to estimate the same things (after reading the same text) here. I estimated that 0.7–49 million herring, cod, whiting, sole, dab, and plaice are suffocating in the air after being landed at any time (and didn’t make an estimate for other fishes). Also, there’s already some research on humane slaughter of fish, some of it funded by Open Philanthropy; I don’t know whether it is neglected or not.

Thanks for the correction, that is definitely good news for the fish (albeit slightly bad news for my research judgement lol)!

Although another consideration pointing in the other direction is that larger fish probably have larger brains with more neurons, which may render them more morally relevant.

Re. 5: I think more people are in this camp than you realise, maybe they're just not well-represented on the forum and twitter. I'm a global development nowist because:

  1. Affecting the future predictably is hard and most longtermist projects I've seen aren't obviously going to have a positive impact. They might even be negative (eg. stalling AI might be bad).

  2. Beyond freeing pigs and chooks from cages, animal welfare concern leads to absurd conclusions when you start thinking about wild animal suffering or try to quantify insect suffering.

  3. Global development probably has positive long-term effects on human wellbeing (development begets development and accelerates technology by allowing more people to take part in innovation).

  4. Global development will probably have positive effects on animal welfare anyway and may even be necessary in a lot of cases (richer countries are generally the ones that adopt better animal welfare rules)

  5. Global development is more broadly appealing than fish rights and AI safety. Focus on longtermism and fringe animal welfare issues is part of what makes the EA label and community alienating to people.

Thanks for representing the global dev camp! 

2. Beyond freeing pigs and chooks from cages, animal welfare concern leads to absurd conclusions when you start thinking about wild animal suffering or try to quantify insect suffering.

Eh, I agree the conclusions might be counterintuitive and even weird, but disagree pretty strongly that they're absurd. 

Even granting that only freeing mammals from cages is good and worthy, I'm (subjectively, not super rigorously) quite confident that indeed getting chickens and/or pigs out of cages is both more robustly good and ethically more important than any of the GiveWell charities.  

4. Global development will probably have positive effects on animal welfare anyway and may even be necessary in a lot of cases (richer countries are generally the ones that adopt better animal welfare rules)

Not impossible, but it seems very unlikely, and it would be suspicious if helping humans happened to also be the best way to help animals. I don't think it comes very close, in fact, though I'm unsure what the sign is.

5. Global development is more broadly appealing than fish rights and AI safety. Focus on longtermism and fringe animal welfare issues is part of what makes the EA label and community alienating to people.

I agree with the first sentence, which is why I suspect that most of the ethical value from global dev runs through community building/attracting newcomers and optics, and this effect is plausibly pretty big in magnitude. But I think we should have a very high bar for not doing something morally important because some people might think it's weird or silly, even if some amount of activity optimized for broad appeal is warranted

Re your point 4-1: I wrote a relevant post some number of months ago and never really got a great answer: https://forum.effectivealtruism.org/posts/HZacQkvLLeLKT3a6j/how-might-a-herd-of-interns-help-with-ai-or-biosecurity

And now, here I am going into what may be my ~6th trimester of “not having an existential risk reduction (or relevant) job or internship despite wanting to get one”… 🙃

This seems like a major failure, and FWIW I think you should currently be getting paid to do some sort of longtermism-relevant research (or other knowledge work), and it's a failure that you're not. 🙃 indeed lol

Though I should register that I know Harrison IRL so infer whatever biases you should

I don't think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I'd be very interested to see this comparison in particular

I think this is wildly overdetermined in favor of longtermism. For example, I think at the current margins, a well-spent dollar has a ~10^-13 chance of making the future go much better, with a value probably more than 10^50 happy human lives (and with a much greater expected value -- arguably infinite, but that's another conversation). So the marginal longtermist dollar is worth much more than 10^37 happy lives in expectation. (That's way more than the number of fish that have ever lived, but for the sake of having a number I think we can safely upper-bound the direct effect of the marginal animal-welfare dollar at 10^0 happy lives.) Given utilitarianism, even if you nudge my numbers quite a bit, I think longtermism blows animal welfare out of the water.
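To make the back-of-envelope arithmetic explicit, here's a minimal sketch using the illustrative numbers above (all of them are this comment's guesses, not established figures):

```python
# Rough sketch of the comparison above; every number is an illustrative guess.

p_better_future = 1e-13   # chance a marginal well-spent dollar makes the future go much better
value_if_better = 1e50    # happy human lives at stake in that scenario

ev_longtermist_dollar = p_better_future * value_if_better   # ~1e37 lives in expectation
ev_animal_dollar = 1e0    # assumed upper bound on the direct effect of a marginal animal-welfare dollar

print(f"Longtermist dollar:    ~{ev_longtermist_dollar:.0e} happy lives in expectation")
print(f"Animal-welfare dollar: ~{ev_animal_dollar:.0e} happy lives (assumed upper bound)")
print(f"Implied ratio:         ~{ev_longtermist_dollar / ev_animal_dollar:.0e}")
```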

Of course, I don't think a longtermist dollar is actually ~10^40 times more effective than an animal-welfare one, because of miscellaneous side effects of animal welfare spending on the long-term future. But I think those side effects dominate. (I have heard an EA working on animal welfare say that they think the effects of their work are dominated basically by side effects on humans' attitudes.) And presumably the side effects aren't greater than the benefits of funding longtermist projects.

I tend to think you’re right, but don’t think it’s wildly overdetermined - mostly because animal suffering reduction seems more robustly good than does preventing extinction (which I realize is not the sole or explicit goal of longtermism, but is sometimes an intermediate goal).

You can also compare s-risk reduction work with animal welfare.

You asked for an analysis "even from a total utilitarian, longtermist perspective." From that perspective, I claim that preventing extinction clearly has astronomical (positive) expected value, since variance between possible futures is dominated by what the cosmic endowment is optimized for, and optimizing for utility is much more likely than optimizing for disutility. If you disagree, I'd be interested to hear why, here or on a call.

A proper treatment of this should take into account that short-term helping also might have positive effects in lots of simulations to a much greater extent than long-term helping. https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism

Sure, want to change the numbers by a factor of, say, 10^12 to account for simulation? The long-term effects still dominate. (Maybe taking actions to influence our simulators is more effective than trying to cause improvements in the long-term of our universe, but that isn't an argument for doing naive short-term interventions.)

10^12 might be too low. Making up some numbers: If future civilizations can create 10^50 lives, and we think there's an 0.1% chance that 0.01% of that will be spent on ancestor simulations, then that's 10^43 expected lives in ancestor simulations. If each such simulation uses 10^12 lives worth of compute, that's a 10^31 multiplier on short-term helping.
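Spelling those made-up numbers out (every input here is an assumption from this comment, not a researched figure):

```python
# Sketch of the ancestor-simulation estimate above; all inputs are made-up numbers.

future_lives = 1e50       # lives a future civilization could create
p_any_sims = 1e-3         # 0.1% chance some of that endowment goes to ancestor simulations
fraction_on_sims = 1e-4   # 0.01% of it spent on simulations, in that case
lives_per_sim = 1e12      # lives' worth of compute per ancestor simulation

expected_sim_lives = future_lives * p_any_sims * fraction_on_sims   # = 1e43
multiplier = expected_sim_lives / lives_per_sim                     # = 1e31 expected simulations

print(f"Expected lives in ancestor simulations:   ~{expected_sim_lives:.0e}")
print(f"Implied multiplier on short-term helping: ~{multiplier:.0e}")
```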

On fish, there were several comments here, including this one from me.

The 2018 Humane Slaughter Association report was probably the best info available at the time; not sure what's happened since. 

Wow thanks so much, super valuable info! Too bad I can't give it more than four karma haha

There is a lot of potential in fish welfare/stunning. In addition to what others have mentioned, IIRC from some reading a few years ago:

  • The greatest bottleneck in humane slaughter is research, e.g. determining parameters/designing machines for stunning each major species, as they differ so much. There just aren't many experts in this field, and the leading researchers are mostly very busy (and pretty old), but perhaps financial incentives would persuade some people with the right sort of background to go into this area.
  • As well as electrical and percussive stunning, anaesthetising with clove oil/eugenol seems a promising and under-researched method of reducing the pain of slaughter. Because it may just involve adding a liquid/powder to a tank containing the fish, it may also require less tailoring to each species than other methods (though it can affect the flavour if "too much" is used). I have some notes on this if anyone is interested.
  • Crustastun could be mass-produced and supplied cheaply/freely to places that would otherwise boil crustaceans alive. I seem to recall a French lawyer had invented another machine that was even better (or cheaper) but was too busy to promote it; maybe EAs could buy the patent or something?

One of the reasons it took so long for me to reply is that I kinda fell into a rabbit hole investigating whether buying the Crustastun patent, manufacturing it, and giving it away would be a good intervention. It all looked good until I finally thought to look into lobsters themselves, and it turns out that they have way fewer neurons - ~100,000 according to an OpenPhil report (lost the link) - which is 2 orders of magnitude lower than even very small fish, and a minuscule fraction of what humans have. And crabs are very similar.

FWIW, I was not at all expecting to find this, and had no idea crustaceans had such disproportionately small brains. May as well link this Google doc as what I had written before I met some inconvenient statistics.

I know not everyone is convinced that linear neuron comparisons are ideal, but they intuitively seem unlikely to be too far off from what "matters". Given this, I'm gonna conclude that Crustastun isn't worth pursuing unless we get more, different info about lobster sentience.

On to the other bullet points!

Glad you found it useful. I am not qualified to comment on the role of neuron count in sentience; you may want to look at work by Jason Schukraft and others at Rethink Priorities on animal sentience and/or get in touch with them.

If you haven't already, you may also want to review the 2018 Humane Slaughter Association report, which was the best I could find in early 2019. While looking for it, I also just came across one from Compassion in World Farming, which I don't think I've read.

Some of these are good enough questions that I am just raising an eyebrow, nodding, and hoping someone writes them up.

A few miscellaneous thoughts on the rest, which seem more tractable:

Are there cheap and easy ways to kill fish quickly?

Maybe you're already aware of ikejime and have concluded that it can't be cheaply scaled, but in case you haven't, check it out.

Figure out how to put to good use some greater proportion of the approximately 1 Billion recent college grads who want to work at an "EA org" 

This might look like a collective of independent-ish researchers?

Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.

For anyone who does think that improving human welfare in the developing world is the best thing to do: do AMF-type charities actually increase the number of human life-years lived?

Is this different from GiveWell because GiveWell doesn't try to estimate, like, the nth-order effects of AMF? I think I'm convinced by the cluelessness explanation that those would cancel out in expectation so we should be fine with first and maybe second-order effects.

(As I asked on Twitter) What jobs/tasks/roles are high impact (by normal EA standards) but relatively low status within EA?

I think one of the big ways EA could screw up is by having intra-EA status incongruent (at least ordinally) with expected impact.

(As I responded on Twitter and hope to turn into a forum post) I think aligning intra-EA status with impact is basically the whole point of EA community-building, so this is very important. I would guess that organizational operations is still too low-status and neglected: we need more people who are willing to set up payroll. (Low confidence, willing to be talked out of this, but it seems like the case to me.)

What positions of power and/or influence in the world are most neglected or easiest to access, perhaps because they’re low prestige and/or low pay?

An early and low-confidence guess: political careers that begin outside the NEC or California.

Agree that this sounds promising. I think this could be an org that collected well-scoped, well-defined research questions that would be useful for important decisions and then provided enough mentorship and supervision to get the work done in a competent way; I might be trying to do this this year, starting at a small scale. E.g., there are tons of tricky questions in AI governance that I suspect could be broken down into lots of difficult but slightly simpler research questions. DM me for a partial list.

 

You may be able to draw lessons from management consulting firms. One big idea behind these firms is that bright 20-somethings can make big contributions to projects in subject areas they don't have much experience in as long as they are put on teams with the right structure.

Projects at these firms are typically led by a partner and engagement manager who are fairly familiar with the subject area at hand. Actual execution and research are mostly done by lower-level consultants, who typically have little background in the relevant subject area.

Some high-level points on how these teams work:

  • The team leads formulate a structure for what specific tasks need to be done to make progress on the project
  • There is a lot of hand-holding and specific direction of lower-level consultants, at least until they prove they can do more substantial tasks on their own
  • There are regular check-ins and regular deliverables to ensure people are on the right track and to switch course if necessary

Good points, thanks!

Maybe you're already aware of ikejime and have concluded that it can't be cheaply scaled, but in case you haven't, check it out.

Yeah, I consider that the best-case slaughter method and regret that it seems so labor-intensive, but it seems like there might be other, less bad methods than the current status quo.

Is this different from GiveWell because GiveWell doesn't try to estimate, like, the nth-order effects of AMF? I think I'm convinced by the cluelessness explanation that those would cancel out in expectation so we should be fine with first and maybe second-order effects.

Sure, I think 4th+ order effects are likely impossible to model, but 2nd and maybe even 3rd not so much. I'd bet (though I'm far from certain) you could get a well-identified study of the causal effect of e.g. malaria nets on total life-years lived/population/pop growth in a certain geographic region, at least for some period of time.

(As I responded on Twitter and hope to turn into a forum post) I think aligning intra-EA status with impact is basically the whole point of EA community-building, so this is very important. I would guess that organizational operations is still too low-status and neglected: we need more people who are willing to set up payroll. (Low confidence, willing to be talked out of this, but it seems like the case to me.)

Strong +1 on this, would be a super interesting and productive post IMO!

 


Is there a $100B idea with a 1% chance of working?

Coming from the startup world: it's pretty unlikely you will find great startups by thinking from this angle. Why? First, entrepreneurship appears to work much better when you don't over-index on the "what if it works?" storyline too early, as it causes people to dig a hole that's "broad and shallow" (which causes your feedback loops to suck, which causes you to fail to make progress, get demotivated, and quit). Second, a ton of other people are trying to find ideas with similar chances of success (competitors only matter early on in a huge market, but an idea of this value must be in a huge market).
