Robi Rahman🔸

Data Scientist @ MIRI Technical Governance Team
1492 karma · Joined · Working (6-15 years) · New York, NY, USA
www.robirahman.com

Bio

Participation
9

Data scientist working on AI governance at MIRI, previously forecasting at Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.

Comments
243

Thank you very much, I hadn't seen that the moral parliament calculator had implemented all of those.

Moral Marketplace strikes me as quite dubious in the context of allocating a single person's donations, though I'm not sure it's totally illogical.

Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be very dissatisfied if they don't, e.g. "the universe is ruled by a shrimp deity who will torture you and 10^^10 others for eternity unless you donate all your money to shrimp welfare". You can be 99.9999...% sure this isn't true but never 100% sure, so this gets a seat in your parliament.

  1. I'm definitely not assuming the my-favorite-theory rule.
  2. I agree that what I'm describing is favored by the maximize-expected-choiceworthiness approach, though I think you should reach the same conclusion even if you don't use it.
  3. Can you explain how a moral parliament would end up voting to split the donations? That seems impossible to me in the case where two conflicting views disagree on the best charity - I don't see any moral trade the party with less credence/voting power can offer the larger party not to just override them. For parliaments with 3+ views but no outright majority, are you envisioning a spoiler view threatening to vote for the charity favored by the second-place view unless the plurality view allocates it some donation money in the final outcome?

edit: actually, I think the donations might end up split if you choose the allocation by randomly selecting a representative in the parliament and implementing their vote, in which case the dominant party would offer a little bit of donations in cases where it wins in exchange for donations in cases where someone else is selected?
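The random-dictator reading above can be sketched directly. Assuming representatives are drawn with probability equal to each view's credence (an assumption, not anything specified in the thread), the *expected* allocation is already a credence-weighted split even before any trading; all names and numbers below are hypothetical.

```python
import random

# Hypothetical two-view parliament; credences and charity names are invented.
credences = {"view_A": 0.7, "view_B": 0.3}
favorite_charity = {"view_A": "charity_1", "view_B": "charity_2"}

def random_dictator(rng):
    # Draw one representative with probability proportional to credence
    # and implement that view's favorite charity.
    views, weights = zip(*credences.items())
    chosen = rng.choices(views, weights=weights)[0]
    return favorite_charity[chosen]

rng = random.Random(0)
draws = [random_dictator(rng) for _ in range(100_000)]
print(draws.count("charity_1") / len(draws))  # close to 0.7 over many draws
```

Trading side payments across the random draws, as the edit suggests, would then smooth this lottery into a deterministic split near the same proportions.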

Of course they might be uncertain of the moral status of animals, and therefore uncertain whether a donation to an animal welfare charity or a human welfare charity is more effective. That is not at all a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but an individual who splits their donations that way is reducing the expected impact of their donations relative to contributing only to one or the other.

Moral uncertainty is completely irrelevant at the level of individual donors.
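One way to see the claim above: if each charity's impact is roughly linear in an individual-sized donation, expected impact of any split is just a weighted average of the two charities' per-dollar expected impacts, so it is maximized by giving everything to whichever charity scores higher. A minimal sketch, with all credences and impact figures invented:

```python
# Invented numbers: donor has 40% credence that animal welfare counts morally.
p_animals_matter = 0.4

# Expected impact per dollar, averaged over the donor's moral uncertainty
# (arbitrary units, for illustration only).
ev_animal_charity = p_animals_matter * 10.0 + (1 - p_animals_matter) * 0.0  # = 4.0
ev_human_charity = 3.0  # assumed the same under either moral view

def expected_impact(fraction_to_animals, budget=1000.0):
    # Linearity in donation size: expected impact of a split is a
    # weighted average of the two charities' expected impacts.
    return budget * (fraction_to_animals * ev_animal_charity
                     + (1 - fraction_to_animals) * ev_human_charity)

print(expected_impact(1.0))  # 4000.0 -- all to the higher-EV charity
print(expected_impact(0.5))  # 3500.0 -- any split strictly loses
print(expected_impact(0.0))  # 3000.0
```

The moral uncertainty is already priced into the per-dollar expectations, which is why it gives no further reason to split at the individual level.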

Can you give examples of "adversarial" altruistic actions? Like protesting against ICE to help immigrants? Getting CEOs fired to improve what their corporations do?

By "greater threat to AI safety" you mean it's a bigger culprit in terms of amount of x-risk caused, right? As opposed to being a threat to AI safety itself, by e.g. trying to get safety researchers removed from the industry/government (like this).

What is positivism and what are some examples of non-positivist forms of knowledge?

IMO, merely 4x-ing the number of individual donors or the frequency of protests isn't near the threshold for "mass social change" in the animal welfare area.

"Individual donors shouldn't diversify their donations"

Arguments in favor:

  • this is the strategy that maximizes the benefit to the recipients

Arguments against:

  • it's personally motivating to stay in touch with many causes
  • when each cause comes up in a conversation with non-EAs, you can mention you've donated to it