D0TheMath

1115 karmaJoined College Park, MD 20742, USA
Interests:
Forecasting

Bio

An undergrad at University of Maryland, College Park. Majoring in math.

After finishing The Sequences at the end of 9th grade, I started following the EA community, changing my career plans to AI alignment. If anyone would like to work with me on this, PM me!

I’m currently starting the EA group for the University of Maryland, College Park.

Also see my LessWrong profile

Sequences
1

Effective Altruism Forum Podcast

Comments
175

I will note that my comment made no reference to who is “more altruistic”. I don’t know what that term means personally, and I’d rather not get into a semantics argument.

If you give the definition you have in mind, then we can argue over whether it’s smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.

I will also note that, given the situation, it’s not clear to me that Anna’s proper counterfactual here isn’t making $1M and getting nice marketable skills, since she and Belinda are twins, and so have the same work capacity & aptitudes.

To be clear, I think it’s great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don’t think the particular mindset of “well I have really sacrificed a lot because if I was purely selfish I could have made a lot more money” is one that we ought to recognize as particularly good or healthy.

I think this is the crux personally. This seems very healthy to me, in particular because it creates strong boundaries between the relevant person and EA. Note that burnout & overwork is not uncommon in EA circles! EAs are not healthy, and (imo) already give too much of themselves!

Why do you think it’s unhealthy? This seems to imply negative effects on the person reasoning in the relevant way, which seems pretty unlikely to me.

I think the right stance here is a question of “should EA be praising such people, or get annoyed they’re not giving up more, if it wants to keep a sufficient filter for who it calls true believers”, and the answer there is obviously that both groups are great & true believers, and it seems dumb to get annoyed at either.

The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.

It’s relatively common (I don’t know about rates) for such people to take pay cuts rather than directly donate that percentage. I know some who could be making millions a year who are actually making hundreds of thousands. It makes sense they don’t feel the need to donate anything additional on top of that!

There’s already been much critique of your argument here, but I will just say that by the “level of influence” metric, Daniela knocks it out of the park compared to Donald Trump. I think it is entirely uncontroversial, and perhaps an understatement, to claim the world as a whole, and EA in particular, has a right to know & discuss pretty much every fact about the personal, professional, social, and philosophical lives of the group of people who, by their own admission, are literally creating God, and are likely to be elevated to a permanent place of power & control over the universe for all of eternity.

Such a position should not be a pleasurable job with no repercussions on the level of privacy or degree of public scrutiny on your personal life. If you are among this group, and this level of scrutiny disturbs you, perhaps you shouldn't be trying to "reshape the lightcone without public consent" or knowledge.

When you start talking about Silicon Valley in particular, you start getting confounders like AI, which has a high chance of killing everyone. But if we condition on that going well, or assume the relevant people won’t be working on that, then yes, that does seem like a useful activity, though note that Silicon Valley activities are not very neglected, and you can certainly do better than them by pushing EA money (not necessarily people[1]) into the research areas which are more prone to market failures or are otherwise too “weird” for others to believe in.

On the former, vaccine development & distribution or gene drives are obvious ones which come to mind. Both of which have a commons problem. For the latter, intelligence enhancement.


  1. Why not people? I think EA has a very bad track record of extreme groupthink, caused by a severe lack of intellectual diversity & humility. This is obviously not very good when you’re trying to increase the productivity of a field or research endeavor. ↩︎

This seems pretty unlikely to me, tbh. People are just less productive in the developing world than the developed world, and it’s much easier to do stuff--including do good--when you have functioning institutions, are surrounded by competent people, and have connections & support structures, etc.

That's not to say sending people to the developed world is bad. Note that you can get lots of the benefits of living in a developed country by simply having the right to live in a developed country, or having your support structure or legal system or credentials based in a developed country.

Of course, it’s much easier to just allow everyone in a developing country to move to a developed country, but assuming the hyper-rationalist bot exists with an open-borders constraint, it seems incredibly obvious to me that what you say would not happen.

I think it seems pretty evil & infantilizing to force people to stay in their home country because you think they’ll do more good there. The most you should do is argue they’ll do more good in their home country than a western country, then leave it up to them to decide.

I will furthermore claim that if you find yourself disagreeing, you should live in the lowest quality of living country you can find, since clearly that is the best place to work in your own view.

Maybe I have more faith in the market here than you do, but I do think that technical & scientific & economic advancement do in fact have a tendency to not only make everywhere better, but permanently so. Even if the spread is slower than we’d like. By forcing the very capable to stay in their home country we ultimately deprive the world and the future from the great additions they may make given much better & healthier working conditions.

This is not the central threat, but if you did want a mechanism, I recommend looking into the Krebs cycle.

I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies and smoothing them out. I at least value fairness, which is a complicated concept, but am also actively repulsed by the idea that those closer to me should weigh more in society’s moral calculations. Other values I have, like family, convenience, selfish hedonism, friendship, etc., are at odds with this fairness value in many circumstances.

But I think it’s still useful to connect the drowning child argument with the parts of me which resonate with it, and think about how much I actually care about those parts of me over other parts in such circumstances.

Human morality is complicated, and I would prefer more people ’round these parts do moral reflection by doing & feeling rather than thinking, but I don’t think there’s no place for argument in moral reflection.

Even if most aren’t receptive to the argument, the argument may still be correct. In which case it’s still valuable to argue for and write about.
