Dentist doing earning to give. I have pledged to donate a minimum of 50% (aiming for 60%), or $40,000-$50,000 annually at the beginning of my career. While I expect to mainly do "giving now", I plan, in periods of limited effective donating opportunities, to do "investing to give".
As a longtermist and total utilitarian, my goal is to find the cause that increases utility the most time- and cost-effectively, no matter the time or type of sentient being. In pursuit of this goal, I so far care mostly about alignment, s-risk, artificial sentience, and WAW (wild animal welfare) (but feel free to change my mind).
I heard about EA for the first time in 2018 through an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have only had minimal interactions with EAs.
Male, 25 years old, and diagnosed with Asperger's (autism) and dyslexia.
Thanks a lot for the post! Just wanted to say that your posts (especially the one you made last year) have inspired a large part of my donations.
Of the almost $30k I donated over the last year, 70% went to AI policy orgs (mainly Palisade Research). I'm not sure where I would have donated without your posts, but I can't say with certainty that my donations would have ended up in the same place.
First off, I must say - I really like that answer.
I guess I'm concerned about how much value lock-in there will be with the creation of AGI. And I find it hard to imagine a majority caring about wild animal suffering or mass-producing happiness (e.g. creating a large amount of happy artificial sentience). But I do agree - I shouldn't give it a 90% likelihood.
I have a question I would like some thoughts on:
As a utilitarian, I personally believe alignment to be the most important cause area - though, oddly enough, while I believe x-risk reduction to be positive in expectation, I also believe the future is most likely to be net negative.
I personally believe, without a high level of certainty, that current utility on earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:
[Table of three future scenarios with their probabilities and utility values]
My reasoning for thinking "scenario 2" is more likely than "scenario 1" is based on what seem to be the values of the general public currently. Most people seem to care about nature conservation, but no one seems interested in mass-producing (artificial) happiness. And while the earth is only expected to remain habitable for about two billion years (and humans, assuming we avoid any x-risks, are likely to remain for much longer), I think, when it comes to it, we'll find a way to keep the earth habitable, and thus wild animal suffering will persist.
Based on these 3 scenarios, you don't have to be a great mathematician to realize that the future is most likely to be net negative, yet positive in expectation. While I still, based on this, find alignment to be the most important cause area, I find it quite demotivating that I spend so much of my time (and donations) to preserve a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to share my beliefs. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.
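As a toy illustration of how the future can be net negative in most scenarios yet still positive in expectation, here is a calculation with purely hypothetical numbers (not my actual estimates): if a very good future has probability 0.2 and utility +100, while the two net-negative scenarios have probabilities 0.5 and 0.3 and utility -1 each, then

$$E[U] = 0.2 \cdot (+100) + 0.5 \cdot (-1) + 0.3 \cdot (-1) = 19.2 > 0,$$

even though the probability of a net-negative future is $0.5 + 0.3 = 0.8$. One sufficiently large positive tail can dominate the expectation.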
So now I'm asking, what am I getting wrong? Why is the future likely to be net positive?
Thank you for the post! Can't believe I only saw it now.
I do agree that altruism can and should be seen as something that's net positive for one's own happiness for most people. But:
1. My post was mainly intended for people who are already "hardcore" EAs and are willing to make a significant personal sacrifice for the greater good.
2. You make some interesting comparisons to religion that I somewhat agree with. Though I don't think religion is as time-consuming as EA is for many EAs. I'm also sure EA would seem like less of a personal sacrifice if you were surrounded by EAs.
3. Trying to make EA more mainstream is not simple. Many ideas seem radical to the average person. You could of course try to make the ideas seem more in line with the average viewpoint, but I don't think that's worth it if it makes us less efficient.
Let me know if you need funding. I may be interested in donating $100-$1,000 monthly.