Jens Aslaug 🔸

Dentist
110 karma · Working (0-5 years) · Denmark

Bio

Dentist earning to give. I have pledged to donate a minimum of 50% (aiming for 60%), or $40,000-50,000 annually (at the beginning of my career). While I expect to mainly do "giving now", I plan, in periods of limited effective donation opportunities, to do "investing to give".

As a longtermist and total utilitarian, my goal is to find the cause that increases utility (no matter the time or type of sentient being) most time- and cost-effectively. In pursuit of this goal, I so far care mostly about: alignment, s-risk, artificial sentience and WAW (wild animal welfare) (but feel free to change my mind).

I first heard about EA in 2018 through an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have only had minimal interactions with EAs.

Male, 25 years old, and diagnosed with Asperger's (autism) and dyslexia.

Comments (20)

I have a question I would like some thoughts on:

As a utilitarian, I personally believe alignment to be the most important cause area - though weirdly enough, even though I believe x-risk reduction to be positive in expectation, I believe the future is most likely to be net negative.

I personally believe, without a high level of certainty, that the current utility on Earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:

  1. ~5% likelihood: ~10^1000 (very good future, e.g. hedonium shockwave)
  2. ~90% likelihood: -1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
  3. ~5% likelihood: ~-10^100 (s-risk-like scenarios)

 

My reasoning for thinking "scenario 2" is more likely than "scenario 1" is based on what seem to be the current values of the general public. Most people seem to care about nature conservation, but no one seems interested in mass-producing (artificial) happiness. And while the Earth is only expected to remain habitable for about two billion years (whereas humans, assuming we avoid any x-risks, are likely to remain for much longer), I think, when it comes to it, we'll find a way to keep the Earth habitable, and thus preserve wild animal suffering.

Based on these 3 scenarios, you don't have to be a great mathematician to realize that the future is most likely to be net negative, yet positive in expectation (see the worked calculation below). While I still, based on this, find alignment to be the most important cause area, I find it quite demotivating that I spend so much of my time (donations) to preserve a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to share my beliefs. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.
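To spell out the arithmetic (a rough sketch using the illustrative utility numbers above, which are of course made up):

$$E[u] \approx 0.05 \cdot 10^{1000} + 0.90 \cdot (-1) + 0.05 \cdot (-10^{100}) \approx 5 \times 10^{998} > 0$$

$$P(u < 0) \approx 0.90 + 0.05 = 0.95$$

So the future is net negative with ~95% probability, yet enormously positive in expectation, because the tiny chance of scenario 1 dominates the calculation.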

 So now I’m asking, what am I getting wrong? Why is the future likely to be net positive? 

Thank you for the post! Can't believe I only saw it now.

I do agree that altruism can and should be seen as something that's net positive for one's own happiness for most people. But:
1. My post was mainly intended for people who are already "hardcore" EAs and are willing to make a significant personal sacrifice for the greater good.
2. You make some interesting comparisons to religion that I somewhat agree with. Though I don't think religion is as time-consuming as EA is for many EAs. I'm also sure EA would seem less like a personal sacrifice if you were surrounded by EAs.
3. Trying to make EA more mainstream is not simple. Many of its ideas seem radical to the average person. You could of course try to make the ideas seem more in line with the average viewpoint. But I don't think that's worth it if it makes us less efficient.


Well, I totally agree with you, and the simple answer is - I don't. All of the graphs/tables I made (except fig. 5, which was a completely made-up example) are based on averages (including made-up numbers that are supposed to represent the average). They don't take into account personal factors - like your income or how expensive rent and utilities are in your area. Therefore the models should only be used as a very rough guide that can't work by itself (I guess one could make a more complex model that includes these factors). One should therefore also make a budget to see how it would look in real life, as I suggested in this section.

X-risk reduction (especially alignment) is highly neglected, and it's less clear how our actions can impact the value of the future. However, I think the impact of both is very uncertain, and I still think working on s-risk reduction and longtermist animal work is of high impact.

I agree. :) Your idea of lobbying and industry-specific actions might also be more neglected. In terms of WAW, I think it could help reduce the amount of human-caused suffering to wild animals, but likely not have an impact on naturally caused suffering.

Thanks a lot for the post! I'm happy that people are trying to combine the fields of longtermism and animal welfare.

 

Here are a few initial thoughts from a non-professional (note I didn't read the full post, so I might have missed something):

I generally believe that moral circle expansion, especially for wild animals and artificial sentience, is one of the best universal ways to help ensure a net positive future. I think that invertebrates or artificial sentience will make up the majority of moral patients in the future. I also suspect this to be good in a number of different future scenarios, since it could lower the chance of s-risks and improve outcomes for animals (or artificial sentience) whether or not there is a lock-in scenario.

I think progress on short-term direct WAW interventions is also very important, since I find it hard to believe that many people will care about WAW unless they can see a clear way of changing the status quo (even if current WAW interventions only have a minimal impact). I also think short-term WAW interventions could help change the narrative that interfering in nature is inherently bad.
(Note: I have personally noticed several people who share my values (in terms of caring greatly about WAW in the far future) caring only a little about short-term interventions.)

It could of course be argued that working directly on reducing the likelihood of certain s-risks and working on AI alignment might be a more efficient way of ensuring a better future for animals. I certainly think this might be true; however, I think these measures are less reliable due to the uncertainty of the future.

I think Brian Tomasik has written great pieces on why an animal-focused hedonistic imperative and gene drives might be less promising and more unlikely than they seem. I personally also believe it's unlikely to ever happen on a large scale for wild animals. However, if it does happen and it's done right (without severely disrupting ecosystems), I think genetic engineering could be the best way of increasing net well-being in the long term. But I haven't thought that much about this.

 

Anyways, I wouldn't be surprised if you already have considered all of these arguments. 

 

I’m really looking forward to your follow-up post :)  

I do agree that t in the formula is quite complicated to understand (and does not mean the same as what's typically meant by tractability). I tried to explain it, but since no one edited my work, I might be overestimating the understandability of my formulations. "t" is something like "the cost-effectiveness of reducing the likelihood of x-risk by 1 percentage point" divided by "the cost-effectiveness of increasing the net happiness of the future by 1 percent".
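Or written out (my restatement of the verbal definition above, not the exact formula from the post):

$$t = \frac{\text{cost-effectiveness of reducing x-risk by 1 percentage point}}{\text{cost-effectiveness of increasing the future's net happiness by 1\%}}$$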

That said, I still think the analysis lacks an estimate of how good the future will be, which could make the numbers for "t" and "net negative future" (or u(negative)) "more objective".

I do somewhat agree (my beliefs on this have also somewhat changed after discussing the theory with others). I think "conventional" WAW work has some direct (advocacy) and indirect (research) influence on people's values, which could help avoid certain lock-in scenarios or make them less severe. However, I think this impact is smaller than I previously thought, and I'm now of the belief that more direct work on how we can mitigate such risks is more impactful.
