Jens Aslaug 🔸

Dentist
103 karma · Working (0–5 years) · Denmark

Bio

Dentist earning to give. I have pledged to donate a minimum of 50% of my income (aiming for 60%), or $40,000–50,000 annually at the beginning of my career. While I expect to mainly practice "giving now", I plan to do "investing to give" during periods with limited effective donation opportunities.

As a longtermist and total utilitarian, my goal is to find the cause that increases utility most time- and cost-effectively, no matter the time or type of sentient being involved. In pursuit of this goal, I so far care mostly about alignment, s-risks, artificial sentience, and WAW (wild animal welfare) (but feel free to change my mind).

I first heard about EA in 2018 through an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have so far had only minimal interactions with other EAs.

Male, 25 years old, and diagnosed with Asperger's (autism) and dyslexia.

Comments (18)

I have a question I would like some thoughts on:

As a utilitarian, I personally believe alignment to be the most important cause area. Weirdly enough, though, even while I believe x-risk reduction to be positive in expectation, I also believe the future is most likely to be net negative.

I personally believe, without a high level of certainty, that current utility on Earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of −1, I would describe my beliefs about future scenarios like this:

  1. ~5% likelihood: ~10^1000 (very good future, e.g. hedonium shockwave)
  2. ~90% likelihood: −1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
  3. ~5% likelihood: ~−10^100 (s-risk-like scenarios)

 

My reasoning for thinking scenario 2 is more likely than scenario 1 is based on what seem to be the values of the general public today: most people care about nature conservation, but almost no one seems interested in mass-producing (artificial) happiness. And while the Earth is only expected to remain habitable for about two billion years (whereas humans, assuming we avoid any x-risks, are likely to be around for much longer), I think that, when it comes to it, we'll find a way to keep the Earth habitable, and with it wild animal suffering.

Based on these three scenarios, you don't have to be a great mathematician to realize that the future is most likely to be net negative, yet positive in expectation. While I still, on this basis, find alignment to be the most important cause area, I find it quite demotivating to spend so much of my time (donations) preserving a future that I find unlikely to be positive. But most longtermists (and EAs in general) don't seem to share these beliefs. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.
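To spell out the arithmetic behind that claim (using my own rough numbers from the list above, with the current world normalized to −1):

```latex
\mathbb{E}[U] \;\approx\; 0.05\cdot 10^{1000} \;+\; 0.90\cdot(-1) \;+\; 0.05\cdot\left(-10^{100}\right) \;\approx\; 5\times 10^{998} \;>\; 0,
\qquad P(U < 0) \approx 0.95.
```

The expectation is dominated entirely by the small chance of an astronomically good outcome, even though a negative future is far more probable.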

So now I'm asking: what am I getting wrong? Why is the future likely to be net positive?

Well, I totally agree with you, and the simple answer is: I don't. All of the graphs/tables I made (except fig. 5, which was a completely made-up example) are based on averages (including made-up numbers meant to represent the average). They don't take personal factors into account, like your income or how expensive rent and utilities are in your area. The models should therefore only be used as a very rough guide that can't stand on its own (one could imagine a more complex model that includes these factors; a rough sketch of what that could look like is below). One should also make a budget to see how it would look in real life, as I suggested in this section.
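As an illustration of what such a personalised adjustment could look like (the function and all figures here are hypothetical, not taken from the post's models):

```python
# Hypothetical sketch: capping an average-based pledge target by what a
# personal budget actually allows. All names and numbers are illustrative.
def affordable_donation(income, rent_utilities, other_essentials, pledge_fraction=0.5):
    """Return the lower of the pledged amount and the disposable income
    left after rent/utilities and other essential spending."""
    pledged = pledge_fraction * income
    disposable = income - rent_utilities - other_essentials
    return max(0.0, min(pledged, disposable))

# Example: high local rent caps the donation below the 50% pledge.
print(affordable_donation(80_000, rent_utilities=30_000, other_essentials=15_000))  # 35000.0
```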

X-risk reduction (especially alignment) is highly neglected, and it's less clear how our actions can impact the value of the future. However, I think the impact of both is very uncertain, and I still think working on s-risk reduction and longtermist animal work is high-impact.

I agree. :) Your idea of lobbying and industry-specific actions might also be more neglected. In terms of WAW, I think it could help reduce the amount of human-caused suffering of wild animals, but it is unlikely to have an impact on naturally caused suffering.

Thanks a lot for the post! I'm happy that people are trying to combine the fields of longtermism and animal welfare.

 

Here are a few initial thoughts from a non-professional (note: I didn't read the full post, so I might have missed something):

I generally believe that moral circle expansion, especially for wild animals and artificial sentience, is one of the best universal ways to help ensure a net positive future. I think invertebrates or artificial sentience will make up the majority of moral patients in the future. I also suspect moral circle expansion to be good in a number of different future scenarios, since it could lower the chance of s-risks and improve the situation for animals (or artificial sentience) whether or not there is a lock-in scenario.

I think progress on short-term direct WAW interventions is also very important, since I find it hard to believe that many people will care about WAW unless they can see a clear way of changing the status quo (even if current WAW interventions only have a minimal impact). I also think short-term WAW interventions could help change the narrative that interfering in nature is inherently bad.
(Note: I have personally noticed several people who share my values, in terms of caring greatly about WAW in the far future, caring only little about short-term interventions.)

It could of course be argued that working directly on reducing the likelihood of certain s-risks, and working on AI alignment, might be a more efficient way of ensuring a better future for animals. I certainly think this might be true; however, I think these measures are less reliable due to the uncertainty of the future.

I think Brian Tomasik has written great pieces on why an animal-focused hedonistic imperative and gene drives might be less promising and more unlikely than they seem. I also personally believe it's unlikely to ever happen on a large scale for wild animals. However, if it does happen and is done right (without severely disrupting ecosystems), I think genetic engineering could be the best way of increasing net well-being in the long term. But I haven't thought that much about this.

 

Anyways, I wouldn't be surprised if you've already considered all of these arguments.

 

I’m really looking forward to your follow-up post :)  

I do agree that "t" in the formula is quite complicated to understand (and does not mean the same as what is typically meant by tractability). I tried to explain it, but since no one edited my work, I might be overestimating how understandable my formulations are. "t" is something like "the cost-effectiveness of reducing the likelihood of x-risk by 1 percentage point" divided by "the cost-effectiveness of increasing net well-being by 1 percent".
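In symbols, my reading of that definition is roughly (the notation is mine, not from the post):

```latex
t \;=\; \frac{\text{cost-effectiveness of reducing } P(\text{x-risk}) \text{ by 1 percentage point}}
             {\text{cost-effectiveness of increasing net well-being by } 1\%}
```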

That said, I still think the analysis lacks an estimate of how good the future will be, which could make the numbers for "t" and "net negative future" (or u(negative)) "more objective".

I do somewhat agree (my beliefs on this have also somewhat changed after discussing the theory with others). I think "conventional" WAW work has some direct (advocacy) and indirect (research) influence on people's values, which could help avoid certain lock-in scenarios or make them less severe. However, I think this impact is smaller than I previously thought, and I now believe that more direct work on how we can mitigate such risks is more impactful.

If I understand you correctly, you believe the formula does not take into account how good the future will be. I somewhat agree that there is a related problem in my analysis; however, I don't think the problem lies in the formula itself.

The problem you're talking about is actually taken into account by "t". Note that the formula is about "net well-being", i.e. "all well-being" minus "all suffering". So if future net well-being is very low, then the tractability of WAW will be high (i.e. "t" will be low). E.g. let's say net well-being = 1 (in some made-up unit); then it's going to be a lot easier to increase it by 1% than if net well-being = 1000.
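To spell that out: a 1% increase is an absolute change proportional to the current level, so (assuming a roughly fixed cost per absolute unit of well-being, which is my simplification):

```latex
\Delta W = 0.01\,W:\qquad W = 1 \;\Rightarrow\; \Delta W = 0.01,
\qquad W = 1000 \;\Rightarrow\; \Delta W = 10.
```

So the lower future net well-being turns out to be, the cheaper a 1% gain is, which is what drives "t" down.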

However, I do agree that estimates of how good the future is expected to be are technically needed to do this analysis correctly, specifically for estimating "t" and "net negative future" (or u(negative)) in the "main formula". I may fix this in the future.

(I hope it's not confusing that I'm answering both your comments at once.)

While I will have to consider this for longer, my preliminary thought is that I agree with most of what you said, which means I may no longer stand by some of my previous statements.

Thanks for the link to that post. I do agree and I can definitely see how some of these biases have influenced a couple of my thoughts. 

--

On your last point, about future-focused WAW interventions, I'm thinking of things that you mention in the tractability section of your post:...

Okay, I see. Well, actually, my initial thought was that all four of those options had a similar impact on the long-term future, which would justify focusing on short-term interventions and advocacy (corresponding to points three and four). However, after further consideration, I think the first two are higher-impact when considering the far future, which means I (at least for right now) agree with your earlier statement:

“So rather than talking about "wild animal welfare interventions", I'd argue that you're really only talking about "future-focused wild animal welfare interventions". And I think making that distinction is important, because I don't think your reasoning supports present-focused WAW work.”

While I still think the "flow-through effect" is very real for WAW, I think it's probably true that working on s-risks more directly might be higher-impact.

--

I was curious whether you have any thoughts on these conclusions (drawn from a number of things you said plus my personal values):

  • Since working on s-risks directly is more impactful than working on them indirectly, direct work should be done when possible.
  • There is no current organization working purely on animal-related s-risks (as far as I know). So if that's your main concern, your options are a start-up or convincing an "s-risk mitigation organization" that you should work on this area full-time.
    • Animal Ethics works on advocating moral circle expansion. But since this has less direct impact on the long-term future, it has less of an effect on reducing s-risks than more direct work.
  • If you're also interested in reducing other s-risks (e.g. from artificial sentience), then working for an organization that directly tries to reduce the probability of a number of s-risks is your best option (e.g. the Center on Long-Term Risk or the Center for Reducing Suffering).