Clara Torres Latorre 🔸

Postdoc @ CSIC
157 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation
2

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments
44

Fantastic post!

I'm trying to put myself in the shoes of someone who is new around here, and I would appreciate some definitions or links for the acronyms (GHD, AIS) and the meat-eater problem. Maybe others as well; I haven't been thorough.

Could you please update the post? It would be even better, in my opinion.

I would be very surprised if [neuron count + nociceptive capacity as moral weight] were standard EA assumptions. I haven't seen this among the people I know or among the major funders, who seem more pluralistic to me.

My main critique of this post is that it makes several different claims, and it's not clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.

Hey, kudos to you for writing a long-form post about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.

I think this post is mixing two different claims.

Critiquing “minimize suffering as the only terminal value → extinction is optimal” makes sense.

But that doesn’t automatically imply that some suffering-reduction interventions (like shrimp stunning) are not worth it.

You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.

I also appreciated the discussion of depth, though I have nothing to add about it here.

I would appreciate:
 - Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn't necessarily recommend extinction.
 - The OP clarifying the post by making its claims more explicit.

I like your post, especially the vibe of it.

At the same time, I have a hard time understanding what "quit EA" even means:

Stop saying you're EA? I guess that's fine.

Stop trying to improve the world using reason and evidence? Very sad. I'd suggest reading this post 50 times; I hope it convinces you otherwise.

99% of karma-weighted tagged posts being about AI seems wrong.

If you check the top 4 posts of all time, the 1st and 3rd are about FTX, the 2nd is about earning to give, and the 4th is about health, totalling > 2k karma.

You might want to check for bugs.

I started, and then realised how complicated it is to choose a set of variables and weights that make sense of "how privileged am I" or "how lucky am I".

I have an MVP (but I ran out of free LLM assistance), and right now the biggest downside is that if I include several variables, the results tend to land far from the top. I don't know what to do about this.

For instance, let's say that for "healthcare access", having good public coverage puts you in the top 10% bracket (number made up). If you then use 95% as the reference point for that bracket, any weighted average that includes it will end up some distance from the top.

So a plain weighted average over the different questions probably isn't good enough, I guess; see the sketch below.
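To make the issue concrete, here is a minimal sketch in Python with made-up numbers; the variable names, weights, and percentile values are all assumptions for illustration, not the actual MVP:

```python
# Minimal sketch: why a plain weighted average of per-variable percentile
# brackets always lands some distance below the top.

# Hypothetical per-variable percentile estimates (0-100, higher = more privileged).
# The healthcare value is capped at 95 because that's the reference point chosen
# for its coarse "top 10%" bracket.
percentiles = {
    "income": 98,
    "education": 99,
    "healthcare_access": 95,
}

# Equal weights, just for illustration.
weights = {name: 1 / len(percentiles) for name in percentiles}

overall = sum(weights[name] * value for name, value in percentiles.items())
print(f"Weighted average: {overall:.1f}")  # ~97.3, even for a maximally privileged profile
```

Because the coarse bracket bounds each variable's value, the averaged score can never reach 100, no matter how privileged the profile is.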

We can discuss and workshop it if you want.

I love the sentiment of the post, and tried it myself.

I think a prompt like this makes answers less extreme than they actually are, because it gives a vibes-based answer instead of a model-based answer. I would be surprised if you are not in the top 1% globally.

I would really enjoy something like this but more model-based, like the GWWC calculator. Does anyone know of something similar? Should I vibe code it and then ask for feedback here?

I tried this myself and I got "you're about 10-15% globally", which I think is a big underestimate.

For context: PPP-adjusted income is top 2%, I have a PhD (top 1% globally? less?), and I live alone in an urban area.

Probing further, a big factor pushing the estimate down is that I rent the place I live in instead of owning it (which, don't get me started on that from a personal-finance perspective, but it shouldn't make that big a difference, I guess?).

I don't identify as EA. You can check my post history. I try to form my own views and not defer to leadership or celebrities.

I agree with you that there's a problem with safetywashing, conflicts of interest, and bad epistemic practices in mainstream EA AI safety discourse.

My problem with this post is that the arguments are presented in a "wake up, I'm right and you're wrong" tone, directed at a group that includes both people who have never thought about what you're describing and people who already agree with you.

I also agree that the truth sometimes irritates, but that doesn't mean I should trust something more just because it irritates.

I think there is a problem with the polls all showing the same title.

fixed
