Clara Torres Latorre 🔸

Postdoc @ CSIC
165 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation
2

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments
50

Over the last decade, we should have invested more in community growth at the expense of research.

My answer is largely based on my view that short-timeline AI risk people are more dominant in the discourse than the credence I give their views warrants; ymmv.

I would like to see more low-quality / unserious content, mainly to lower the barrier to entry for newcomers and make the forum more welcoming.

Very unsure if this is actually a good idea.

I appreciate the irony and see the value in this, but I'm afraid that you're going to be downvoted into oblivion because of your last paragraph.

"At high levels of uncertainty, common sense produces better outcomes than explicit modelling"

Fantastic post!

I'm trying to put myself in the shoes of someone who is new around here, and I would appreciate definitions or links for the acronyms (GHD, AIS) and for the meat eater problem. Maybe others as well; I haven't been thorough.

Could you please update the post? It would be even better, in my opinion.

I would be very surprised if [neuron count + nociceptive capacity as moral weight] were standard EA assumptions. I haven't seen this among the people I know or the major funders, who seem more pluralistic to me.

My main critique of this post is that it makes several different claims, and it's not very clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.

Hey, kudos to you for writing a longform about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.

I think this post is mixing two different claims.

Critiquing “minimize suffering as the only terminal value → extinction is optimal” makes sense.

But that doesn’t automatically imply that some suffering-reduction interventions (like shrimp stunning) are not worth it.

You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.

Also, I appreciated the discussion of depth, but I have nothing to say about it here.

I would appreciate:
 - Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn't necessarily recommend extinction.
 - The OP clarifying the post by making the claims more explicit.

I like your post, especially the vibe of it.

At the same time, I have a hard time understanding what "quit EA" even means:

Stop saying you're EA? I guess that's fine.

Stop trying to improve the world using reason and evidence? Very sad. I'd suggest reading this post fifty times over, and I hope it convinces you otherwise.
