CTL

Clara Torres Latorre 🔶️

Postdoc @ CSIC
24 karma · Joined · Working (6-15 years)

Participation
2

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments
11

Does the forum have a policy on necro-posting ("resurrecting" old material by commenting on it or editing it)? I didn't find one in the "How to use the Forum" sequence.

Hi, just a small suggestion: could you make the title more descriptive for people who don't know what ALAMIME is?

For example:

Malaria control through R&D: ALAMIME Consortium Webinar

I'm speaking from what I've personally seen, but it seems reasonable to assume it generalizes.

There's a sizable pool of burned-out knowledge workers, and one of the major causes is a lack of value alignment, i.e. working for companies that only care about profit.

I think this cohort would be a good target for a campaign:

  • Effective giving can give meaning to the money they make
  • Dedicating some time to voluntary challenges can help with their burnout (if it's due to meaninglessness)

  1. Thank you for pointing out log utility; I am aware of this model (and of other utility functions). Any reasonable utility function is concave (diminishing returns), which can explain insurance to some extent, but not lotteries (a numerical sketch is below).
  2. I could imagine that, for an altruistic actor, altruistic utility becomes "more linear" if it is a linear combination of the utility functions of the recipients of help. This might be defensible, but it is not obvious to me unless that actor is utilitarian, at least in their altruistic actions.
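
To make the concavity point concrete, here is a rough sketch with made-up numbers (log utility, starting wealth 100); it is only an illustration, not anyone's actual model:

For concave $u$, Jensen's inequality gives $\mathbb{E}[u(X)] \le u(\mathbb{E}[X])$. Take $u(w) = \log w$ and a 1% chance of losing 50:
$$0.99 \log 100 + 0.01 \log 50 \approx \log 99.31,$$
so this agent would pay up to about 0.69 for full insurance, more than the expected loss of 0.5. Offered a ticket that pays 1000 with probability 1/1000 (expected value 1), the same agent gets
$$0.999 \log 100 + 0.001 \log 1100 \approx \log 100.24,$$
i.e. the ticket is worth only about 0.24 to them, less than its expected value, so concavity alone cannot explain paying more than 1 for it.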

I am in a similar boat as you. I don't feel comfortable being identity-EA because I have some core philosophical disagreements.

However, I have been inspired by EA to the point of making some substantive life changes, and I participate in my local EA group. I try to do the things that are convincing enough in their own right, even though I do not necessarily agree with all the premises.

I believe there is value in participating in the whatever-ist party even if you are not comfortable calling yourself a whatever-ist, not because of ideological purity but simply because the label doesn't feel true to you.

Question: how can we reconcile the fact that expected value is linear with the fact that preferences may be nonlinear?

Example: people are typically willing to pay more than the expected value for a small chance of a big benefit (a lottery ticket), or to remove a small chance of a big loss (insurance).

This example could be dismissed as a "mental bias" or as "irrational". However, it is not obvious to me that linearity is a virtue, and even if it is, we are human and our subjective experience is not linear.
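
To spell out the lottery side with made-up but representative numbers: a ticket with a one-in-a-million chance of winning 1,000,000 has
$$\mathbb{E}[X] = 10^{-6} \cdot 1{,}000{,}000 = 1,$$
yet people routinely pay more than that for it; symmetrically, insurance premiums sit above the expected payout (that margin is the insurer's income), and people still buy cover. In both cases the price people accept is on the "wrong" side of the linear expected value.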

My uninformed guess is that an automated system doesn't need to be superintelligent to create trouble; it only needs some specific abilities (depending on the kind of trouble).

For example, the machine doesn't need to be agentic if there is a human agent deciding to make bad stuff happen.

So I think this would be an important point to discuss; maybe someone has done it already.

Thank you for your comment. I edited my post for clarity. I was already thinking of x-risk or s-risk (both in AGI risk and in narrow AI risk).

(Just speculating; I would like to have other inputs.)

I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of risks from narrow AI. Here I mean AGI x-risk/s-risk versus x-risk/s-risk from narrow AI (possibly combined with malevolent actors or coordination problems).

I worry about prioritising AGI when doing outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.

Thank you Toby.

I agree that, to observe macroeconomic effects, something has to happen at a broad scale, and my question was quite speculative.

On the other hand, about the Forum: I see that posts read like essays and are meant to be informative. I wonder what the right place is for things that might be interesting or valuable but don't fit that general vibe, for instance a simple question. Do they belong here? As quick takes?
