Clara Torres Latorre 🔸

Postdoc @ CSIC
218 karma · Joined · Working (6-15 years) · Barcelona, Spain

Participation
2

  • Completed the Introductory EA Virtual Program
  • Attended more than three meetings with a local EA group

Comments
58

Cool (:

I'm specifically interested in automating the filtering of EA-related opportunities and events so we can write our weekly announcements.

I think that, with a bit of tweaking, this would be a public good for EA community building and might be reused by many groups.
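If it helps make the idea concrete, here's a minimal sketch of the kind of filtering step I have in mind. The event records, keyword list, and function name are all hypothetical:

```python
from datetime import date, timedelta

# Hypothetical event records and keywords, just to illustrate the idea.
events = [
    {"title": "EA Global application deadline", "date": date(2025, 3, 10)},
    {"title": "Local board game night", "date": date(2025, 3, 12)},
]
keywords = ["EA", "effective altruism", "80,000 Hours"]

def upcoming_relevant(events, today, horizon_days=7):
    """Keep events within the next week whose title matches a keyword.

    Naive substring matching; a real version would want something smarter
    for judging relevance.
    """
    cutoff = today + timedelta(days=horizon_days)
    return [
        e for e in events
        if today <= e["date"] <= cutoff
        and any(k.lower() in e["title"].lower() for k in keywords)
    ]

print(upcoming_relevant(events, today=date(2025, 3, 8)))
# -> only the EA Global deadline survives the filter
```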

Hey, cool toy model (:

I'd bet METR doesn't have enough data on how messy the tasks are to include that here, but I would expect messiness to have real-world consequences and to tug in the direction of agents being less viable outside of well-defined domains.

Very interesting critique. I've seen these kinds of comments in academic circles doing evals work, and there have been attempts to improve the situation, such as the General Scales Framework:

https://arxiv.org/abs/2503.06378

Think of it as passing an IQ test instead of a school exam: more predictive power. It's not perfect, of course, but thankfully some people are really taking this seriously.

I think allowing this debate to happen would be a fantastic opportunity to put our money where our mouth is regarding not ignoring systemic issues:
https://80000hours.org/2020/08/misconceptions-effective-altruism/#misconception-3-effective-altruism-ignores-systemic-change

On the other hand, deciding that democratic backsliding is off limits, and not even trying to have a conversation about it, could (rightfully, in my view) be treated as evidence of EA being in an ivory tower and disconnected from the real world.

I was thinking the same: a bet resistant to something like a COVID dip plus rebound would be more in the spirit of the argument.

Maybe GDP growth over the previous all-time high?
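To make the metric concrete, here's a tiny sketch (with hypothetical GDP numbers) of what "growth over the previous all-time high" would measure; a dip-and-rebound only counts as growth once GDP surpasses its old peak:

```python
import numpy as np

# Hypothetical GDP index values: a COVID-style dip and rebound in the middle.
gdp = np.array([100.0, 103.0, 98.0, 101.0, 105.0])

prior_peak = np.maximum.accumulate(gdp)[:-1]  # all-time high before each year
growth_over_peak = gdp[1:] / prior_peak - 1   # only new ground counts

print(np.round(growth_over_peak, 3))
# [ 0.03  -0.049 -0.019  0.019] -- the rebound year (98 -> 101) still reads
# negative because GDP hasn't yet passed its old peak of 103
```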

I think you have a point. However, I strongly disagree with the framing of your post, for several reasons.

First, advertising your hedge fund here made me doubt the entire post.

Second, the link does not go to a mathematical paper but to the whitepapers section of your startup. That said, the first PDF there appears to be the math behind your post.

Third, calling that PDF a mathematical proof is a stretch (at least from my point of view as a math researcher). Expressions like "it is plausible that" never belong in a mathematical proof.

And most importantly, the substance of the argument:

In your model, you assume that allies' effort depends on the actor's confidence signal (sigma), and that their contribution is monotonic in it (larger if the actor signals more confidence). I find this assumption questionable: from an ally's or investor's perspective, unwarranted high confidence can undermine trust.

Then you take the fact that the optimal signal (when optimizing for outcomes) is higher than the optimal forecast (when optimizing for accuracy) as evidence against calibration. I would take it as evidence for calibration, just with possible actions (such as signaling) included among the variables one optimizes for success.

In my view, yours is a nice toy model for explaining why, in certain situations, signaling more confidence than would be accurate can be instrumental.
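To illustrate with numbers: here's a minimal sketch using my own made-up functional form (a linear, monotonic ally response, as your model assumes), not the whitepaper's actual equations. With the response strictly increasing in sigma, the outcome-optimal signal hits the upper corner, well above the calibrated forecast:

```python
import numpy as np

# Made-up functional form, not the whitepaper's model: allies' effort, and
# hence the success probability, increases monotonically with the signal.
def success_prob(sigma, base=0.4, ally_gain=0.3):
    return min(base + ally_gain * sigma, 1.0)

sigmas = np.linspace(0.0, 1.0, 1001)
probs = np.array([success_prob(s) for s in sigmas])

# Accuracy-optimal (calibrated) signal: the fixed point sigma = p(sigma).
calibrated = sigmas[np.argmin(np.abs(sigmas - probs))]
# Outcome-optimal signal: whatever maximizes the success probability.
outcome_optimal = sigmas[np.argmax(probs)]

print(f"calibrated signal:      {calibrated:.3f}")        # ~0.571
print(f"outcome-optimal signal: {outcome_optimal:.3f}")   # 1.000 (the corner)
```

The gap between the two numbers is a statement about which variables you let yourself optimize, not evidence against calibration.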

Ironically, your post and your whitepaper practice what they preach, using expressions like "demonstrate" and "proof" without properly acknowledging that most of the argument's load rests on the modelling assumptions.

How much time is this expected/recommended to take?

  1. Depends on what you count as meaningful earning potential.

    One of the big ideas I take from the early days of effective altruism is that strategically donating 10% of the median US salary can save more lives than a whole career as a doctor in the US (see the back-of-envelope sketch after this list).

    Same logic applies to animal welfare, catastrophic risk reduction, and other priorities.
     
  2. A different question is whether you would be satisfied with having a normal job and donating 10% (or whatever percentage makes sense in your situation).
     
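As a very rough back-of-envelope for point 1 (every number below is a placeholder assumption I'm plugging in for illustration, not a sourced estimate), the donation side of the comparison looks like this; compare the result with whatever estimate you trust for a doctor's direct marginal impact:

```python
# Back-of-envelope only: every number is an assumed round placeholder,
# not a sourced estimate.
median_salary = 60_000   # rough US median annual salary, USD
donation_rate = 0.10     # the classic 10% pledge
cost_per_life = 5_000    # assumed order of magnitude for top global-health charities
career_years = 40

lives = median_salary * donation_rate / cost_per_life * career_years
print(f"statistical lives saved via donations: {lives:.0f}")  # 48
```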

Over the last decade, we should have invested more in community growth at the expense of research.

My answer is largely based on my view that short-timeline AI risk views are more dominant in the discourse than the credence I give them warrants; YMMV.
