Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience
7356 karma · Joined · Working (6-15 years)

Participation
4

  • Received career coaching from 80,000 Hours
  • Attended more than three meetings with a local EA group
  • Completed the AGI Safety Fundamentals Virtual Program
  • Completed the In-Depth EA Virtual Program

Sequences
2

Deconfusion and Disentangling EA
Policy and International Relations Primer

Comments
878

Topic contributions
1

In my personal view, there was a tremendous failure by global health security organizations to capitalize on the crisis; they were focused on stopping spread, and waited until around mid-2021 to start looking past COVID. This was largely a capacity issue, but it was also a strategic failure, and by the time anyone was seriously looking at things like the pandemic treaty, the window had closed.

This seems great - I'd love to see it completed, polished a bit, and possibly published somewhere. (If you're interested in more feedback on that process, feel free to ping me.)

I certainly agree it's some marginal evidence of propensity, and that the outcome, not the intent, is what matters - but don't you think that mistakes become less frequent with greater understanding and capacity?

Agreed on impacts - but I think intention matters when considering what the past implies about the future, and as I said in another reply, on that basis I'd claim the Great Leap Forward isn't a reasonable precedent for predicting future abuse or tragedy.

Thanks for writing and posting this!

I think it's important to say this because people often over-update on the pushback to things they hear about, since the second-order effects are visible; they don't notice that the counterfactual is the thing in question not happening at all, which far outweighs the real but typically comparatively minor problems created.

Not to answer the question, but to add a couple of points I know you're aware of but didn't explicitly mention: there are two reasons EA does better than most groups here. First, EA is adjacent to and overlaps with the LessWrong-style rationality community, whose years of writing on better probabilistic reasoning, and on why and how to reason more explicitly, had a huge impact. And second, the similarly adjacent forecasting community, which was kickstarted in a real sense by people affiliated with FHI (Matheny and IARPA, Robin Hanson, and Tetlock's later involvement).

Both of these communities have spent time thinking about better probabilistic reasoning, and have a lot to say about thinking probabilistically in general rather than implicitly asserting certainty based on which side of 50% an estimate falls on. Many in EA, including myself, have long advocated for these ideas being embraced even more centrally in EA discussions, especially because I would claim the concerns of the rationality community keep being relevant to EA's failures, or prescient of later-embraced EA concerns and ideas.

Do you have any reason to think, or evidence, that the claimed downvoting occurred?

[This comment is no longer endorsed by its author]

I think (tentatively) that making (even giant and insanely consequential) mistakes with positive intentions, like the Great Leap Forward, is in a meaningful sense far less bad than making mistakes more obviously aimed at cynical self-benefit at the expense of others - like, say, most of US foreign policy in South America, or post-Civil-War policy related to segregation.

Wait, did you want them to "denounce" the choice of shutting down USAID, or the individual?
