NunoSempere

Director, Head of Foresight @ Sentinel
12693 karma
nunosempere.com/blog

Bio

I run Sentinel, a team that seeks to anticipate and respond to large-scale risks. You can read our weekly minutes here. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking.
  • The Forum website has become more annoying to me over time: more cluttered and more pushy in terms of curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform that has goals different from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.


My career has been as follows:

  • Before Sentinel, I set up my own niche consultancy, Shapley Maximizers. This was very profitable, and I used the profits to bootstrap Sentinel. I am winding this down, but if you have need of estimation services for big decisions, you can still reach out.
  • I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms—a more up-to-date alternative might be adj.news. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when it evaporated.
  • I write a Forecasting Newsletter which has gathered a few thousand subscribers; I previously abandoned it but have recently restarted it. I used to really enjoy winning bets against people too confident in their beliefs, but I try to do this in structured prediction markets, because betting against normal people started to feel like taking candy from a baby.
  • Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term."
  • Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>

Sequences

Vantage Points
Estimating value
Forecasting Newsletter

Comments


Small scale (1-10k) epistemic infrastructure or experiments, like adj.news

I like how comprehensive this is.

Thanks. In some ways it's pretty casual; you could easily imagine a version with 10x or 100x more effort.

Minor, but existential risk includes more than extinction. So it could be "humans haven't undergone an unrecoverable collapse yet (or some other way of losing future potential)."

Agree!

some of these items are much more likely than others to kill 100M+ lives

Yeah, my intuition is that the ratio for solar flares seems particularly high here, because electrical system failure could be pretty correlated.

Seems like a pretty niche worry; I wouldn't read too much into it not being discussed much. It's just that, if true, it does provide a reason to discount global health and development deeply.

Here are some caveats/counterpoints:

  • EA/OP does give large amounts of resources to areas that others find hard to care about, in a way which does seem more earnest & well-meaning than much of the rest of society
  • The alternative framework in which to operate is probably capitalism, which is not perfectly aligned with human values either.
  • There is no evil mustache-twirling mastermind. To the extent these dynamics arise, they do so out of some reasonably understandable constraints, like having a tight-knit group of people.
  • In general it's pretty harsh to just write a list of negative things about someone/some group
  • It's much easier to point out flaws than to operate in the world
  • There are many things to do in the world, and limited competent operators to throw at problems. Some areas will just see less love & talent directed to them. There is some meta-prioritization going on in a way which broadly does seem kind of reasonable.
Answer by NunoSempere

The EA forum has tags. The one for criticisms of effective altruism is here: https://forum.effectivealtruism.org/topics/criticism-of-effective-altruism


Beyond that, here are some criticisms I've heard or made. Hope it helps:

Preliminaries:

  • EA is both a philosophy and a social movement/group of existing people. Defenders tend to defend the philosophy, and in particular the global health part, which is more unambiguously good. However, many of the more interesting things happen in the more speculative parts of the movement.
  • A large chunk of non-global health EA or EA-adjacent giving is controlled by Open Philanthropy. Alternatively, there needs to be a name for "the community around Open Philanthropy and its grantees" so that people can model it. Hopefully this sidesteps some definitional counterarguments.

Criticism outlines:

  • Open Philanthropy has created a class of grants officers who, by dint of having very high salaries, are invested in Open Philanthropy retaining its current giving structure.
  • EA seduces some people into believing that they would be cherished members, but then leaves them unable to find jobs and in a worse position than they otherwise would have been in if they had built their career capital elsewhere. cf. https://forum.effectivealtruism.org/posts/2BEecjksNZNHQmdyM/don-t-be-bycatch
  • EA the community is much more of a pre-existing clique and mutual admiration society than its emphasis on the philosophy when presenting itself would indicate. This is essentially deceptive, as it leads prospective members, particularly neurodivergent ones, to have incorrect expectations. cf. https://forum.effectivealtruism.org/posts/2BEecjksNZNHQmdyM/don-t-be-bycatch
  • It's amusing that the Center for Effective Altruism has taken a bunch of the energy of the EA movement, but itself doesn't seem to be particularly effective cf. https://nunosempere.com/blog/2023/10/15/ea-forum-stewardship/
  • EA has tried to optimize movement building naïvely, but its focus on metrics has led it to pursue the most cost-effective interventions for the wrong thing, in a way which is self-defeating cf. https://forum.effectivealtruism.org/posts/xomFCNXwNBeXtLq53/bad-omens-in-current-community-building
  • Worldview diversification is an ugly prioritization framework that generally doesn't follow from the mathematical structure of anything but rather from political gerrymandering cf. https://nunosempere.com/blog/2023/04/25/worldview-diversification/
  • Leadership strongly endorsed FTX, which upended many plans of the rank and file after it turned out to be a fraud and its promised funding was recalled
  • EA has a narrative about how it searches for the best interventions using tools like cost-effectiveness analyses. But for speculative interventions, you need a lot of elbow grease and judgment calls. You have a lot of degrees of freedom. This amplifies/enables clique dynamics.
  • Leaders have been somewhat hypocritical around optimizing strongly, with a "do what I say not what I do" attitude towards deontological constraints. cf. https://forum.effectivealtruism.org/posts/5o3vttALksQQQiqkv/consequentialists-in-society-should-self-modify-to-have-side
  • The community health team lacks many of the good qualities of a (US) court, such as the division of powers between judge, jury and executioner, the possibility to confront one's accuser, or even knowing what one has been accused of. It is not resilient to adversarial manipulation, and privileges the first party when both have strong emotions.
  • EA/OP doesn't really know how to handle the effects of throwing large amounts of money at people's beliefs. Throwing money at a particular set of beliefs makes it gain more advocates and makes it harder to update away from. Selection effects will apply at many levels. cf. https://nunosempere.com/blog/2023/01/23/my-highly-personal-skepticism-braindump-on-existential-risk/
  • EA is trapped in a narrow conceptual toolkit, which makes critics very hard to understand/slightly schizophrenic once they step away from that toolkit. cf. Milan Griffes

Finally, for global health, something which keeps me up at night is the possibility that sub-Saharan Africa is trapped in a Malthusian equilibrium, where further aid only increases the population, which increases suffering.

Here are some I made for Benjamin Todd (previously mentioned here)... right before FTX went down. Not sure how well they've aged.

The previous version of this post had a comment from Julia Wise outlining some of her past mistakes, as well as a reply from Alexey Guzey (now deleted, but you can see some of the same contents below the table of contents here). You can also see comments from Julia here and here reflecting on her handling of complaints against Owen Cotton-Barratt. I think these are all informative in terms of predicting that sometimes the people pointed at in this post can fail as well.

It does feel like a just-so story either way

Yeah, possible. It's just been on my mind since FTX.
