Researcher @ Shapley Maximizers
12368 karma · Joined


I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers ÖU. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.

I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at / rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform that has goals different from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.

I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed a search tool which aggregates predictions from many different platforms, which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when it evaporated.

Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <>, or subscribe to my posts' RSS here: <>


Vantage Points
Estimating value
Forecasting Newsletter


Topic contributions

I don't think one year is enough time to observe effects. Anecdotally, I think (but am not sure) that I started to have problems after three years of being a vegetarian.

I don't think not banning users for first offences is necessarily the highest bar I want to reach for. For instance, consider this comment. Like, to exaggerate this a bit, imagine receiving that comment in one of the top 3 worst moments of your life.

Prompted by a different forum, a small case study: the Effective Altruism Forum has been impoverished over the last few years by not being lenient with valuable contributors when they had a bad day.

In a few cases, I later learnt that some longstanding user had a mental health breakdown/psychotic break/bipolar something or other. To some extent this is an arbitrary category, and you can interpret going outside normality through the lens of mental health, or through the lens of "this person chose to behave inappropriately". Still, my sense is that leniency would have been a better move when people go off the rails.

In particular, the best move seems to me a combination of:

  • In the short term, when a valued member is behaving uncharacteristically badly, stop them from posting
  • Follow up a week or a few weeks later to see how the person is doing

Two factors here are:

  • There is going to be some overlap, in that people with a propensity for some mental health disorders might be more creative, better able to see things from weird angles, better able to make conceptual connections.
  • In a longstanding online community, people grow to care about others. If a friend goes off the rails, there is the question of how to stop them from causing harm to others, but there is also the question of how to help them be ok, and the second one can sometimes just dominate.

You could generalize a bit further by looking at the behavior of

  1. The integral of the ratio of the world's value under two interventions, or $\int_0^\infty \frac{v_a(t)}{v_b(t)} \, dt$. This integral could have a value even if the integral of each intervention is indefinite.
  2. The ratio of the limit of integrals under two interventions, or $\lim_{T \to \infty} \frac{\int_0^T v_a(t)\,dt}{\int_0^T v_b(t)\,dt}$. This could likewise have a value even if $\lim_{T \to \infty} \int_0^T v_a(t)\,dt$ isn't defined.
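As a numerical sketch of the two quantities above: the trajectories `v_a` and `v_b` below are hypothetical stand-ins for the world's value under each intervention (my own illustrative choices, not from the comment), picked so that each individual integral diverges while the combined quantities stay well-defined.

```python
import numpy as np

# Hypothetical value trajectories (illustrative only):
#   v_a(t) = 1/(1+t)  -- its integral over [0, inf) diverges like log(T)
#   v_b(t) = 1 + t    -- its integral diverges like T^2/2
def v_a(t):
    return 1.0 / (1.0 + t)

def v_b(t):
    return 1.0 + t

def trapezoid(y, x):
    """Plain trapezoid rule, to avoid depending on numpy version details."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Log-spaced grid from ~0 out to a large horizon T = 1e6,
# dense near zero where the integrands vary fastest.
t = np.concatenate(([0.0], np.geomspace(1e-6, 1e6, 1_000_000)))

# Quantity 1: integral of the ratio. v_a/v_b = 1/(1+t)^2, whose integral
# converges to 1 even though each intervention's own integral is indefinite.
q1 = trapezoid(v_a(t) / v_b(t), t)

# Quantity 2: ratio of integrals up to T. Taking v_c = 2*v_a, each integral
# still diverges like log(T), but their ratio is well-defined (here, exactly 2).
q2 = trapezoid(2.0 * v_a(t), t) / trapezoid(v_a(t), t)

print(f"integral of ratio ~= {q1:.3f}, ratio of integrals = {q2:.1f}")
```

The point of the pairing is that neither $\int v_a$ nor $\int v_b$ exists on its own, yet both combined quantities converge, which is what makes them usable for comparing interventions.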

That said, this is a nice project; if you have a budget, it shouldn't be hard to find one or a few OS enthusiasts to delegate it to.

My sense is that 100 is an underestimate for the number of OS libraries as important as that one. But I'm not sure if the correct number is 1k, 10k or 100k.

One possible path is to find a good leader who can scalably use labour, and follow them?

I upvoted this offer. I have an alert for bet proposals on the forum, and this is the first genuine one I've seen in a while.
