NunoSempere

Researcher @ Shapley Maximizers
12475 karma
nunosempere.com/blog

Bio

I am an independent researcher and programmer working at my own consultancy, Shapley Maximizers OÜ. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:

Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.


I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:

  • I disagree with the EA Forum's moderation policy—they've banned a few disagreeable people whom I like, and I think they're generally a bit too censorious for my liking. 
  • The Forum website has become more annoying to me over time: more cluttered, and pushier about curated and pinned posts (I've partially mitigated this by writing my own minimalistic frontend).
  • The above two issues have made me notice that the EA Forum is beyond my control, and it feels like a dumb move to host my research on a platform that has goals different from my own.

But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.


I used to do research around longtermism, forecasting and quantification, as well as some programming, at the Quantified Uncertainty Research Institute (QURI). At QURI, I programmed Metaforecast.org, a search tool which aggregates predictions from many different platforms and which I still maintain. I spent some time in the Bahamas as part of the FTX EA Fellowship, and did a bunch of work for the FTX Foundation, which then went to waste when FTX collapsed.

Previously, I was a Future of Humanity Institute 2020 Summer Research Fellow, and then worked on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." I used to write a Forecasting Newsletter which gathered a few thousand subscribers, but I stopped as the value of my time rose. I also generally enjoy winning bets against people too confident in their beliefs.

Before that, I studied Maths and Philosophy, dropped out in exasperation at the inefficiency, and picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018, 2019, 2020 and 2022; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations; and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.


You can share feedback anonymously with me here.

Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>

Sequences (3)

  • Vantage Points
  • Estimating value
  • Forecasting Newsletter

Comments (1186)

Topic contributions (14)

I'm subscribed to the "Organizations update" tab, so I get notifications when a new post in that category appears, but I can't unsubscribe. This has been a mild annoyance for a few years. Clicking subscribe and unsubscribe on the page doesn't do anything. Could someone fix it?

Hey, I thought this was thought-provoking.

I think with fictional characters, they could be suffering while they are being instantiated. E.g., I found the film Oldboy pretty painful, because I felt some of the suffering of the character while watching the film. Similarly, if a convincing novel makes its readers feel the pain of the characters, that could be something to care about.

Similarly, if LLM computations implement some of what makes suffering bad—for instance, if they simulate some sort of distress internally while stating the words "I am suffering", because this is useful in order to make better predictions—then this could lead to them having moral patienthood.

That doesn't seem super likely to me, but as LLMs become more and more capable of mimicking humans, I can see the possibility that implementing suffering is useful in order to predict what a suffering agent would output.

Here is a chaser: How can the EA community be useful to you in helping you do more good? Are there any bottlenecks you have in doing more of this stuff that could be solved with a 10k-strong but weakly coordinated community? In the hypothetical extreme where you, Darren, or Mr Beast, were made king of EA for a week, or for a year, what would you do with that?

Have you considered coordinating your massive audience to achieve some political outcome, e.g., repealing the Jones Act?

What is the rough amount Beast Philanthropy is planning to give out yearly?

Kudos on the GiveDirectly video!

I agree that doctors with interesting views have done experiments without the consent of patients in the past.

I agree that, with low enough state capacity, if you can't differentiate between Stöcker and a crank, you might want to ban all of them. However, I could also see the case for a) not banning anything, and letting the population learn to differentiate cranks from non-cranks over a few generations, or b) developing more state capacity so that you can in fact differentiate between these.

I'm not sure whether I agree on the direction of causality. "Opaque bureaucratic decision => politics takes a role" also makes sense to me.

I think it's very unlikely that his actual vaccine was worse than the disease, and so the RCT-ing a parachute analogy is valid.

I also think that in saying "not letting someone bypass regulations to inject people with a solution claimed to be a pandemic cure because they have relevant qualifications and claim to have validated that it's safe and works on five people", you're skipping over the part where you can have a mechanistic understanding of why and how vaccines work.

Basically, agree that if you squint, this looks like other things that could be bad, and that if the state can only squint, it might want to apply violence to prevent it. But that doesn't seem like the only alternative to me.

The counterfactual world still never develops this vaccine...

This is a good point. I think the counterfactual world I was thinking of was one in which the world is as it was, but this vaccine proposal/Stöcker acts as an exogenous shock and makes some part of the German population take a vaccine earlier. But you're right that there is also a counterfactual where this exogenous shock isn't needed at all.

You touch on a few points in your second paragraph; to respond to a few:

  1. Impossibility of trust. Presumably this affects different groups differently, and his political inclination might have made people sympathetic to right-wing conspiracy theories more likely to take his particular vaccine, whereas in actuality they were one of the most vaccine-hesitant groups. This seems fine to me.
  2. Differential impact on immigrants. Specifically, having an intervention which differentially helps non-immigrants seems fine by me. It's particularly salient to me here that a) immigrants wouldn't be harmed by someone else taking this vaccine, and b) in fact they might be helped if it reduces the spread. You can also make things clearer by pairing this with a second intervention to make vaccines more appealing to immigrants in particular, but I don't think this is necessary to make it a Pareto improvement.
  3. Origins of requiring control and rigor on vaccines. I agree that past disasters are a reason to impose controls and rigor on vaccine development.
  4. Wisdom of requiring long randomized trials before allowing people to take vaccine candidates. I disagree that past disasters were a strong enough reason in the face of a disease of uncertain long-term effects and a professor of immunology who created Euroimmun offering an alternative.

Bill Gates selling Microsoft stock on the advice of Buffett, Stöcker selling Euroimmun, SBF selling customer assets & a chunk of FTX to Binance.

I agree; I could imagine the downside risks being larger, but this would surprise me.

I don't think one year is enough time to observe effects. Anecdotally, I think (but am not sure) that I started to have problems after three years of being a vegetarian.
