I run Sentinel, a team that seeks to anticipate and respond to large-scale risks. You can read our weekly minutes here. I like to spend my time acquiring deeper models of the world, and generally becoming more formidable. I'm also a fairly good forecaster: I started out predicting on Good Judgment Open and CSET-Foretell, but now do most of my forecasting through Samotsvety, of which Scott Alexander writes:
Enter Samotsvety Forecasts. This is a team of some of the best superforecasters in the world. They won the CSET-Foretell forecasting competition by an absolutely obscene margin, “around twice as good as the next-best team in terms of the relative Brier score”. If the point of forecasting tournaments is to figure out who you can trust, the science has spoken, and the answer is “these guys”.
I used to post prolifically on the EA Forum, but nowadays, I post my research and thoughts at nunosempere.com / nunosempere.com/blog rather than on this forum, because:
But a good fraction of my past research is still available here on the EA Forum. I'm particularly fond of my series on Estimating Value.
My career has been as follows:
You can share feedback anonymously with me here.
Note: You can sign up for all my posts here: <https://nunosempere.com/.newsletter/>, or subscribe to my posts' RSS here: <https://nunosempere.com/blog/index.rss>
These are two different games. The joint game would be:

Value of {}: 0
Value of {1}: 60
Value of {2}: 0
Value of {1,2}: 100
and in that game player 1 is indeed better off in Shapley value terms if he joins together with player 2.
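Concretely, here is a quick brute-force sketch (my own illustration, not part of the original exchange; the function name and structure are hypothetical) which recovers Shapley values of 80 and 20 in the joint game, versus the 60 that player 1 gets alone:

```python
# Illustrative sketch: brute-force Shapley values by averaging each player's
# marginal contribution over every order in which players could join.
from itertools import permutations

def shapley_values(players, v):
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)  # p's marginal contribution
            coalition = with_p
    return {p: t / len(orders) for p, t in totals.items()}

# The joint game above: v({}) = 0, v({1}) = 60, v({2}) = 0, v({1,2}) = 100
values = {frozenset(): 0, frozenset({1}): 60, frozenset({2}): 0, frozenset({1, 2}): 100}
print(shapley_values([1, 2], values.get))  # -> {1: 80.0, 2: 20.0}
```

So player 1 goes from 60 alone to 80 in the joint game.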
I'll let you reflect on how/whether adding an additional option can't decrease someone's Shapley value, but I'll get back to my job :)
Thanks Felix, great question.
I'm not sure I'm following, and suspect you might be missing some terms; can you give me an example I can plug into shapleyvalue.com? If there is some uncertainty, that's fine (so if, e.g., in your example Newton has a 50% chance of inventing calculus and ditto for Leibniz, that's fine).
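For concreteness, here is the kind of expected-value game I'd be happy to work with (the 50% figures and the independence assumption are illustrative, not taken from your comment):

```python
# Illustrative expected-value game: each inventor has an independent 50%
# chance of inventing calculus; the value of calculus being invented is 1.
p_newton, p_leibniz = 0.5, 0.5

v = {
    frozenset(): 0.0,
    frozenset({"Newton"}): p_newton,    # 0.5
    frozenset({"Leibniz"}): p_leibniz,  # 0.5
    # 1 - P(neither invents it) = 0.75
    frozenset({"Newton", "Leibniz"}): 1 - (1 - p_newton) * (1 - p_leibniz),
}

# By symmetry, each gets 1/2 * (0.5 - 0) + 1/2 * (0.75 - 0.5) = 0.375,
# which sums to the grand coalition's expected value of 0.75.
```

A game in that shape, i.e., just the four coalition values, is what I can plug in.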
Seems true assuming that your preferred conversion between human and animal lives/suffering is correct, but one can question those ranges. In particular, it seems likely to me that how much you should value animals is not an objective fact of life but a factor that varies across people.
Reminds me of 2019, strongly upvoted and sent to a few people.
Some thoughts that come to mind:
The SFF has a section on what they would do with more money, which you could use to cooperate with them if you wanted to. https://survivalandflourishing.fund/sff-2024-further-opportunities
Looked at the paper. The abstract says:
So I think you are overstating it a bit, i.e., it's hard to support statements about how much existential risk comes from the classified risks vs. unknown unknowns/black swans. But if I'm getting the wrong impression, I'm happy to read the paper in depth.