Give me anonymous feedback: https://www.admonymous.co/tetraspace
One issue that comes up with multi-winner approval voting is: suppose there are 15 longtermists and 10 global poverty people. All the longtermists approve the LTFF, MIRI, and Redwood; all the global poverty people approve the Against Malaria Foundation, GiveWell, and LEEP.
The top three vote winners are picked: they're the LTFF, with 15 votes, MIRI, with 15 votes, and Redwood, with 15 votes.
It is maybe undesirable that 40% of the people in this toy example think those charities are useless, yet 0% of the money goes to charities other than those. (Or maybe it's not! If a coin lands heads 60% of the time, you bet on heads 100% of the time.)
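The failure mode above is easy to see by running the toy tally directly. This is a minimal sketch of multi-winner approval voting on the hypothetical ballots from the example (the voter counts and charity names are just the scenario above, not real data):

```python
from collections import Counter

# Toy electorate from the example: 15 longtermists, 10 global poverty voters,
# each approving a slate of three charities.
ballots = (
    [["LTFF", "MIRI", "Redwood"]] * 15
    + [["AMF", "GiveWell", "LEEP"]] * 10
)

# Tally approvals and take the top three vote-getters.
approvals = Counter(c for ballot in ballots for c in ballot)
winners = [charity for charity, _ in approvals.most_common(3)]
print(winners)  # the 15-voter slate sweeps all three seats
```

Because every seat is filled by a plurality of approvals, the 60% bloc wins 100% of the seats, which is exactly the proportionality concern.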
Nowadays I would not be so quick to say that existential risk probability is mostly sitting on "never" 😔. This does open up an additional way to make a clock, literally just tick down to the median (which would be somewhere in the acute risk period).
I was looking for the address of the venue to plan travel, but couldn't find it on this events page, so I'll note it in a comment. It's listed on effectivealtruism.org, namely:

Tobacco Dock, Tobacco Quay, Wapping Lane, London E1W 2SF, United Kingdom.
Also, lending is something of a commitment mechanism: if someone gets or buys a book, they have forever to read it, which can easily mean it takes forever; but if they borrow it, the pressure to return it means they either read it soon or lose it.
For fiction, AI Impacts has an incomplete list here sorted by what kind of failure modes they're about and how useful AI Impacts thinks they are for thinking about the alignment problem.
As of this comment: 40%, 38%, 37%, 5%. I haven't taken into account time passing since the button appeared.
With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40% (1 − (1 − 1/396)^200). The number of participants is roughly between 2019's (125 codebearers) and 2020's (270 codebearers), so averaging like this is probably fine.
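The ~40% figure can be reproduced with the numbers given above: treat each codebearer-day as an independent trial with the per-day launch probability 1/396 from the post, and ask for the chance of at least one launch across 200 codebearers:

```python
# Per-codebearer-day launch probability from the post's Laplace-style prior.
p_day = 1 / 396

# Probability of at least one launch among 200 codebearers (one day each).
p_any_launch = 1 - (1 - p_day) ** 200
print(f"{p_any_launch:.0%}")  # prints "40%"
```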
I think there's a 5% chance that there's a launch but no MAD, because Peter Wildeford has publicly committed to MAD, himself says 5%, and he knows himself best.
I think the EA Forum is a little bit, but not vastly, more likely to initiate a launch, because the EA Forum hasn't done Petrov Day before and qualitatively people seem to be having a bit more fun and irreverence over here, so I'm giving 3% of the no-MAD probability to the EA Forum staying up and 2% to LessWrong staying up.
I looked up GiveDirectly's financials (it's a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers, so it is definitely capable of handling that amount! This is mostly $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.
Another principle, conservation of total expected credit:
Say a donor lottery has: you, who donate a fraction p of the total, with an impact (as judged by you) of X if you win; the other participants, who collectively donate a fraction q of the total, with an average impact (as judged by you) of Y if they win; and the benefactor, who donates the remaining fraction 1 − p − q, with an impact of 0 if they win. Then total expected credit assigned by you should be pX + qY (followed by A, B and C), and total credit assigned by you should be X if you win, Y if they win, and 0 otherwise (violated by C).
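The conservation principle is just a statement about expectations, which a quick numeric sketch makes concrete. The shares and impact values here are hypothetical illustrations, not figures from any actual lottery:

```python
# Hypothetical shares: you put in 20% of the pot, other participants 70%,
# and the benefactor tops up the remaining 10%.
p, q = 0.20, 0.70

# Hypothetical impacts (as judged by you): X if you win, Y if another
# participant wins, 0 if the benefactor's share "wins".
X, Y = 100.0, 60.0

# Ex-post credit in each outcome, weighted by that outcome's probability:
expected_credit = p * X + q * Y + (1 - p - q) * 0.0
print(expected_credit)  # pX + qY
```

Any credit-assignment scheme whose outcome-by-outcome credits don't average back to pX + qY is creating or destroying expected credit, which is what the principle rules out.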
I've been thinking of how to assign credit for a donor lottery.
Some ways that seem compelling:
Some principles about assigning credit:
Some actual uses of assigning credit and what they might say:
What were your impressions of the amount of non-Open Philanthropy funding allocated across each longtermist cause area?