I am looking for work, and welcome suggestions for posts.
How others can help me
I am looking for work. I welcome suggestions for posts. You can give me feedback here (anonymous or not). Feel free to share your thoughts on the value (or lack thereof) of my posts.
How I can help others
I can help with career advice, prioritisation, and quantitative analyses.
It still means that there is a 2.76% chance that you and everybody you love will be dead in the next 15 years.
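For reference, a minimal sketch of the annualised equivalent, assuming a constant and independent annual risk (an assumption for illustration; it is not necessarily how the 2.76% was derived):

```python
# Convert a 15-year cumulative extinction risk into the equivalent constant
# annual risk, assuming the annual risk is constant and independent across years.
cumulative_risk = 0.0276  # 2.76 % over the next 15 years
years = 15

# 1 - (1 - p)**years = cumulative_risk  =>  p = 1 - (1 - cumulative_risk)**(1/years)
annual_risk = 1 - (1 - cumulative_risk) ** (1 / years)
print(f"{annual_risk:.4%}")  # 0.1864 %
```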
The tail distributions below of annual conflict and epidemic/pandemic deaths as a fraction of the global population suggest human extinction is orders of magnitude less likely than 10 % of the human population dying in 1 year. Do you think it is basically certain that the human population in 2040 (= 2025 + 15) will be lower than in 2025? If not, how do you justify the tail distribution of AI risk being so different from those of conflict and pandemic risk, considering these are major pathways via which AI risk can be expressed? @Peter Wildeford, I would be curious to know your thoughts too.
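To make the extrapolation concrete, here is a minimal sketch of the kind of power-law tail fit I have in mind. The tail index, threshold, and exceedance probability are hypothetical placeholders, not my fitted values:

```python
# Illustrative Pareto (power-law) tail for annual deaths as a fraction of the
# global population. All parameters are hypothetical placeholders.
alpha = 2.0            # hypothetical tail index
x_min = 0.001          # threshold: 0.1 % of the population dying in 1 year
p_above_x_min = 0.01   # hypothetical annual probability of exceeding the threshold

def annual_exceedance_probability(death_fraction: float) -> float:
    """Annual probability that deaths exceed the given fraction of the population."""
    return p_above_x_min * (death_fraction / x_min) ** (-alpha)

# A pure Pareto puts some mass above a fraction of 1, so I treat the
# probability of the fraction reaching 1 as the extinction probability.
p_10_percent = annual_exceedance_probability(0.10)  # 10 % of the population dying
p_extinction = annual_exceedance_probability(1.00)  # the whole population dying
print(f"{p_10_percent:.1e}, {p_extinction:.1e}, ratio {p_10_percent / p_extinction:.0f}")
# 1.0e-06, 1.0e-08, ratio 100
```

Under a fit like this, extinction is 2 orders of magnitude less likely than 10 % of people dying in 1 year, which is why I am asking how the AI tail could look so different.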
Relatedly, Table 1 of the report on the Existential Risk Persuasion Tournament (XPT) shows there was much more agreement between superforecasters and experts about catastrophic risk than extinction risk.
Thanks, Tejas. I now estimate broiler welfare and cage-free corporate campaigns benefit soil animals 444 and 28.2 times as much as they benefit chickens, under my best guess that soil animals have negative lives. So accounting for wild animals made me update towards chicken welfare reforms being much more cost-effective in absolute terms. However, I have still updated against these reforms, in the sense that I now think a much greater fraction of philanthropic spending is more cost-effective than them. I estimate broiler welfare and cage-free corporate campaigns are 68.9 % (= 744/(1.08*10^3)) and 12.4 % (= 134/(1.08*10^3)) as cost-effective as GiveWell's top charities accounting for target beneficiaries and soil animals, whereas I had estimated them to be 168 and 462 times as cost-effective as GiveWell's top charities accounting only for target beneficiaries.
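For transparency, the arithmetic behind those two percentages, with the cost-effectiveness figures in the units of my analysis (GiveWell's top charities at 1.08*10^3):

```python
# Cost-effectiveness accounting for target beneficiaries and soil animals,
# in the units of my analysis.
broiler_welfare = 744
cage_free = 134
givewell_top_charities = 1.08e3

print(f"{broiler_welfare / givewell_top_charities:.1%}")  # 68.9 %
print(f"{cage_free / givewell_top_charities:.1%}")        # 12.4 %
```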
I qualitatively agree with the points you make, but I do not support a global pause. I think tail risk is very low (I guess the risk of human extinction over the next 10 years is 10^-7), the upside is very large, and I expect people to overreact to AI risk. For example, it seems that people dislike deaths caused by autonomous driving much more than deaths caused by human driving, and I expect the adoption of autonomous cars to be slower than would be ideal to prevent deaths from road accidents. I would be very surprised if a global pause passed a standard cost-benefit analysis, in the sense of having a benefit-to-cost ratio higher than 1.
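As a toy illustration of that last point, here is a Fermi sketch using my guess of 10^-7 for the risk; the value per life, pause cost, and the assumption that a pause eliminates all of the risk are loudly hypothetical placeholders:

```python
# Toy benefit-to-cost ratio for a hypothetical 10-year global pause. It ignores
# benefits other than avoided extinction of people alive today, and all cost
# figures are placeholders for illustration.
extinction_risk_10y = 1e-7    # my guess for the risk over the next 10 years
population_2025 = 8.23e9
value_per_life = 1e7          # hypothetical: 10 M$ per life

# Benefit assuming the pause eliminates all of the risk (an upper bound).
benefit = extinction_risk_10y * population_2025 * value_per_life  # 8.23*10^9 $

# Hypothetical cost: forgoing 1 % of a gross world product of ~100 T$/year
# for 10 years.
cost = 0.01 * 100e12 * 10  # 10 T$

print(f"{benefit / cost:.1e}")  # 8.2e-04, far below 1
```

The ratio is very sensitive to these placeholders, especially to how one values future generations, which is why I would want to see a detailed quantitative model.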
Some complain there should be much more spending on AI safety because it is currently much smaller than that on AI capabilities, but these categories are vague, and I have not seen any detailed quantitative modelling showing that increasing spending on AI safety is very cost-effective. I do not think one can assume the spending on each category should ideally be the same.
Would you still be concerned if GWWC just reported the total recorded donations in $, and the number of pledges being fulfilled based on recorded donations? I agree the way they are reporting the number of pledgers should be updated, and I had expressed my concerns about this 14 months ago (I have added the email I sent them then to my previous comment). However, I do not think GWWC is taking undue credit in their cost-effectiveness analyses.
How many $ would Rethink Priorities (RP) have to receive to estimate welfare ranges for nematodes and plants with the methodology you used to get the estimates you presented? I would be happy to donate 3 k$ for this. You estimated nematodes have a probability of sentience of 6.8 %, which is 82.9 % (= 0.068/0.082) of the 8.2 % you estimated for silkworms, and you included nematodes and plants in your sentience table.
I liked listening to Gabby performing "Aidoni Ton Platron" with ACAPOLLiNATiONS.
The breadth of your love through voice burst the sound barrier in moments like these - those in the room that day will remember three women, in all of their vulnerability and love power, bringing the house to a standing ovation. You are loved, not just for shining, but in your darkness also. And we share this trace of your voice as an encouragement - to you on your journey beyond, and to those left behind, wondering... all our love.
I think most climate scientists do not predict human extinction from warming
I very much agree, and guess Toby's and Will's estimates for the existential risk from climate change are much higher than the median expert's guess for the risk of human extinction from climate change. Toby guessed an existential risk from climate change from 2021 to 2120 of 0.1 %. Richards et al. (2023) estimates "∼6 billion deaths [they say "∼5 billion people" elsewhere] due to starvation by 2100" for a super unlikely "∼8–12 °C+" of global warming by then, and I think they hugely overestimated the risk. Most importantly, they assumed land use and cropland area to be constant.
If AIs that are at this "significantly helping amateurs" capability threshold are released with open weights, I think that would kill around 100,000 people per year in expectation (relative to the counterfactual where no such models are released with open weights and closed-weight models have high-quality safeguards for preventing assistance with bioweapons).
Do you know about any models estimating the expected deaths from pandemics due to open-weight models? I estimated the mean annual deaths from epidemics/pandemics as a fraction of the global population from 1900 to 2023 to be 0.0287 %, which implies expected deaths from epidemics/pandemics of 2.36 M (= 2.87*10^-4*8.23*10^9) for the population in 2025. So I think your guess of 100 k expected deaths corresponds to an increase in the expected deaths from epidemics/pandemics of 4.24 % (= 100*10^3/(2.36*10^6)). I have little idea about whether this is too low or too high.
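A minimal sketch reproducing that arithmetic:

```python
# Expected annual deaths from epidemics/pandemics for the 2025 population,
# based on the 1900-2023 mean annual death fraction.
annual_death_fraction = 2.87e-4
population_2025 = 8.23e9

expected_deaths = annual_death_fraction * population_2025
print(f"{expected_deaths:.2e}")  # 2.36e+06, i.e. 2.36 M

open_weight_guess = 100e3  # the guess of 100 k expected deaths per year
print(f"{open_weight_guess / expected_deaths:.2%}")  # ~4.2 %
```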