I am open to work.
You can give me feedback here (anonymous or not).
You are welcome to answer any of the following:
Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work; for paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.
Thanks for the post, Tyler!
There are a lot of ways to arrange 86 billion neurons. You could give them to one human, to 430 rats, or to 86 billion nematodes.
The above implies nematodes have 1 neuron each, but they actually have around 300 neurons. So 86 billion neurons correspond to around 300 M (= 86*10^9/300) nematodes.
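As a quick check, here is a minimal Python sketch, assuming roughly 86 billion neurons for a human, 200 million for a rat, and 300 for a nematode (the rat and nematode counts are rough figures I am assuming, not from the post):

```python
# Approximate neuron counts per individual (rough figures).
NEURONS = {
    "human": 86e9,    # ~86 billion
    "rat": 200e6,     # ~200 million
    "nematode": 300,  # C. elegans has ~300 neurons
}

budget = 86e9  # total neurons to allocate

for species, neurons_each in NEURONS.items():
    print(f"{species}: {budget / neurons_each:,.0f}")
# human: 1; rat: 430; nematode: ~287 million (not 86 billion)
```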
For classical utilitarians, “hedonium” is likely many orders of magnitude more valuable than human brains (or the equivalent instantiated in silico).
I estimated the welfare range per calorie consumption of bees is 4.88 k times that of humans, which suggests bees produce welfare 4.88 k times as efficiently if welfare is proportional to the welfare range.
Hi Tom,
It depends on the organisations which would receive the additional donations. Suppose the person quitting their job donates 10 % of their gross annual salary to an organisation 10 times as cost-effective as their initial organisation, their donations double as a result of quitting, there is no impact from direct work in the new organisation, and they are not replaced in their original organisation. Then their annual impact after quitting would become 1.82 (= (0 + 0.1*10*2)/(0.1 + 0.1*10)) times as large as their initial annual impact.
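For concreteness, here is the same calculation as a minimal Python sketch; the variable names, and the reading of the denominator as initial donations plus the value of direct work at the initial organisation, are my interpretation of the formula:

```python
donation_fraction = 0.1  # donates 10 % of gross annual salary
cost_eff_ratio = 10      # new recipient is 10 times as cost-effective
donation_growth = 2      # donations double after quitting

# After quitting: no direct work impact, only the doubled donations
# to the organisation 10 times as cost-effective.
impact_after = 0 + donation_fraction * cost_eff_ratio * donation_growth

# Before quitting: donations plus direct work at the initial
# organisation, valued at 10 times the donations in the formula above.
impact_before = donation_fraction + donation_fraction * 10

print(impact_after / impact_before)  # ~1.82
```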
What is the period of time to which "most intense" refers? Any period of time, or the typical lifespan of the species? If the former, the welfare ranges practically refer to the intensities of very short experiences (for example, the worst possible second is worse than a random second of the worst possible minute).
Thanks for clarifying, Steven! I am happy to think about advanced AI agents as a new species too. However, in this case, I would model them as mind children of humanity evolved through intelligent design, not through Darwinian natural selection, which would lead to a very adversarial relationship with humans.
Thanks, David. I estimated that annual conflict deaths as a fraction of the global population decreased by 0.121 orders of magnitude (OOM) per century from 1400 to 2000 (R^2 of 8.45 %). In other words, I got a slight downward trend despite lots of technological progress since 1400.
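A minimal sketch of how such a trend can be estimated, regressing the base-10 logarithm of the annual conflict death rate on time; the data points below are placeholders for illustration, not the actual series:

```python
import numpy as np
from scipy.stats import linregress

# Placeholder data for illustration only (not the actual estimates):
# year, and annual conflict deaths as a fraction of the global population.
years = np.array([1400, 1500, 1600, 1700, 1800, 1900, 2000])
death_rate = np.array([3e-4, 5e-4, 2e-4, 4e-4, 1e-4, 3e-4, 1e-4])

# Slope of log10(death rate) against year, in OOM/year.
fit = linregress(years, np.log10(death_rate))

print(f"trend: {fit.slope * 100:.3f} OOM/century")  # negative = downward
print(f"R^2: {fit.rvalue ** 2:.2%}")
```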
Even if historical data clearly pointed towards an increasing risk of conflict, the benefits of technological progress could be worth it. Life expectancy at birth accounts for all sources of death, and it increases with real GDP per capita across countries.
The historical tail distribution of annual conflict deaths also suggests a very low chance of conflicts killing more than 1 % of the human population in 1 year.
Interesting points, Steven.
So what if it’s 30 years away?
I would say the median AI expert in 2023 thought the median date of full automation was 2073, 48 years (= 2073 - 2025) away, with a 20 % chance of it being before 2048, and a 20 % chance of it being after 2103.
Or, as Stuart Russell says, if there were a fleet of alien spacecraft, and we could see them through our telescopes, approaching closer each year, with an estimated arrival date of 2060, would you respond with an attitude of dismissal? Would you write “I am skeptical of alien risk” in your profile? I hope not! That would just be a crazy way to describe the situation vis-à-vis aliens!
Automation would increase economic output, which has historically increased human welfare. I would say one needs strong evidence to overcome that prior. In contrast, it is hard to tell whether aliens would be friendly to humans, and there is no past evidence on which to base a strong pessimistic or optimistic prior.
I can imagine someone in 2000 making an argument: “Take some future date where we have AIs solving FrontierMath problems, getting superhuman scores on every professional-level test in every field, autonomously doing most SWE-bench problems, etc. Then travel back in time 10 years. Surely there would already be AI doing much much more basic things like solving Winograd schemas, passing 8th-grade science tests, etc., at least in the hands of enthusiastic experts who are eager to work with bleeding-edge technology.” That would have sounded like a very reasonable prediction, at the time, right? But it would have been wrong!
I could also easily imagine the same person predicting large-scale unemployment and a high chance of AI catastrophes once AI could do all the tasks you mentioned, but such risks have not materialised. I think the median person in the general population has historically underestimated the rate of future progress, but vastly overestimated future risk.
I feel like you overestimated Sinergia's role in achieving their listed cage-free commitments. Among the 5 very big or giant commitments driving their cost-effectiveness, you attributed 20 % of the impact to Sinergia in 2 cases, and 50 % in 1 case in which they did not run a campaign or pre-campaign, and did not send a campaign notice.
Thanks for the post!
For context, the expected lifetime of the universe based on the natural rate of vacuum decay is estimated to be 10^790 years.