
Oliver Kuperman

Comments

This is a very difficult question to answer, as it depends heavily on the specifics of each scenario, on which groups of animals you consider sentient, and on your default estimates of how worthwhile their lives are. For AI, I think the standard paper-clipper/misaligned superintelligence probably doesn't go so far as to kill all complex biological life immediately, since unlike humans, most animals would not pose a threat to its goals or compete with it for resources. However, in the long run, I assume a lot of life would die off as AI develops industry without regard for the environmental effects (robots do not need much clean air, or water, or low-acidity oceans). In the long, long run, I do not see why an AI system would not construct a Dyson sphere.


Ultimately, however, I do not think this really changes the utility of these scenarios, as human civilization is also mostly indifferent to animals. The existence of factory farming (which will last longer with humans, since humans enjoy meat while AI probably will not care about it) will likely outweigh any potential wild-animal-welfare efforts pursued by humanity.
 

For non-AI extinction risks (nuclear war, asteroids, supervolcanoes), sentient animal populations will sharply decline and then gradually recover, just as they have after previous mass extinction events.


TL;DR:

For essentially all extinction scenarios, the utility calculation comes down to the difference between long-term and short-term human flourishing weighed against the short-term factory farming of animals raised for humans. Wild animals have similar expected utility across scenarios, especially if you think their lives are roughly net-neutral on average, since they will either persist unaffected or die (maybe at some point humanity will want to intervene to help wild animals have net-positive lives, but this is highly uncertain).


 

Fair point. In the future I will think more about acknowledging that distinction, especially when engaging with an audience (unlike this one) which isn't already comfortable with EA.

I agree that the Metaculus prompt is unclear, but the Manifold market is far more precise. For the purposes of resolution:

“Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being. Unlike narrow or weak AI, which is designed and trained for specific tasks (like language translation, playing a game, or image recognition), AGI can theoretically perform any intellectual task that a human being can. It involves the capability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”


That definition seems to imply a system that would permanently change the knowledge economy, but in any case, I am more interested in what I should do with my time before such a system is developed.

My concerns are specific to CS: I feel I am already worse at coding than ChatGPT, I expect AI to continue improving at it in the near term, and I expect the CS job market to become hyper-competitive due to the combination of an oversupply/backlog of CS majors and a shrinking number of entry-level openings.


Thanks for the recommendation about 80K advising, that seems like a good resource.