Jim Buhler

S-Risk research Grantee @ Polaris Ventures
451 karma · Joined · Working (0-5 years) · London, UK

Bio


My main focuses at the moment:
▪ S-risk macrostrategy (e.g., what AI safety proposals decrease rather than increase s-risks?)
▪ How to improve the exchange of knowledge in the s-risk community, and other s-risk field-building projects.

Previously, I worked at organizations such as EA Cambridge and EA France (community director), the Existential Risk Alliance (research fellow), and the Center on Long-Term Risk (events and community associate).

I've also researched various longtermist topics (some of this work is posted here) and completed a Master's in moral philosophy.

I've also written some stuff on LessWrong.

You can give me anonymous feedback here. :)

Sequences
1

What values will control the Future?

Comments
50

Topic contributions
4

I don't think Andreas Mogensen ever gave a talk on his (imo underrated) work on maximal cluelessness, which has staggering implications for longtermists. And I find all the arguments that have been given against his conclusions (see, e.g., the comments under the above-linked post or under this LW question from Anthony DiGiovanni) quite unconvincing.

My main crux regarding the inter-civ selection effect is how fast space colonization will get. E.g., if it's possible to produce small black holes, you can use them for incredibly efficient propulsion, and even just slightly grabby civs still spread at approximately the speed of light, roughly the same speed as extremely grabby civs. Maybe it's also possible with fusion propulsion, but I'm not sure; you'd need to ask astro nerds.

I haven't thought about whether this should be the main crux but very good point! Magnus Vinding and I discuss this in this recent comment thread.

I guess the main hope is not that morality gives you a competitive edge (that's unlikely) but rather that enough agents stumble on it anyway, e.g. by realizing open/empty individualism is true through philosophical reflection.

Yes. Related comment thread I find interesting here.

Yeah, so I was implicitly assuming that even the fastest civilizations don't "easily" reach the absolute maximum physically possible speed, such that what determines their speed is the ratio of resources spent on spreading as fast as possible [1] to resources spent on other things (e.g., being careful and quiet).

I don't remember thinking about whether this assumption is warranted, however. If we expect all civs to reach this maximum physically possible speed without needing to dedicate 100% of their resources to it, that drastically dampens the grabby selection effect I mentioned above (see the toy sketch below).

[1] Which, if maximized, would make the civilization loud by default in the absence of resources spent on avoiding this, I assume. (harfe in this thread gives a good specific instance backing up this assumption.)
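To make the dampening point concrete, a toy sketch (the saturating form and the symbols v_max, x, and k are purely illustrative assumptions, not something from the thread): suppose a civ's expansion speed saturates as a function of the fraction x of its resources devoted to spreading,

\[ v(x) = v_{\max}\,\bigl(1 - e^{-kx}\bigr), \qquad x \in [0,1]. \]

If k is small, speed is sensitive to x, so civs that sacrifice everything else to spread outpace careful, quiet ones (a strong selection effect). If k is large, even a modest x gives v(x) close to v_max, so quiet and loud civs expand at nearly the same speed and the selection effect is dampened.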

Interesting! Fwiw, the best argument I can immediately think of against silent cosmic rulers being likely/plausible is that we should expect these to expand more slowly than typical grabby aliens and therefore be less "fit" and selected against (see my Grabby Values Selection Thesis, although here it seems to be more about expansion strategies than about values).

Not sure how strong this argument is though. The selection effect might be relatively small (e.g. because being quiet is cheap or because non-quiet aliens get blown up by the quiet ones that were "hiding"?).

Interesting, thanks for sharing your thoughts on the process and stuff! (And happy to see the post published!) :)

Interesting, makes sense! Thanks for the clarification and for your thoughts on this! :)

 If I want to prove that technological progress generally correlates with methods that involve more suffering, yes! Agreed.

But while the post suggests that this is a possibility, its main point is that suffering itself is not inefficient, such that there is no reason to expect progress and methods that involve less suffering to correlate by default (a much weaker claim).

This makes me realize that the crux is perhaps this below part more than the claim we discuss above.

While I tentatively think the “the most efficient solutions to problems don’t seem like they involve suffering” claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.

Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples (enslaved humans and exploited animals), suffering itself is not a limiting factor. It is rather the physical limitations of these biological beings, relative to machines that could do a better job at their tasks.

I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.

Sorry for the confusion and thanks for pushing back! Helps me clarify what the claims made in this post imply and don't imply. :)

I do not mean to argue that the future will be net negative. (I even make this disclaimer twice in the post, aha.) :)

I simply argue that the "convergence between efficiency and methods that involve less suffering" argument in favor of assuming it'll be positive is unsupported.

There are many other arguments/considerations to take into account to assess the sign of the future.

Thanks!

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation?

Hmm... not sure. I feel like my claims are very weak and true even in future worlds without autonomous advanced AIs.


"One large driver of humanity's moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves".

Agreed, but this is more similar to argument (A), fleshed out in this footnote, which is not the one I'm assailing in this post.
