I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of my website's content gets cross-posted to the EA Forum, but I also write about some non-EA topics on the site.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
I see. I took the chart under "The compensation schedule's structure" to imply that the Axiom of Continuity held for suffering, because the x-axis shows suffering measured on a cardinal scale.
If you reject Continuity for suffering then I don't think your assumptions are self-contradictory.
Yeah, it's also something I want to get more clarity on. This post is about the step of the chain that goes from "donate money to campaign" -> "candidate gets elected", but it's harder to say what happens after that. I'm working on some future posts that I hope will help me get a better understanding.
Some thoughts:
I won't go through this whole post but I'll pick out a few representative bits to reply to.
This statement expresses a high degree of confidence in a claim that has, as far as I can tell, zero supporting evidence. I would strongly bet against the prediction that LLMs will never be able to originate an explanatory theory.
We still don't know how humans create language, or prove mathematical conjectures, or manipulate objects in physical space, and yet we have created AIs that can do those things.
I am not aware of any such insight? This claim seems easily falsified by the existence of superforecasters.
And: if prediction is impossible in principle, then you can't confidently say that ASI won't kill everyone, so you should regard it as potentially dangerous. But you seem to be quite confident that you know what ASI will be like.
https://www.lesswrong.com/w/orthogonality-thesis