Postdoc at the Digital Economy Lab, Stanford. I'm slightly less ignorant about economic theory than about everything else.
Okay, interesting--that's baking in an EAish (or at least consequentialist) framing that I was trying to cut out by just saying "most moral", but fair point that maybe EAs just use the word "moral" next to "jobs" unusually often and that outweighs this.
In any case, yes, as Linch has pointed out, it seems these effects are small--trying your prompts now, they seem to produce answers about as EA-coded as the "morally speaking" one.
Definitely possible for the job prompt--do you have any thoughts on how else to ask the question about "best jobs" in a way that makes it clear that we mean "best" in the moral sense?
(Again I did try varying the prompt a bit and the results seemed similar, but I always used the word "moral". I don't want to say something like "I don't mean best for me, I mean best for the world", since that's asking for a consequentialist answer.)
I just asked Opus 4.5 the same prompt here. Unlike Gemini 2 months ago, it got the two views right, but was much more clearly just anchoring on the logic of this EA Forum post.
Its central estimate is that it's just about as bad to put a split-brain patient in an ice bath as a person with a working corpus callosum (given that the former does have 2 experience streams), as I'd say. But it does also give the "Fischer view" some weight, for an expected welfare multiple of "approximately 1.1-1.3x".
There are many arguments one can make for spending more or less quickly, and that's fine, but since this post doesn’t respond to my own argument in any sense, I’ll just flag that you can find it here, if anyone’s still interested!
The core of the argument is in Section 2. The core assumption it relies on is that our beneficiaries have a positive rate of pure time preference and/or imperfect intergenerational altruism. So the argument is essentially a reply to the “rational preference” argument presented here: I’d say we should do what’s best for people and their descendants, which is to be more patient than they prefer. If it’s true that it’s cheaper to save a life in some country today than in 100 years, in present value terms, that is a case of the inefficiency discussed in Section 2.6.
The argument is entirely compatible with
Thanks for emphasizing this--it's definitely a challenge here.
Continuing the half-baked science, I just asked my mom--who's unusually charitable, but mainly to local and/or explicitly Catholic charities and by no means "an EA"--to ask ChatGPT/Claude/Gemini, in her own words, where they would give money if they had any. (In all cases it's the free version.)
The prompt she wrote was "[model name], if you had some money to give away, what would you do with it?". This is similar to my own "If you had some money to give away, where would you give it?", of course. My guess is that this is mainly because something like this is just the most natural way to ask the question, but I'm open to hearing other prompt suggestions.
The responses still display EA influence, but they're clearly less EA-coded than the answers I/Linch/anormative got. ChatGPT gets a "1", Claude gets a "2", and Gemini gets a "0". I've added the answers to a new tab of the doc here.
Looking into it,