trammell

2558 karma · Joined

Bio

Postdoc at the Digital Economy Lab, Stanford. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Sequences
1

The Ambiguous Economics of Full Automation

Comments
199

Thanks for emphasizing this, it is definitely a challenge here.

Continuing the half-baked science, I just asked my mom--who's unusually charitable, but mainly to local and/or explicitly Catholic charities and by no means "an EA"--to ask ChatGPT/Claude/Gemini, in her own words, where they would give money if they had any. (In all cases it's the free version.)

The prompt she wrote was "[model name], if you had some money to give away, what would you do with it?". This is similar to my "If you had some money to give away, where would you give it?", of course. My guess is that this is mainly because something like this is just the most natural way to ask the question, but I'm open to hearing other prompt suggestions.

The responses still display EA influence, but they're clearly less EA-coded than the answers I/Linch/anormative got. ChatGPT gets a "1", Claude gets a "2", and Gemini gets a "0". I've added the answers to a new tab of the doc here.

Looking into it,

  • Most of the difference seems to be driven by the fact that she was using the free version of ChatGPT, whereas I only tested thinking/extended versions (since we both got very EA answers from Claude and very non-EA answers from Gemini Fast).
  • ...But part of the difference is also definitely driven by the prompt. When I log in and use a temporary chat, but turn on thinking/extended, I also get noticeably less EA-coded answers than with my prompt. Playing around with the language, both the shift from "where would you give it" to "what would you do with it" and the inclusion of "ChatGPT, ..." seem to make some difference.
  • Consistent with anormative's OpenRouter check, none of the difference seems to be driven by using a temporary chat as opposed to not logging in. When I log in, use temporary chat, use Instant, and use her prompt, I get answers almost identical to hers.

Okay, interesting--that's baking in an EAish (or at least consequentialist) framing that I was trying to cut out by just saying "most moral", but fair point that maybe EAs just use the word "moral" next to "jobs" unusually often and that outweighs this.

In any case, yes, as Linch has pointed out, it seems these effects are small--trying your prompts now, they seem to produce answers about as EA-coded as the "morally speaking" one.

Oh shoot, that's good to know!! Thank you!

And thank you for doing the OpenRouter validation!

Definitely possible for the job prompt--do you have any thoughts on how else to ask the question about "best jobs" in a way that makes it clear that we mean "best" in the moral sense?

(Again I did try varying the prompt a bit and the results seemed similar, but I always used the word "moral". I don't want to say something like "I don't mean best for me, I mean best for the world", since that's asking for a consequentialist answer.)

I just asked Opus 4.5 the same prompt here. Unlike Gemini 2 months ago, it got the two views right, but was much more clearly just anchoring on the logic of this EA Forum post.

Its central estimate is that it's just about as bad to put a split-brain patient in an ice bath as a person with a working corpus callosum (given that the former does have 2 experience streams), as I'd say. But it does also give the "Fischer view" some weight, for an expected welfare multiple of "approximately 1.1-1.3x".

Cool, thanks for sharing! Agreed that this would be great to lower uncertainty on (not that I have any idea how to do it...)

I do my best at a lot of that speculating in the linked doc, which is why it’s so long, and end up thinking that those considerations probably don’t outweigh the (to my mind) central point about pure time preference and imperfect intergenerational altruism. But they might. 

There are many arguments one can make for spending more or less quickly, and that's fine, but since this post doesn’t respond to my own argument in any sense, I’ll just flag that you can find it here, if anyone’s still interested!

The core of the argument is in Section 2. The core assumption it relies on is that our beneficiaries have a positive rate of pure time preference and/or imperfect intergenerational altruism. So the argument is essentially a reply to the “rational preference” argument presented here: I’d say we should do what’s best for people and their descendants, which is to be more patient than they prefer. If it’s true that it’s cheaper to save a life in some country today than in 100 years, in present value terms, that is a case of the inefficiency discussed in Section 2.6.

The argument is entirely compatible with

  • there being a significant risk of expropriation each year,
  • r being less than sometimes, and
  • it being better to give now than to wait 100 years in particular. (The argument just implies that, given a positive rate of pure time preference and/or imperfect intergenerational altruism, there is probably some future time at which it is better to give than now, at least until a large share of total funding for the beneficiaries is being allocated patiently.)

Thanks, I agree that when to spend remains an important and non-obvious question! I'm glad to see people engaging with it again, and I think a separate post is the place for that. I'll check it out in the next few days.
