
trammell

1933 karma · Joined

Bio

Postdoc at the Digital Economy Lab, Stanford, and research affiliate at the Global Priorities Institute, Oxford. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Comments (157)

Depends how much it costs to lengthen life, and how much more the second added century costs than the first, and what people’s discount rates are… but yes, agreed that allowing for increased lifespan is one way the marginal utility of consumption could really rise!

Hello, thank you for your interest!

Students from other countries can indeed apply. The course itself will be free of charge for anyone accepted.

We also hope to offer some or all attendees room, board, and transportation reimbursement, but how many people will be offered this support, and to what extent, will depend on the funding we receive and on the number, quality, and geographic dispersion of the applicants. When decisions are sent out, we'll also notify those accepted about what support they are offered.

I think this is a good point, predictably enough--I touch on it in my comment on C/H/M's original post--but thanks for elaborating on it!

For what it's worth, I would say that historically, it seems to me that the introduction of new goods has significantly mitigated but not overturned the tendency for consumption increases to lower the marginal utility of consumption. So my central guess is (a) that in the event of a growth acceleration (AI-induced or otherwise), the marginal utility of consumption would in fact fall, and more relevantly (b) that most investors anticipating an AI-induced acceleration to their own consumption growth would expect their marginal utility of consumption to fall. So I think this point identifies a weakness in the argument of the paper/post (as originally written; they now caveat it with this point)--a reason why you can't literally infer investors' beliefs about AGI purely from interest rates--but doesn't in isolation refute the point that a low interest rate is evidence that most investors don't anticipate AGI soon.
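The mechanism behind this kind of inference is the textbook Ramsey rule relating interest rates to expected consumption growth (this is standard background, not a formula stated in the comment above):

```latex
% Ramsey rule (standard consumption-based asset pricing identity):
%   r     : real interest rate
%   \delta: pure rate of time preference
%   \eta  : elasticity of marginal utility of consumption
%   g     : expected consumption growth rate
r = \delta + \eta g
```

On this reading, if new goods keep marginal utility from falling much as consumption grows, the effective \eta is small, and even a large anticipated AI-driven g need not show up as a high r — which is why low rates are weaker evidence against anticipated AGI than the unqualified argument suggests.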

Thanks! No—I’ve spoken with them a little bit about their content but otherwise they were put together independently. Theirs is remote, consists mainly of readings and discussions, and is meant to be at least somewhat more broadly accessible; ours is in person at Stanford, consists mainly of lectures, and is meant mainly for econ grad students and people with similar backgrounds. 

Okay great, good to know. Again, my hope here is to present the logic of risk compensation in a way that makes it easy to make up your mind about how you think it applies in some domain, not to argue that it does apply in any domain. (And certainly not to argue that a model stripped down to the point that the only effect going on is a risk compensation effect is a realistic model of any domain!)

As for the role of preference-differences in the AI risk case—if what you’re saying is that there’s no difference at all between capabilities researchers’ and safety researchers’ preferences (rather than just that the distributions overlap), that’s not my own intuition at all. I would think that if I learn

  • that two people have similar transhumanist-y preferences except that one discounts the distant future (or future generations), and so cares primarily about achieving amazing outcomes in the next few decades for people alive today, whereas the other cares primarily about the “expected value of the lightcone”; and
  • that one works on AI capabilities and the other works on AI safety,

my guess about who was who would be a fair bit better than random.

But I absolutely agree that epistemic disagreement is another reason, and could well be a bigger reason, why different people put different values on safety work relative to capabilities work. I say a few words about how this does / doesn’t change the basic logic of risk compensation in the section on "misperceptions": nothing much seems to change if the parties just disagree in a proportional way about the magnitude of the risk at any given levels of C and S--though this disagreement can change who prioritizes which kind of work, it doesn’t change how the risk compensation interaction plays out. What really changes things there is if the parties disagree about the effectiveness of marginal increases to S--or really, if they disagree about the extent to which increases to S decrease the extent to which increases to C lower P.
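A minimal numerical sketch of the "proportional disagreement" point (the risk function and the numbers here are hypothetical illustrations, not from the post): if one party's perceived risk is everywhere k times the other's, the two parties' iso-risk curves coincide, so the extra safety work S each would demand to offset a given increase in capabilities C is identical.

```python
def risk(c, s):
    # Hypothetical risk function: increasing in capabilities C, decreasing in safety S.
    return c / (c + s)

def scaled_risk(c, s, k=2.0):
    # A party who disagrees "proportionally" perceives k times the risk at every (C, S).
    return k * risk(c, s)

def safety_to_hold_risk(c, target, risk_fn):
    # Bisection: find S such that risk_fn(c, S) == target (risk_fn is decreasing in S).
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if risk_fn(c, mid) > target:
            lo = mid  # still too risky: need more safety
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Each party holds its own perceived risk at the level it sees at (C, S) = (1, 1);
# then C rises to 2. How much extra S does each party require?
base_a = risk(1.0, 1.0)            # party A's perceived risk at the status quo
base_b = scaled_risk(1.0, 1.0)     # party B perceives twice the risk everywhere
extra_a = safety_to_hold_risk(2.0, base_a, risk) - 1.0
extra_b = safety_to_hold_risk(2.0, base_b, scaled_risk) - 1.0
# extra_a == extra_b: proportional disagreement shifts perceived risk levels,
# but leaves the risk-compensation response to a capabilities increase unchanged.
```

The design point: scaling a risk function by a constant leaves its iso-risk contours unchanged, which is exactly why the disagreement changes who prioritizes what but not how the compensation interaction plays out.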

In any event though, if what you’re saying is that a framing more applicable to the AI risk context would have made the epistemic disagreement bit central and the preference disagreement secondary (or swept under the rug entirely), fair enough! I look forward to seeing that presentation of it all if someone writes it up.

My understanding is that the consumption of essentially all animal products seems to increase in income at the country level across the observed range, whether or not you control for various things. See the regression table on slide 7 and the graph of "implied elasticity on income" on slide 8 here.

I'm not seeing the paper itself online anywhere, but maybe reach out to Gustav if you're interested.

Thank you!

And thanks for the IIT / Pautz reference, that does seem relevant. Especially to my comment on the "superlinearity" intuition that experience should probably be lost, or at least not gained, as the brain is "disintegrated" via corpus callosotomy... let me know (you or anyone else reading this) if you know whether IIT, or some reasonable precisification of it, says that the "amount" of experience associated with two split brain hemispheres is more or less than with an intact brain.

Thanks for noting this possibility--I think it's the same as, or at least very similar to, an intuition Luisa Rodriguez had when we were chatting about this the other day actually. To paraphrase the idea there, even if we have a phenomenal field that's analogous to our field of vision and one being's can be bigger than another's, attention may be sort of like a spotlight that is smaller than the field. Inflicting pains on parts of the body lowers welfare up to a point, like adding red dots to a wall in our field of vision with a spotlight on it adds redness to our field of vision, but once the area under the spotlight is full, not much (perhaps not any) more redness is perceived by adding red dots to the shadowy wall outside the spotlight. If in the human case the spotlight is smaller than "the whole body except for one arm", then it is about equally bad to put the amputee and the non-amputee in an ice bath, or for that matter to put all but one arm of a non-amputee and the whole of a non-amputee in an ice bath.

Something like this seems like a reasonable possibility to me as well. It still doesn't seem as intuitive to me as the idea that, to continue the metaphor, the spotlight lights the whole field of vision to some extent, even if some parts are brighter than others at any given moment; if all of me except one arm were in an ice bath, I don't think I'd be close to indifferent about putting the last arm in. But it does seem hard to be sure about these things.

Even if "scope of attention" is the thing that really matters in the way I'm proposing "size" does, though, I think most of what I'm suggesting in this post can be maintained, since presumably "scope" can't be bigger than "size", and both can in principle vary across species. And as for how either of those variables scales with neuron count, I get that there are intuitions in both directions, but I think the intuitions I put down on the side of superlinearity apply similarly to "scope".

Glad to see you found my post thought-provoking, but let me emphasize that my own understanding is also partial at best, to put it mildly!

Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it--apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.

But they do seem like significantly different hypotheses to me. The reason is that it seems like the arguments presented against many experiences in a single brain can convince me that there is probably (something like) a single, highly "integrative" field of hedonic intensities, just as I don't doubt that there is a single visual processing system behind my single visual field, and yet leave me fully convinced that both fields can come in different sizes, so that one brain can have higher welfare capacity than another for size reasons.

Thanks for the second comment though! It's interesting, and to my mind more directly relevant, in that it offers reasons to doubt the idea that hedonic intensities are spread across locations at all. They move me a bit, but I'm still mostly left thinking
- Re 1, we don't need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.
- Re 2, it's not clear why we would have evolved to create valence (or experience) at all in the first place, so in some sense the fact that it would evidently be more efficient to have less of it doesn't help here. But assuming that valence evolved to motivate us in adaptive ways, it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.
- Re 3, the proposal I call (iii*) and try to defend is "that the welfare of a whole experience is, or at least monotonically *incorporates*, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field" (emphasis added). I put in the "incorporates" because I don't mean to take a stand on whether there are also things that contribute to welfare that don't correspond to particular locations in the phenomenal field, like perhaps the social pains you mention. I just find it hard to deny from first-hand experience that there are some "location-dependent" pains; and if so, I would think that these can scale with "size".
