
trammell

1528 karma · Joined

Bio

Econ PhD student at Oxford and research associate at the Global Priorities Institute. I'm slightly less ignorant about economic theory than about everything else.

https://philiptrammell.com/

Comments (130)

Whoops, thanks! Issues importing from the Google doc… fixing now.

Good to hear, thanks!

I've just edited the intro to say: it's not obvious to me one way or the other whether it's a big deal in the AI risk case. I don't know enough about the AI risk case (or any other case) to have much of an opinion, and I certainly don't think anything here is specific enough to come to a conclusion in any case. My hope is just that something here makes it easier for people who do know about particular cases to get started thinking through the problem.

If I had to make a guess about the AI risk case, I'd emphasize my conjecture near the end, just before the "takeaways" section, namely that (as you suggest) there currently isn't a ton of restraint, so (b) mostly fails, but that this has a good chance of changing in the future:

Today, while even the most advanced AI systems are neither very capable nor very dangerous, safety concerns are not constraining C much below the level the developer would choose in their absence. If technological advances unlock the ability to develop systems which offer utopia if their deployment is successful, but which pose large risks, then the developer's choice of C at any given S is more likely to be far below that unconstrained level, and the risk compensation induced by increasing S is therefore more likely to be strong.

If lots/most of AI safety work (beyond evals) is currently acting more "like evals" than like pure "increases to S", great to hear--concern about risk compensation can just be an argument for making sure it stays that way!

Thanks for noting this. If in some case there is a positive level of capabilities for which P is 1, then we can just say that the level of capabilities denoted by C = 0 is the maximum level at which P is still 1. The only thing that changes is that the constraint becomes C ≥ (something negative) rather than C ≥ 0, but that doesn't really matter, since here you'll never want to set C < 0 anyway. I've added a note to clarify this.
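A minimal sketch of that relabeling, using c_0 (my notation, not anything from the post) for the largest capability level at which P is still 1:

```latex
% Hypothetical relabeling: c_0 is the largest capability level at which P = 1.
% Measuring capabilities relative to c_0 puts the point where P can start
% falling at zero, and shifts the feasibility constraint accordingly.
\tilde{C} \equiv C - c_0,
\qquad P(\tilde{C}, S) = 1 \ \text{for } \tilde{C} \le 0,
\qquad C \ge 0 \iff \tilde{C} \ge -c_0 .
```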

Maybe a thought here is that, since there is some stretch of capabilities along which P = 1, we should think that P(.) is horizontal around C = 0 (the point at which P can start falling from 1) for any given S, and that this might produce very different results from the example in the post, in which there would be a kink at C = 0. But no--the key point is whether increases to S change the curve in a way that widens as C moves to the right, and so "act as price decreases to C", not the slope of the curve around C = 0. E.g. if P = 1 − (C/S)^2 (for C ≤ S, and 0 above), then in the k = 0 case where the lab is trying to maximize P·C, they set C = S/√3, and so P is again fixed (here, at 2/3) regardless of S.
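A quick numerical check of that example, taking as assumptions the functional form P = 1 − (C/S)² on [0, S] and the k = 0 objective P·C (a sketch under those assumptions, not the exact specification from the post):

```python
# Sanity check: with P(C, S) = 1 - (C/S)^2 on [0, S] and a k = 0 lab maximizing
# P*C, the chosen success probability should come out at ~2/3 for every S.
import numpy as np

def chosen_P(S, grid_size=100_001):
    C = np.linspace(0.0, S, grid_size)   # feasible capability levels
    P = 1.0 - (C / S) ** 2               # success probability at each C
    objective = P * C                    # lab's objective when k = 0
    return P[np.argmax(objective)]       # P at the lab's chosen C

for S in [0.5, 1.0, 10.0, 100.0]:
    print(S, round(chosen_P(S), 4))      # ~0.6667 regardless of S
```

(The analytic optimum is C = S/√3, which gives P = 2/3 exactly.)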

Hey David, I've just finished a rewrite of the paper which I'm hoping to submit soon, which I hope does a decent job of both simplifying it and making clearer what the applications and limitations are: https://philiptrammell.com/static/Existential_Risk_and_Growth.pdf

Presumably the referees will constitute experts on the growth front at least (if it's not desk rejected everywhere!), though the new version is general enough that it doesn't really rely on any particular claims about growth theory.

Hold on, just to try wrapping up the first point--if by "flat" you meant "more concave", why do you say "I don't see how [uncertainty] could flatten out the utility function. This should be in 'Justifying a more cautious portfolio'"?

Did you mean in the original comment to say that you don't see how uncertainty could make the utility function more concave, and that it should therefore also be filed under "Justifying a riskier portfolio"?

I can't speak for Michael of course, but as covered throughout the post, I think that the existing EA writing on this topic has internalized the pro-risk-tolerance points (e.g. that some other funding will be coming from uncorrelated sources) quite a bit more than the anti-risk-tolerance points (e.g. that some of the reasons that many investors seem to value safe investments so much, like "habit formation", could apply to philanthropists to some extent as well). If you feel you and some other EAs have already internalized the latter more than the former, then that's great too, as far as I'm concerned--hopefully we can come closer to consensus about what the valid considerations are, even if from different directions.

By flattening here, I meant "less concave" - hence more risk averse. I think we agree on this point?

Less concave = more risk tolerant, no?

I think I'm still confused about your response on the second point too. The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don't know if it's more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?

Thanks! As others have commented, the strength of this consideration (and of many of the other considerations) is quite ambiguous, and I’d love to see more research on it. But at least qualitatively, I think it’s been underappreciated by existing discussion.

Thanks! Hardly the first version of an article like this (or the most clearly written), but hopefully a bit more thorough…!

I agree! As noted under Richard's comment, I'm afraid my only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time before I could make it work. (And I won't be free again for a while…)

If you or anyone else reading this manages to write one in the meantime, send it over and I’ll stick it at the top.

Thanks! I agree that would be helpful. My only excuse is that the points covered are scattered enough that writing a short, accessible summary at the top was a bit of a pain, and I ran out of time before I could make it work…
