alex lawsen (previously alexrjl)

@ Open Philanthropy
3541 karma · Joined Nov 2018


I work on AI Governance at Open Philanthropy. Comments here are posted in a personal capacity.




I'm still saving for retirement in various ways, including by making pension contributions.

If you're working on GCR reduction, you can always consider your pension savings a performance bonus for good work :)

I'm not officially part of the AMA but I'm one of the disagreevotes so I'll chime in.

As someone who's only recently started: the vibe this post gives is that it's hard for me to disagree with established wisdom or push the org to do things differently, meaning my only role is to 'just push out more money along the OP party line'. That is miles away from what I've experienced.

If anything, I think how much ownership I've needed to take for the projects I'm working on has been the biggest challenge of starting the role. It's one that (I hope) I'm rising to, but it's hard!

In terms of how open OP is to steering from within, it seems worth distinguishing 'how likely is a random junior person to substantially shift the worldview of the org', and 'what would the experience of that person be like if they tried to'. Luke has, from before I had an offer, repeatedly demonstrated that he wants and values my disagreement in how he reacts to it and acts on it, and it's something I really appreciate about his management. 

I think (1) unfortunately ends up not being true in the intensive farming case. Lots of pathogens spread by close enough contact that even intense UVC wouldn't do much, and it would be really expensive.

I wouldn't expect the attitude of the team to have shifted much in my absence. I learned a huge amount from Michelle, who's still leading the team, especially about management. To the extent you were impressed with my answers, I think she should take a large amount of the credit.

On feedback specifically, I've retained a small (voluntary) advisory role at 80k, and continue to give feedback as part of that, though I also think that the advisors have been deliberately giving more to each other.

The work I mentioned on how we make introductions to others and track the effects of those, including collaborating with CH, was passed on to someone else a couple of months before I left, and in my view the robustness of those processes has improved substantially as a result.

This seems extremely uncharitable. It's impossible for every good thing to be the top priority, and I really dislike the rhetorical move of criticising someone who says their top priority is X for not caring at all about Y. 

In the post you're replying to, Chana makes the (in my view) virtuous move of actually being transparent about what CH's top priorities are, a move which I think is unfortunately rare because of dynamics like this. You've chosen to interpret this as 'a decision not to have' [other nice things that you want], apparently acknowledged the possibility that the thinking here isn't actually extremely shallow, but then dismissed the possibility of anyone on the team being capable of non-shallow thinking, for reasons you haven't specified.


editing this in rather than continuing a thread as I don't feel able to do protracted discussion at the moment:

  • Chana is a friend. We haven't talked about this post, but that's going to be affecting my thinking.
  • She's also, in my view (which you can discount if you like), unusually capable of deep thinking about difficult tradeoffs, which made the comment expressing skepticism about CH's depth particularly grating.
  • More generally, I've seen several people I consider friends recently put substantial effort into publicly communicating their reasoning about difficult decisions, and be rewarded for this effort with unhelpful criticism.
  • All that is to say that I'm probably not best placed to impartially evaluate comments like this. At the end of the day, though, I re-read it and it still feels like someone responded to Chana saying "our top priority is X" with "it seems possible that Y might be good". I called that uncharitable because I'm really, really sure that that possibility has not escaped her notice.


I'm fairly disappointed with how much discussion I've seen recently that either doesn't bother to engage with ways in which the poster might be wrong, or only engages with weak versions. It's possible that the "debate" format of the last week has made this worse, though not all of the things I've seen were directly part of that.

I think that not engaging with counterarguments at all, while being explicit that you're only presenting one side, is better than presenting and responding to only the weak counterarguments, which in turn is better than strawmanning arguments that someone else has actually made.

Thank you for all of your work organizing the event, communicating about it, and answering people's questions. None of these seem like easy tasks!

I'm no longer on the team, but my hot take here is that a good bet is just going to be trying really hard to work out which tools you can use to accelerate/automate/improve your work. This interview with Riley Goodside might be interesting to listen to, not only for tips on how to get more out of AI tools, but also to hear how the work he does in prompting those tools has changed rapidly, yet he's stayed on the frontier because the things he learned have transferred.

Hey, it's not a direct answer but various parts of my recent discussion with Luisa cover aspects of this concern (it's one that frequently came up in some form or other when I was advising), in particular, I'd recommend skimming the sections on 'trying to have an impact right now', 'needing to work on AI immediately', and 'ignoring conventional career wisdom'.

It's not a full answer but I think the section of my discussion with Luisa Rodriguez on 'not trying hard enough to fail' might be interesting to read/listen to if you're wondering about this. 
