Ben Stewart

1932 karma · Joined · London, UK

Bio

I'm an AI Program Officer at Longview Philanthropy, though all views I express here are my own. 

Before Longview I was a researcher at Open Philanthropy and a Charity Entrepreneurship incubatee.

Comments
236

Thanks for this. Post-hoc theorizing:

‘doing good better’ calls to mind comparisons to the reader’s typical ideas about doing good. It implicitly criticizes those ideas, which is a negative experience for the reader and could cause defensiveness.

‘Do the most good’ makes the reader attempt to imagine what that could be, which is a positive and interesting question, and doesn’t immediately challenge the reader’s typical ideas about doing good.

It wouldn’t have been obvious to me before the fact whether the considerations above would be outweighed by worries about reactions to ‘the most good’ or what have you, so I appreciate you gathering empirical evidence here.

"Given leadership literature is rife with stories of rejected individuals going on to become great leaders"

The selection effect can be very misleading here — in that literature you usually don't hear from all the individuals who were selected and failed, nor those who were rejected correctly and would have failed, and so on. Lots of advice from the start-up/business sector is super sus for this exact reason.

I would wait for METR's actual evaluation — '30 hours' is just based on claims of continued effort, not actual successful performance on carefully measured tasks.

I think it's possible to gain the efficiency of using LLM assistance without sacrificing style/tone — it just requires taste and more careful prompting/context, which seems worth it for a job ad. Maybe it works for their intended audience, but puts me off.

What can I read to understand the current and near-term state of drone warfare, especially (semi-)autonomous systems? 

I'm looking for an overview of the developments in recent years, and what near-term systems are looking like. I've been enjoying Paul Scharre's 'Army of None', but given it was published in 2018 it's well behind the curve. Thanks!

I don't know. My guess is that they give very slim odds to the Trump admin caring about carbon neutrality, and think that the benefit of including a mention is close to zero (other than demonstrating resolve in their principles to others).

On the minus side, such a mention risks a reaction with significant cost to their AI safety/security asks. So overall, I can see them thinking that including a mention does not make sense for their strategy. I'm not endorsing that calculus, just conjecturing.

Object-level aside, I suspect they’re aware their audience is the hypersensitive-to-appearances Trump admin, and framing things accordingly. Even basic, common sense points regarding climate change could have a significant cost to the doc’s reception.

One might distinguish de jure openness ("We let everyone in!") from de facto openness ("We attract X subgroups, and repel Y!"). The homogeneity and narrowness of the recent conference might suggest the former approach has not succeeded in producing intellectual openness.

The point wasn’t to motivate intuitions on the broader issue, but to demonstrate that exclusionary beliefs could be a coherent concept. I agree your version is better for motivating broader intuitions.

Thanks. Given Alice has committed no crime, and everything else about her is 'normal', I think organizers would need to point to her belief to justify uninviting or banning her. That would suggest that an individual's beliefs can (in at least one case) justify restricting their participation, on the basis of how that belief concerns other (prospective) attendees.
