Arthur Malone

EAGx Coordinator @ CEA
644 karma · Working (6–15 years)



Arthur has been engaged with EA since before the movement settled on a name, and reoriented from academics/medicine toward supporting highly impactful work. He has since developed operations skills by working with EA-affiliated organizations and in the private sector. Alongside EA interests, Arthur finds inspiration in nerdy art about heroes trying to save the universe.


I'm extremely excited that EAGxIndia 2024 is confirmed for October 19–20 in Bengaluru! The team will post a full forum post with more details in the coming days, but I wanted to get a quick note out immediately so people can begin considering travel plans. You can sign up to be notified when admissions open, or express interest in presenting, via the forms linked on the event page.

Hope to see many of you there!!

I'm ambivalent about jargon: strongly pro when it seems sufficiently useful, opposed when it's superfluous. One benefit I can see for MEARO is that, unlike most "local EA groups," it isn't nominatively restricted to community building.

I recently attended a talk at EAGxLatAm by Doebem, a Brazil-based, locally focused equivalent of GiveWell, that made a decent case for applying EA principles to "think global, act local." Their work is very distinct from EA Brazil's, but it falls solidly into regional and meta EA, and I think there is strong potential for other similar orgs that would work closely with local community-building groups while having a different focus.

Thanks for the kind words!

To address the nit: before changing it to "impossible-to-optimize variables," I had "things where it is impossible to please everyone." I think that claim is straightforwardly true, and maybe I should have left it there, but it doesn't communicate everything I was going for. It's not just that attendees come in with mutually exclusive preferences; from the organizers' perspective, it is practically impossible to chase optimality. We don't have control over everything in presenters' talks, and we don't have intimate knowledge of every attendee's preferences, so complaints are, IMHO, inevitable (and that's what I wanted to communicate to future organizers).

That said, I think we could have done somewhat better with our content list, mostly by getting feedback from applicants earlier so we could try to match cause-area supply and demand. For content depth, we aimed for some spread, but with the majority of talks clustered on the medium-to-high side of EA familiarity (i.e., if "1" meant "accessible to anyone, even someone who has never heard of EA" and "10" meant "only useful to a handful of professional EA domain experts," we aimed for a distribution centered around 7). We only included talks at the low end when we considered them uniquely useful, like a "How to avoid burnout" talk that, while geared toward EAs, did not require much EA context.

I think, given that we selected for attendees with demonstrated EA activity, that this heuristic was pretty solid. Nothing in the feedback data would have me change it for the next go-around or advise other organizers to use a different protocol (unless, of course, they were aiming for a different sort of audience). But I'm happy for anyone to offer suggestions for improvement!

I really appreciate and agree with "trying to be thoughtful at all" and "directionally correct." The target group to be nudged is those who see a deadline and wait until the end of the window (to look at it charitably, maybe they don't know that it matters when they apply, so we're just bringing it to their attention).

We appreciate that there are genuine cases where people are unsure. I think in your case, the right move would've been to apply with that annotation; you likely would have been accepted and then been able to register as soon as you were sure.

I am all for efforts to do AIS movement building distinct from EA movement building by people who are convinced by AIS reasoning and not swayed by EA principles. There's all kinds of discussion about AIS in academic/professional/media circles that never reference EA at all. And while I'd love for everyone involved to learn about and embrace EA, I'm not expecting that. So I'm just glad they're doing their thing and hope they're doing it well.

I could probably have asked the question better: "What should EAs do (if anything), in practice, to implement a separate AIS movement?" As it stands, it sounds like we're talking about choosing to divert movement-building dollars and hours away from EA movement building toward distinct AI safety movement building, under the theoretical guise of bolstering the EA movement against getting eaten by AIS. That seems obviously backwards to me. I think EA movement building is already under-resourced, and owning our relationship with AIS is the best strategic choice for achieving both broad EA goals and AIS goals.

As someone who is extremely pro investing in big-tent EA, my question is, "what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"

I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their actions and movement building outside the EA umbrella. In addition, EA being ahead of the curve on AIS is, in my opinion, a fact to embrace and treat as evidence of the value of EA principles, individuals, and movement building methodology.

To avoid AIS eating EA, we have to keep reinvesting in EA fundamentals. I am so grateful and impressed that Dave published this post, because it's exactly the kind of effort that I think is necessary to keep EA EA. I think he highlights specific failures: the exploitation of known methods of inducing epistemic ... untetheredness?

For example, I worked with CFAR, whose workshops deliberately employed the same kind of intensive atmosphere to make people receptive to new ways of thinking and genuinely open to changing their minds. I recognized that this was inherently risky, and I was always impressed that the ideas introduced in this state were about how to think better, rather than attempts to convince workshop participants of any particular conclusion. Despite many of the staff and mentors being extremely convinced of the necessity of x-risk mitigation, I never once encountered discussion of how the rationality techniques should be applied to AIS.

To hear that this type of environment is de facto being used to sway people toward a particular cause prioritization, rather than to teach how to do cause prioritization, makes me update significantly away from continuing the university pipeline as it currently exists. The comments on the funding situation are also new to me and seem to represent obvious errors. Thanks again, Dave, for opening my eyes to what's currently happening.

As the primary author who looked for citations, I want to flag that while I think it is great to cite sources and provide quantitative evidence when possible, I have a general wariness about including the kinds of links and numbers I chose here when trying to write persuasive content.

Even if one tries to find true and balanced sources, the nature of searching for answers to questions like “What percentage of US philanthropic capital flows through New York City based institutions?” or “How many tech workers are based in the NYC-metro area compared to other similar cities?” is likely to return a skewed set of results. Where possible, I tried to find sources and articles that were about a particular topic and just included NYC among all relevant cities over sources that were about NYC.

Unfortunately, in some cases the only place I could find relevant data was in a piece trying to tell a story about NYC. I think this is bad because of the incentives to massage or selectively choose statistics to sell stories. You can find a preponderance of news stories selling the idea that "X city is taking over the Bay as the new tech hub," catering to the local audience in X, so the existence of such an article is poor evidence that X actually is the important, up-and-coming tech hub. That said, if X really were a place with a reasonable claim to being the important, up-and-coming tech hub, you would expect to see those same articles, so the weak evidence still points in that direction.

I am trying to balance the two conflicting principles of "it is good to include evidence" and "it is difficult to tell what is good evidence when searching for support for a claim" by including this disclaimer. The fundamental case made in the sequence is primarily based on local knowledge and on dozens-to-hundreds of conversations I've had after spending many years in both the Bay and NYC EA communities, not on the relatively quickly sourced links I included here to help communicate the case to those without the direct experience.

That is true, and the post has been edited in response. Thanks!

I think (given the username of the poster and some acquaintance with those who prompted this post) that it would take the efforts of many interpretability researchers to even guess as to whether there was serious intent, humorous intent, or any intent at all behind the writing of this post. 

I am absolutely enamorhorrified by the earnestness on display here. Please, please continue with your research to make sure your work stays aligned with our principles. Actually, maybe just take a nap. A really really long nap.
