Arthur Malone

Head of Special Projects @ EA NYC
601 karma · Joined Jun 2022 · Working (6–15 years)

Bio


Arthur has been engaged with EA since before the movement settled on a name, and reoriented from academics/medicine toward supporting highly impactful work. He has since developed operations skills by working with EA-affiliated organizations and in the private sector. Alongside EA interests, Arthur finds inspiration in nerdy art about heroes trying to save the universe.

Comments (10)

Thanks for the kind words!

To address the nit: Before changing it to "impossible-to-optimize variables," I had "things where it is impossible to please everyone." I think that claim is straightforwardly true, and maybe I should have left it there, but it doesn't seem to communicate everything I was going for. It's not just that attendees come in with mutually exclusive preferences; from the organizers' perspective it is practically impossible to chase optimality. We don't have control over everything in presenters' talks, and we don't have intimate knowledge of every attendee's preferences, so complaints are, IMHO, inevitable (and that's what I wanted to communicate to future organizers).

That said, I think we could have done somewhat better with our content list, mostly via getting feedback from applicants earlier so we could try to match cause-area supply and demand. For content depth, we aimed for some spread, but with the majority of talks clustered on the medium-to-high side of EA familiarity (i.e., if a "1" was "accessible to anyone even if they've never heard of EA" and a "10" was "only useful to a handful of professional EA domain experts," then we aimed for a distribution centered around 7). We only included talks at the low end if we considered them uniquely useful, like a "How to avoid burnout" talk that, while geared towards EAs, did not require lots of EA context.

I think, given that we selected for attendees with demonstrated EA activity, that this heuristic was pretty solid. Nothing in the feedback data would have me change it for the next go-around or advise other organizers to use a different protocol (unless, of course, they were aiming for a different sort of audience). But I'm happy for anyone to offer suggestions for improvement!

I really appreciate and agree with "trying to be thoughtful at all" and "directionally correct," as the target group to be nudged is those who see a deadline and wait until the end of the window (to look at it charitably, maybe they don't know that when they apply makes a difference, so we're just bringing it to their attention).

We appreciate that there are genuine cases where people are unsure. I think in your case, the right move would've been to apply with that annotation; you likely would have been accepted and then been able to register as soon as you were sure.

I am all for efforts to do AIS movement building distinct from EA movement building by people who are convinced by AIS reasoning and not swayed by EA principles. There's all kinds of discussion about AIS in academic/professional/media circles that never reference EA at all. And while I'd love for everyone involved to learn about and embrace EA, I'm not expecting that. So I'm just glad they're doing their thing and hope they're doing it well.

I could probably have asked the question better and made it, "what should EAs do (if anything), in practice, to implement a separate AIS movement?" Because then it sounds like we're talking about choosing to divert movement-building dollars and hours away from EA movement building to distinct AI safety movement building, under the theoretical guise of bolstering the EA movement against getting eaten by AIS. That seems obviously backwards to me. I think EA movement building is already under-resourced, and owning our relationship with AIS is the best strategic choice for achieving broad EA goals and AIS goals.

As someone who is extremely pro investing in big-tent EA, my question is, "what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"

I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their actions and movement building outside the EA umbrella. In addition, EA being ahead of the curve on AIS is, in my opinion, a fact to embrace and treat as evidence of the value of EA principles, individuals, and movement building methodology.

To avoid AIS eating EA, we have to keep reinvesting in EA fundamentals. I am so grateful and impressed that Dave published this post, because it's exactly the kind of effort that I think is necessary to keep EA EA. I think he highlights specific failures in exploiting known methods of inducing epistemic ... untetheredness? 

For example, I worked with CFAR, where the workshops deliberately employed the same intensive atmosphere to get people to be receptive to new ways of thinking and actually open to changing their minds. I recognized that this was inherently risky, and I was always impressed that the ideas introduced in this state were about how to think better rather than about convincing workshop participants of any conclusion. Despite many of the staff and mentors being extremely convinced of the necessity of x-risk mitigation, I never once encountered discussion of how the rationality techniques should be applied to AIS.

To hear that this type of environment is de facto being used to sway people towards a cause prioritization, rather than towards how to do cause prio, makes me update significantly away from continuing the university pipeline as it currently exists. The comments on the funding situation are also new to me and seem to represent obvious errors. Thanks again, Dave, for opening my eyes to what's currently happening.

As the primary author who looked for citations, I want to flag that while I think it is great to cite sources and provide quantitative evidence when possible, I have a general wariness about including the kinds of links and numbers I chose here when trying to write persuasive content.

Even if one tries to find true and balanced sources, the nature of searching for answers to questions like “What percentage of US philanthropic capital flows through New York City based institutions?” or “How many tech workers are based in the NYC-metro area compared to other similar cities?” is likely to return a skewed set of results. Where possible, I tried to find sources and articles that were about a particular topic and just included NYC among all relevant cities over sources that were about NYC.

Unfortunately, in some cases the only place I could find relevant data was in a piece trying to tell a story about NYC. I think this is bad because of incentives to massage or selectively choose statistics to sell stories. You can find a preponderance of news stories selling the idea that "X city is taking over the Bay as the new tech hub," catering to the local audience in X, so the existence of such an article is poor evidence that X is actually the important, up-and-coming tech hub. That said, if X actually were a place with a reasonable claim to being the important, up-and-coming tech hub, you would expect to see those same articles, so the weak evidence still points in favor.

I am trying to balance the two conflicting principles of “it is good to include evidence” and “it is difficult to tell what is good evidence when searching for support for a claim” by including this disclaimer. The fundamental case made in the sequence is primarily based on local knowledge and on dozens-to-hundreds of conversations I’ve had after spending many years in both the Bay and NYC EA communities, not on the relatively-quickly sourced links I included here to try to help communicate the case to those without the direct experience.

That is true, and the post has been edited in response. Thanks!

I think (given the username of the poster and some acquaintance with those who prompted this post) that it would take the efforts of many interpretability researchers to even guess as to whether there was serious intent, humorous intent, or any intent at all behind the writing of this post. 

I am absolutely enamorhorrified by the earnestness on display here. Please, please continue with your research to make sure your work stays aligned with our principles. Actually, maybe just take a nap. A really really long nap.

I do wish that I had been more constructive in my own reply, rather than merely arguing against your arguments. I will try to remedy that here by specifically addressing why I declined an invitation to sign, despite my understanding that there are genuine problems of racism and sexism in our community and my desire to work against them. 

As I led with, I am strongly ambivalent. So while it would not take much to tip me over into the belief that signing is probably net good, it would take a great deal more to assure me that all harms of such a pledge have been acknowledged and mitigated. I updated positively on Rocky's response to Duncan, explaining the rationale of removing specific actions in an attempt to create a generalized statement upon which groups (with varying levels of resources) could build. If there were language in the pledge that clearly addressed this, it would probably be sufficient for me to sign and cautiously endorse it.

The best possible version of this proposal that I can imagine is not a pledge, but a roll call. For example, I would be completely on board if EA-NYC had finished their public DEI policy and made an announcement to the effect of: "We condemn racism and sexism, here is what we are trying to do about it. Please give us feedback and feel free to use any element of ours to establish your own policy. Once you have done so, please sign and link to your own policy, and in that way we can make a strong demonstration to those who are uncertain about the EA community's commitment against bigotry."

Similarly, I do think the statement as written can easily be perceived as applause lights. It can be (and is) completely true both that many EAs' experience is that racism and sexism are already universally condemned and that community builders regularly encounter those with uncertainty. So I am very empathetic to the perceived need to put out a statement even before specific proposals are ironed out. (Having been through the process myself, I can well imagine that the "weeks" the simple pledge above took were not actual workweeks of any individual(s), but merely reflected the difficulty of establishing any kind of consensus around messaging.) If the pledge made clear that it did not aim to reify some new commitment to anti-racism/anti-sexism and was intended primarily as a reference of common knowledge towards which we could direct uncertain newcomers (or antagonistic journalists), that too might have been sufficient to convince me to sign.

The largest factor for me is the one Duncan addressed: the potential consequences of dividing the relevant parties into those who did sign and those who didn't. I see the value in a list of names, I really do. And as I've said, now that the list exists, it wouldn't take very much more to get me to sign it. But I would still prefer a world where there wasn't one.

I model those who don't see the possible harm as making the same (imho) mistake as those who dismiss privacy concerns with "if you don't have anything to hide, you don't have anything to fear." Because who could object to declaring oneself against bigotry? I don't really think it's probable that anyone in EA will weaponize the division any more than I think it's dangerous that some people post their home addresses on lists of EA couchsurfing options. I nevertheless want to support robust privacy norms, and norms against creating lists of "Right Minded" individuals.

I think the value created by the list, of demonstration of commitment, could be accomplished better by lists of actions taken (and, in EA fashion, a bunch of discussion about the best ways to measure the impact of those actions). I don't have a suggestion on how to mitigate the potential harms of dividing people into signers and not signers (besides my most vehement exhortation to not create a list of non-signers. If the list of those invited to sign existed and clearly delineated between those who signed and those who didn't, I would absolutely object to the entire attempt). I'm not sure if it would have been good to add language that acknowledges the possible harm, though I would appreciate it. 

I do want to note that in addition to the comments above, a primary consideration for why I did not/have not yet signed the pledge is that I do not speak on behalf of any community building organization. I do community building work, and may do so in a more official capacity in the near future. I am in the early stages of establishing a role that will hopefully come to fruition; if I were currently holding that position, I would have signed the pledge (to represent the stance of the organization) and voiced my objections. As it stands, I think it is valuable to speak as an individual (the only one, so far as I know) who was invited to sign the pledge and declined. Because although some significant part of my objection is that the existence of such a pledge could in theory be used as a weapon against those who do not sign it, in practice I do not believe the individuals who created the pledge would in fact do so.

I am strongly ambivalent about the publishing of the pledge as written: I was invited to sign by someone I trust and respect, and the original post was made by someone I've clearly seen exhibit thoughtfulness and acumen in thorny situations; yet when I initially read the statement, my thoughts were largely along the same lines as Duncan's. I came here to find that he'd articulated them more thoroughly than I could. As is so often the case, his opinion is clearer than mine, pointing in the same direction, but it also seems to go further than mine.

In this instance I think the claim of probable harm by crowding out the space is overstated, and I am moved by Lorenzo's frame of "the counterfactual is likely to have been nothing rather than something better." So rather than this being a well-intentioned pavestone on the road to hell or on the road to clear improvements, I see it more as a symbolic marker at the fork of those two paths. Because I'm not sure which direction the community will take (or whether this pledge is actually farther along the better/worse paths and not just at the fork), I haven't signed.

What I am more confident about is that this response of "Anti-racism and Anti-sexism in EA shouldn't be top cause areas for 99% of people" is clearly detrimental. While I agree with the quoted title, I'm also confident that the writers and signers of the pledge would agree with it. Nothing that I've seen from the people involved, including the posting of this pledge, indicates that anyone considers anti-racism and anti-sexism a top cause area or believes that action in those directions should be prioritized over other work. The position that you are taking, and the obvious implication made by the "shallow pond and the sexist comment" thought experiment, rely on the false conflation of "taking any action" with "advancing this cause to one's top priority."

Rocky wrote, solicited feedback, and published the pledge as part of her role as a community builder, while doing other EA movement building work that I think can (arguably) be valued as a top cause area. The various signatories likewise took a few minutes to read and sign, and none of them indicated that they were choosing this as a focus. Granted, the lack of specificity is part of my problem with the pledge as written, but I don't think it justifies pushback of the nature: 

"Do Atlanta, Deloitte or MIT have no more important cause areas than this and should really focus on this?"

I model those who wrote and signed this pledge as viewing some limited action on maintaining robust anti-racism and anti-sexism norms as essential to keeping the community healthy and functioning. Like any other work in EA infrastructure, this is just part of the necessary business; does someone spending part of the day making sure the WiFi is working in an EA office mean they think "providing internet access to EAs is a top cause area"?

Your post on the other hand, if it follows its own logic as I understand it, indicates that you think it should be a top cause area for yourself to argue against anti-racism and anti-sexism work. If you believe that writing and posting on the EA forum equates to considering a topic one's focus, then I genuinely ask: what moral calculus justifies your own response? I personally don't hold the view that writing a forum post is equivalent to prioritizing its topic over any other. I think, like keeping the WiFi running, the conversation initiated by Rocky and the criticism it generated in Duncan and myself are part of what keeps the EA movement healthy and capable of working effectively and impactfully on the causes we actually consider important.
