PeterSlattery

Research @ MIT FutureTech/Ready Research
3230 karma · Working (6-15 years) · Sydney NSW, Australia
www.pslattery.com/

Bio


Affiliate researcher at MIT FutureTech helping with research, communication and operations. Doing some 'fractional movement building'. 

Previously a behavior change researcher at BehaviourWorks Australia at Monash University, and helped develop a course on EA at the University of Queensland.

Co-founder and team member at Ready Research.

Former movement builder for the i) UNSW Sydney, ii) Sydney, Australia, and iii) Ireland EA groups.

Marketing Lead for the 2019 EAGx Australia conference.

Founder and former lead for the EA Behavioral Science Newsletter.

See my LinkedIn profile for more of my work.

Leave (anonymous) feedback here.

Sequences

A proposed approach for AI safety movement building

Comments

Quick response: the way I reconcile this is that these differences were probably just due to interactions between context and competence. Maybe you could call them fluctuations in comparative advantage over time?

There is probably no reasonable claim that advising is generally higher impact than ops, or vice versa. It will depend on the individual and the context. At some times, some people will be able to have much higher impact doing ops than advising, and vice versa.

From a personal perspective, my advising opportunities vary greatly. There are times when most of my impact comes from helping somebody else because I have been put in contact with them and I happen to have useful things to offer. There are also times when the most obviously counterfactually impactful thing for me to do is research, or some sort of operations work to enable other researchers. Both of these activities have somewhat lumpy impact distributions because they only occur when certain rare criteria are collectively met.

In this case, Abraham may have had much better advising opportunities relative to operations opportunities, while this was not true for Peter.

Just wanted to quickly say that I hold a similar opinion to the top paragraph and have had similar experiences in terms of where I felt I had the most impact.

I think that the choice of whether to be a researcher or do operations is very context dependent.

If there are no other researchers doing something important, your comparative advantage may be to do some research, because that will probably outperform the counterfactual (no research) and may also catalyze interest and action within that research domain.

However, if there are a lot of established organizations and experienced researchers, or simply researchers who are more naturally skilled than you, already involved in the research domain, then you can often have a more significant impact by helping to support those researchers or attract new ones.

One way to navigate this is to have what I call a hybrid research role, where you work as a researcher but allocate some flexible amount of time to operations/field-building activities, depending on what seems most valuable.

I haven't encountered any donors complaining that they were misled by donation matching offers, and I'm not aware of any evidence that offering donation matching has worse impacts than not having it, either in terms of total dollars donated or in terms of increasing donations to effective charities.

However, I haven't been actively looking for that evidence - is there something that I've missed?

Fair. Perhaps during the post-event survey you could ask people who have attended previous events whether they want to report any significant impacts from those past events? Then they can respond as relevant.

Thanks for writing this. I just wanted to quickly respond with some thoughts from my phone.

I currently like the norm of not publicly affiliating with EA, but it's something I've changed my mind about a few times.

Some reasons:

I think EA succeeds when it is no longer a movement and is simply a generally held value system (i.e., that it's good to do good, to try to be effective, and to have a wide moral circle). I think that many of our insights and practices are valuable and underused, and that we disseminate them most effectively when they are unbranded.

This is why: any group identity creates a brand, and with it opportunities to attack that group's brand and do damage by association.

Additionally, if you present as being in a group, then you are classed as either ingroup or outgroup. Which group you're in overwhelmingly affects people's receptiveness to you and your goals. For many people, EA presents as a threatening outgroup, which makes them feel judged and pressured.

Many people might be receptive to counterfactual or cost-effectiveness reasoning if it comes from a neutral source, but unreceptive to it if they believe it comes from effective altruism.

I think most people care about progress on salient issues, not effective altruism in the abstract. People are much more likely to be interested in a philosophy that helps them achieve a goal of reducing animal suffering, mental health issues, or AI risks than in figuring out how to be more effective at doing good.

What I think we should do

I think we should say things like "I have been influenced by effective altruism" or "reading Doing Good Better really changed my mind about X", but I think we should avoid presenting as EAs.

This seems consistent with my other behaviour. I don't call myself a feminist, consequentialist, animal welfare activist or longtermist etc, but these are all ideas/values which influenced me a lot.

I previously discussed the idea of fractional movement building, and I think it is still probably the best approach for EA to have more impact via influence on others. Basically, you work on a thing that you think is important (e.g., a cause area) and allocate some fraction of your time to helping other people work on that thing or on other things which are EA aligned.

So rather than being an EA-affiliated movement builder, you might be a researcher trying to understand how to have some type of positive impact (e.g., reducing risks from AI) and navigating the world with that as your public-facing identity. You can still organise events and provide support with that identity and personal brand, and there is no brand risk or association to worry about. You can mention your EA interest where relevant and useful.

And to be clear, I think that people who are clearly EA-affiliated are doing good work and mostly aren't at any risk of major career damage. I just think that we might be able to achieve more if we worked in ways that pattern-matched better to more widely accepted and unbranded routes to impact (e.g., researcher or entrepreneur) than to activist/movement-building type groups, which are more associated with bad actors (e.g., cults and scams).

Of course these are not strong beliefs and I could easily change them as I have before.

Thank you for this. I really appreciate this research because I think the EA community should do more to evaluate interventions (e.g., conferences, pledges, and programs), especially repeat interventions, given the focus on cost-effectiveness. I also like the idea of having independent evaluations.

  • Having said that, good evaluations are very hard to do and don't always offer a good ROI in expectation compared to other uses of resources. 
  • I think CEA is doing a very good job with conferences now, and I feel pretty confident that EA conferences provide a lot of value. However, I am weakly in favor of them being evaluated more.
  • From my personal experience attending conferences, I'd like to suggest two considerations:
    • First, 6 months is probably too short a timeframe to measure conference impact - many of my most valuable changes in behavior (e.g., starting new projects/collaborations, or providing (and getting) support/advice) occurred years after the conference where I met someone for the first time. 
    • Second, I think that most of a conference's impact comes from network-related behaviors, and these don't seem to be well captured. Many, if not most, of my higher-impact actions only occurred because of people I met at EA conferences. 

Thanks. My impression is that they are using 'Guest author' on their blog posts to differentiate who works for Epoch from who is external. As far as I can tell, that usage implies nothing about the authors' contributions to the paper.

This seems misleading. Some of the authors are from Epoch, but there are authors from two other universities on the paper. 

Also, where does it say that he is a guest author? Neil is a research advisor for Epoch and my understanding is that he provides valuable input on a lot of their work. 
 

Answer by PeterSlattery

Disclosure: I have past, present, and potential future affiliations with MIT FutureTech. These views are my own.

Thank you for this post. I think it would be helpful for readers if you explained the context a little more clearly; I think the post is a little misleading at the moment. 

These were not "AI Safety" grants; they were for "modeling the trends and impacts of AI and computing", which is what Neil/the lab does. Obviously that is important for AI safety/x-risk reduction, but it is not just about AI safety/x-risk reduction; it is somewhat upstream of it.

Importantly, the awarded grants were to be disbursed over several years to an academic institution, so much of the funded work may not yet have started or been published. Critiquing old or unrelated papers doesn't accurately reflect the grants' impact.

You claim to have read 'most of their research' but only cite two papers, neither funded by Open Philanthropy. This doesn't accurately represent the lab's work.

Your criticisms of the papers lack depth, e.g., 'This paper has many limitations (as acknowledged by the author)' without explaining why this is problematic. Why are so many people citing that 2020 paper, if it is not useful? Do you do research in this area, or are you assuming that you know what is useful/good research here? (genuine question - I honestly don't know).

By asking readers to evaluate '$16.7M for this work', you imply that the work you've presented was what was funded, which is not the case.

Could you please update your post to address these issues and provide a more accurate representation of the grants and the lab's work?

Now, to answer your question, I personally think the work being done by the lab deserves significant funding. Some reasons:

  • I think modeling the trends and impacts of AI and computing is very important and that it is valuable for OP to be able to fund very rigorous work to reduce their related uncertainties. 
  • I think that it is very valuable to have respected researchers and institutions producing rigorous and credible work; I think that the impact of research scales superlinearly based on the credibility and rigor of the researchers. 
  • The lab is growing very rapidly and attracting a lot of funding and interest from many sources.
  • The work is widely cited in policy documents, including, for instance, the 2024 Economic Report of the President.
  • The work is widely covered in the media.
  • Neil seems to be well respected by those who know him. I joined the lab after I spoke to a range of people I respect about their experiences working with him. Everyone I spoke with was very positive about Neil and the importance of his work. My experiences at the lab have reinforced my perspective. 
  • Many of the new and ongoing projects (which I cannot discuss) seem quite neglected and important (e.g., they respond to requests from funders and I don't know of other research on them). I expect they will be very valuable once they are released.
  • The lab is interdisciplinary and has a very broad, balanced, and integrative approach to AI trends and impacts. Neil has a broad background and knowledge across many domains. This is reflected in how the lab functions; we hire and engage with people across many areas of the AI landscape, from people working on hardware and algorithms to those working directly on AI risk reduction and evaluation. For instance, see the wide range of attendees at our AI Scaling Workshop (and the agenda). This seems rare and valuable (especially in a place like MIT CSAIL). 