PeterSlattery

Research @ MIT FutureTech/Ready Research
3331 karma · Joined · Working (6–15 years) · Sydney NSW, Australia
www.pslattery.com/

Bio

Participation
4

Researcher at MIT FutureTech helping with research, communication, and operations, and leading the AI Risk Repository. Doing what I consider to be 'fractional movement building'.

Previously a behavior change researcher at BehaviourWorks Australia at Monash University, and a contributor to the development of a course on EA at the University of Queensland.

Co-founder and team member at Ready Research.

Former movement builder for the EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Founder and former lead for the EA Behavioral Science Newsletter.

See my LinkedIn profile for more of my work.

Leave (anonymous) feedback here.

Sequences
1

A proposed approach for AI safety movement building

Comments
410

Topic contributions
3

I like this idea, but wonder if CEA or another organization should take the lead on running something like this. Making donations to other people or informal groups is interpersonally and logistically complicated. For instance, people will often refuse a donation when offered it (or balk when their bank account details are requested), and taking money from a person may feel like an obligation, or be misinterpreted. It could work better if donors instead gave money to an organization that allocates it to a person and contacts them to receive it (and donates it elsewhere if they don't accept). That organization could also maintain a database of credible people/groups doing this work and offer a general donation option for people who just want to fund movement building of a certain type.

More generally, I'd like to see more effort to proactively identify and fund people who are doing good movement/community building work. Examples: talent-scout-type roles at CEA, some sort of community scan (e.g., a question in the annual community survey and in post-conference surveys about anyone who was exceptionally helpful), or an input process (e.g., a form where you can notify CEA that someone was particularly helpful in a relevant way).

This could also help address certain oversights. For instance, in my experience, someone can have a lot of positive impact in regional areas without getting noticed or appropriately supported, which can significantly reduce their impact. Catherine Low and Luke Freeman were once examples of such people.

Ok, I plan to share the PDF, so let me know when it is good to go!

Thanks! Do you want this shared more widely? 

Note that I estimate that putting these findings on a reasonably nice website (I generally use Webflow) with some nice interactive embeds (Flourish is free to use and very functional at the free tier) would take between 12 and 48 hours of work. You could probably reuse much of that work if you run future waves.

I am also wondering if someone should do a review/meta-analysis to aggregate public perceptions of AI and other risks. There are now many surveys with differing results, so people would probably value a synthesis.

I am curious to better understand why people disagree here.

I really like the vote and discussion thread, in part because I think that aggregating votes will improve our coordination and impact.

Without trying to unpack my whole model of movement building: I think that the community needs to understand itself (as a collective and as individuals) to have the most impact, and this approach may really help.

EA basically operates on a "wisdom of wise crowds" principle, where individuals base decisions on researchers' and thinkers' qualitative data (e.g., forum posts and other outputs).

However, at our current scale, quantitative data is much easier to aggregate and update on.

For instance, in this case, we are now seeing strong evidence that most people in EA think that AW is undervalued relative to global development (as seems to be the case), along with who holds that view and why. This is extremely useful for community and individual decision-making, and it would never have been captured in the prior system of coordinating via posts and comments.

Many people may act on or reference this information when they seek funding, write response posts, or choose a new career path. In a world without the vote, with only forum posts, these actions might not occur.

In short, I am very keen on this and keen to see more like it.

Low confidence, but my intuition is that animal welfare is more neglected and would have a better ROI in terms of suffering reduced.

Thank you for this. I really appreciate this in-depth analysis, but I think it is unnecessarily harsh and critical in places.

E.g., see: "Hendrycks has it backwards: In order to have a real, scientific impact, you have to actually prove your thing holds up to the barest of scrutiny. Ideally before making grandiose claims about it, and before pitching it to fucking X. Look, I’m glad that various websites were able to point out the flaws in this paper. But we shouldn’t have had to. Dan Hendrycks and CAIS should have put in a little bit of extra effort, to spare all the rest of us the job of fact checking his shitty research."

I will second the claim that Luke is exceptional, even amongst other exceptional people. He has a rare ability to be simultaneously impressive, warm, humble, caring, hardworking, and productive.

Hey! Yes, this is related to MIT/US immigration challenges and not something we can easily fix, unfortunately. We do sometimes hire people remotely. If you would like to express interest in working with/for us, you can submit a general expression of interest here.

Feel free to comment again if you have more specific questions, and I will do my best to answer. I may also ask HR to add more information about the position.
