Research @ MIT FutureTech/Ready Research
3123 karma · Working (6-15 years) · Sydney NSW, Australia



Affiliate researcher at MIT FutureTech helping with research, communication and operations. Doing some 'fractional movement building'.

Previously a behaviour change researcher at BehaviourWorks Australia at Monash University, and helped develop a course on EA at the University of Queensland.

Co-founder and team member at Ready Research.

Former movement builder for the EA groups at i) UNSW (Sydney, Australia), ii) Sydney, Australia, and iii) Ireland.

Marketing Lead for the 2019 EAGx Australia conference.

Founder and former lead for the EA Behavioral Science Newsletter.

See my LinkedIn profile for more of my work.

Leave (anonymous) feedback here.


A proposed approach for AI safety movement building


Topic contributions

Thanks for writing this and for your good intentions. Sorry you haven't received more feedback!

My quick thought is that you should probably try to work for one of the organisations doing something like your project before you attempt to start a new organisation. There are usually a lot of useful things you can learn from established alternative projects, including how and why they operate as they do. Additionally, it is probable that helping something big be a little better is more impactful in expectation than doing something novel and risky which probably won't succeed or scale (based on the base rates for new projects in EA and elsewhere).

Of course that's just a quick opinion from a quick read but I hope it is helpful.

Thanks! His post definitely suggests awareness and interest in EA.

I wonder what happened with the panel. He said he would be on it, but from what I can see in that video, he wasn't. I imagine that someone could find out what happened there by contacting people involved in organising that event. I don't care enough to prioritise that effort, but I'd appreciate learning more if someone else wants to investigate.

Thanks for following up! The evidence you offer doesn't persuade me that most EAs are extremely rich guys, because it isn't arguing that. Did you mean to claim that most EAs who are rich guys are not donating any of their money, or not more than the median rich person?

I also don't feel particularly persuaded by that claim based on the evidence shared. What are the specific points in the links that are persuasive? I couldn't see anything particularly relevant from scanning them — nothing that I could use to make an easy comparison between EA donors and median rich people.

I see that "Mean share of total (imputed) income donated was 9.44% (imputing income where below 5k or missing) or 12.5% without imputation" for EAs, versus "around 2-3 percent of income" for US households, which seems opposed to your position. But I haven't checked carefully, and I am not the kind of person who makes these sorts of careful comparisons very well.

I don't have evidence to link to here, or time to search for it, but my current belief is that most of EA's funding comes from rich and extremely rich people (often men) donating their money.

Thanks for the input!

I think of EA as a cluster of values and related actions that people can hold/practice to different extents. For instance: caring about social impact, seeking comparative advantage, thinking about long-term positive impacts, and being concerned about existential risks including AI. He touched on all of those.

It's true that he doesn't mention donations. I don't think that discounts his alignment in other ways.

Useful to know he might not be genuine though.

Also, someone messaged me about a recent controversy that Bryan was involved in. I thought he had been exonerated, but this person thought that he had still done bad things.


And his response

Worth knowing about when judging his character.

Yeah, I think that's part of it. I also thought it was very interesting how he justified what he was doing as being important for the long-term future, given the expected emergence of superhuman AI. E.g., he is running his life by an algorithm in expectation that society might be run in a similar way.

I will definitely say that he does come across as hyper-rational and low-empathy in general, but there are also some touching moments here where he clearly has a lot of care for his family and really doesn't want to lose them. Could all be an act, of course.

Thanks for sharing your opinion. What's your evidence for this claim?

Yeah, he could be planning to donate money once his attempt to reduce or overcome mortality is resolved.

He said several times that what he's doing now is only part one of the plan, so I guess there is an opportunity to withhold judgment and see what he does later.

Having said all that, I don't want to come across as trusting him. I just heard the interview and was really surprised by all the EA themes which emerged, and by the narrative he proposed for why what he's doing is important.
