
Chris Leong

Organiser @ AI Safety Australia and NZ
5988 karma · Joined Nov 2015 · Sydney NSW, Australia

Bio


Currently doing local AI safety Movement Building in Australia and NZ.

Comments (978)

One difference between our perspectives is that I don't take for granted that this process will occur unless the conditions are right. And the faster a movement grows, the less likely it is that lessons are passed on to those who are coming in. This isn't a dismissal of these people; it's just how group dynamics work, and a consequence of more experienced people having less time to engage.

I want to see EA grow fast. But past a certain threshold (I'm not sure exactly where it lies), our culture will most likely start to degrade. That said, I'm less concerned about this than before. As terrible as the FTX collapse and recent events have been, they may actually have resolved any worries about growing too fast.

I’d imagine the natural functions of city and national groups vary substantially.

I was previously very uncertain about this, but given the updates in the last week, I'm now feeling confident enough in my prediction of the future that I regret any money I put into my super (our equivalent of a pension).

People reading this comment, please do not interpret this as financial advice; it's just a statement of where I'm at.

A few questions that you might find helpful for thinking this through:

• What are your AI timelines?
• Even if you think AI will arrive by X, perhaps you'll target a timeline of Y-Z years because you think you're unlikely to be able to make a contribution by X
• What agendas are you most optimistic about? Do you think none of these are promising and what we need are outside ideas? What skills would you require to work on these agendas?
• Are you likely to be the kind of person who creates their own agenda or contributes to someone else's?
• How enthusiastic are you about these subjects? Are you likely to be any good at them? Many people make a contribution without drawing on anything outside of computer science, but sometimes it takes a person with outside knowledge to really push things forward to the next level.

Do the intro fellowship completions only include the EA Intro Fellowship, and not people doing the AI Safety Fundamentals course?

My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the 'narrow EA' strategy is a mistake because there's a good chance it is unethical to try to guide society without broader societal participation. 


I suppose it depends on how much of an emergency you consider the current situation to be.

If you think it's truly a dire situation, I expect almost no-one would reason as follows: "Well, we're insufficiently diverse, it'd be immoral for us to do anything, we should just sit over here and wait for the end of the world".

I suspect that, at least in these circumstances, a more productive lens is that of responsibility: those who are afforded disproportionate influence are responsible for using it for the good of all, and for striving to be conscious of potential blindspots arising from selection biases.

Just to clarify, the above paragraphs are an argument against "it is unethical to try to guide society without broader societal participation" rather than an argument for narrow EA. I support the latter as well, but I haven't made an argument for it here.

If EA decided to pursue the politics and civil society route, I would suggest it would likely make sense to follow a strategy similar to what the Good Ancestors Project has been following in Australia. This project has done a combination of: a) outreach to policy-makers, b) coordinating an open letter to the government, c) making a formal submission to a government inquiry, and d) walking EAs through the process of making their own submissions (you'd have to check with Greg to see whether he still thinks all of these activities are worthwhile).

Even though AI policy seems like the highest priority at the moment, there are benefits to working on multiple cause areas: a) you can only submit to an inquiry when one is happening, so covering more cause areas increases the chance that something relevant is open; and b) there's a nice synergy in getting EAs whose main focus is one cause area to submit to inquiries for other areas.

Greg has a great explanation in which he talks about EA having spent a lot of effort figuring out how to leverage our financial capital and our career capital to make the world better, while neglecting our political capital. Obviously there's the question of whether we have good ways to deploy that capital, but I suspect the answer is that we do.

I'm not claiming that this is necessarily the route forward, but it is likely worth exploring in countries with well-developed EA communities.

If this ends up succeeding, then it may be worthwhile asking whether there are any other sub-areas of EA that might deserve their own forum, but I suppose that's more a question to ask in a few months.

To be honest, I don't really see these kinds of comments criticising young organisations, which likely have access to limited funding, as helpful. I think there are some valid issues to be discussed, but I'd much rather see them discussed at an ecosystem level. Sure, it's less than ideal that low-paid internships provide an advantage to those from a particular class, but it's also easier for wealthier people to gain a college degree, and I think it'd be a mistake for us to criticise universities for offering degrees. At least with these internships you're being paid something, rather than accruing debt, so they're actually much more accessible than the comparison case.

But I suppose this doesn't address my real objection, which is that there are people who are willing to work to make the world better and an organisation that is willing to provide them with some financial support to make it happen. In return, these people gain the opportunity to develop new skills, and if these interns are particularly talented, they are likely to be referred on to further opportunities. This might even change the course of someone's career: someone who was just going to go into the business world might end up having a highly impactful career instead.

So it feels like, given how many benefits there are, we should have a really high bar for standing in the way of such things, and I don't feel that bar is met here. There's so much that is horrible in the world, but we have the opportunity to change that. And if that involves a large number of €1,000/month internships, well, that seems like an incredibly low price to pay.
