Pato

Technical Ops @ PauseAI

Bio

Aspiring EA-adjacent trying to make the singularity further away


When a lot of people (like me) say "#notallEAs", they're probably not using it anecdotally to refer to themselves, as you imply. They're just pointing to the overlap. So I think that part of the post is misguided.

Even if the last question is misguided, if I, a supporter of PAI, were to consider myself an EA, why would that be?
There are several possible reasons: I changed my career plans because of it; I work and have been working thanks to EA funding (by working at PauseAI lmao); I've spent a bunch of months in the EA Hotel and plan to go back to it or the PauseAI Hotel (which is right next to it); I've attended conferences, received grants, and read a couple of books; I check out the forum here and there; I agree with the philosophy, and agree with the median EA on models of the world and ethics more than with any other median X; I plan to donate to EA orgs soon and want to keep engaging with the community; etc.
That list seems pretty big in contrast to "but the core funders and leaders aren't supporting the advocacy for an AI moratorium."

I also don't think any of your arguments is good enough to justify disengaging from the EA movement for someone who agrees with the philosophy and has only a handful of disagreements or problems with the median EA member or the movement. This applies to Rationalist spaces too, to a certain extent.

It’s not like there are better alternatives to it for people who are trying to figure out important things about the world and how to improve it.
Even if you think a lot of them have a huge bias in some specific regard, you can still interact with them with that in mind, and you are still less likely to find other biases in them than in any other big community by a large margin. You’re still much more likely to find people who are very knowledgeable + kind + smart + dedicated to doing good in EA than in any other space that I know of. People who can change your mind, fund part of your work, or help you on your path to having a better impact in the world in other ways. 

It's really good to be mindful of the ways some groups have some control over the community, and of their potential biases and personal interests. But if the response to that is disengaging from the community instead of voicing your disagreements here and there, then you're giving them more power.

I don’t think this framing that either everyone/ a majority in EA needs to support X tactic/ cause/ org or they’re all against it is fair or useful at all. As others have mentioned, there’s a bunch of overlap between PauseAI and EA, of people, of donors, of beliefs… I don’t think most of the opinions in the post represent the opinions of the people who I’ve met at PauseAI.

I'm surprised that a post that overgeneralizes so much while making a bunch of ad hominem attacks is net positive votes-wise.

"Models more powerful than GPT-4" are potential AGIs. That's what all the big AI companies are trying to build. Capitalism will push for large context window agents that can replace the workforce. The further we are from that, the further we are from x-risks.

Can you give examples in politics where you think putting up a fight was worse than not doing anything?

And also, how common do you think that is?

So no one should work on the worlds with the shortest timelines? Should they be given up?


This seems cool, but as a data point: it took me like 25 minutes to reach page 4, and page 5 asked questions which I didn't know the answer to so I didn't complete the form.

I take more time than most people for most things, but extrapolating, I think it would have taken me a lot more than "15 minutes" to fill out the 11 pages.

Is it just me, or is there too much filler in some posts? This could have been a quick take: "If you distance yourself from Rationality, be careful not to distance yourself from rationality."

Answer by Pato

Take a Minute by K'naan about the value of giving and epistemic humility lol

Answer by Pato

I recommend AISafety.com and, if you're looking for introductions in video/audio form, I like this selection (in part because I contributed to it).

None of them are that technical though, and given that you mentioned your math knowledge, it seems that's what you're interested in.

In that case, the thing you have to know is that the field is said to be "pre-paradigmatic": there isn't any kind of consensus about the best way to think about the problems, and therefore about what potential solutions would even look like or how they would come about. But there's work outside ML that is purely mathematical and related to agent foundations, and this introductory post that I've just found seems to explain all that better than I could; it would probably be more useful for you than the other links.

Answer by Pato

I think something like that could be great! I also wish learning about EA weren't so difficult or time-consuming. About your questions:

  1. For the people who don't want to change careers because of EA, I think you can still share a list of other ways they can have a positive impact in the world, something like what PauseAI has. That shouldn't have a big cost, and it would be possible to expand on it later.
  2. Adding to what you wrote, other ways they can still have an impact are:
    1. Changing their consumption habits, like becoming vegan and reducing their pollution
    2. Participating in the community, like in the forums or by volunteering
    3. Sharing the ideas and content, like the video/course itself
    4. Small everyday actions that come from learning more about EA, like expanding their moral circle and having a more rationalist mindset
    5. Not working on or investing in risky things, like AI capabilities
  3. I don't think so.