Pato

Bio

Aspiring EA-adjacent trying to push the singularity further away

Comments
59

Topic contributions
1

"Models more powerful than GPT-4" are potential AGIs. That's what all the big AI companies are trying to build. Capitalism will push for large context window agents that can replace the workforce. The further we are from that, the further we are from x-risks.

Can you give examples in politics where you think putting up a fight was worse than not doing anything?

And also, how common do you think that is?

So no one should work on the worlds with the shortest timelines? Should they be given up?

Pato

This seems cool, but as a data point: it took me about 25 minutes to reach page 4, and page 5 asked questions I didn't know the answers to, so I didn't complete the form.

I take more time than most people for most things, but extrapolating, I think it would have taken me a lot more than "15 minutes" to fill out the 11 pages.

Is it me, or is there too much filler in some posts? This one could have been a quick take: "If you distance yourself from Rationality, be careful not to distance yourself from rationality".

Answer by Pato

Take a Minute by K'naan about the value of giving and epistemic humility lol

Answer by Pato

I recommend AISafety.com and, if you are looking for introductions in video/audio form, I like this selection (in part because I contributed to it).

None of them are that technical, though, and given that you mentioned your math knowledge, it seems that's what you're interested in.

In that case, the main thing to know is that the field is said to be "pre-paradigmatic": there is no consensus on the best way to think about the problems, and therefore on what potential solutions would even look like or how they would come about. But there is work outside ML that is purely mathematical and related to agent foundations, and this introductory post that I've just found seems to explain all of that better than I could; it would probably be more useful to you than the other links.

Answer by Pato

I think something like that could be great! I also wish learning about EA weren't so difficult or time consuming. About your questions:

  1. For the people who don't want to change careers because of EA, I think you can still share with them a list of ways they can have a positive impact on the world, something like what PauseAI has. That shouldn't have a big cost, and it would be possible to expand on it later.
  2. Adding to what you wrote, other ways they can still have an impact are:
    1. Changing their consumption habits, like becoming vegan and reducing their pollution
    2. Participating in the community, like in the forums or by volunteering
    3. Sharing the ideas and content, like the video/course itself
    4. Taking small everyday actions that come from learning more about EA, like expanding their moral circle and the causes they care about, and having a more rationalist mindset
    5. Stopping working on or investing in risky things, like AI capabilities
  3. I don't think so.
Pato

"it appears many EAs believe we should allow capabilities development to continue despite the current X-risks."

Where do you get this from?

Also, this:

"have <~ 0.1% chance of X-risk"

means p(doom) <~ 0.001, i.e. less than roughly 1 in 1,000.
