
Pedro Freire

@ Independent

Posts: 1


Comments: 15

This was inspiring to read :)

For some people, I think it is important to want more money, as they might have opportunities to use it quite effectively. I mention this because I imagine some people would benefit from becoming more open to wanting a higher income: sometimes a person is in a particularly good position to put more money to good use.

One big thing I learned is that we are in a narrow window of time. AI is growing so fast, and the values we put into it today are going to stay there for a long time. It's a lot easier to fix the foundation of a house while you're building it than to change it after the whole building is up.

I agree this looks important. (It is kind of hard to say what to do about this.)

I was super hesitant about sharing this here, because indeed it is missing a lot of context.

Honestly, it is extremely demoralizing to be sincere and vulnerable in asking for help, and have that be called emotional manipulation.

Here's a reflection Claude wrote about my original quick take:

"Does EA have a blind spot around personal hardship?

EA culture is pretty good at thinking about suffering at scale — but I sometimes wonder if it struggles to respond well when suffering shows up close and personal.

If EA communities can't extend basic good faith to someone asking for help in a moment of need, is that a failure of the culture? We talk a lot about optimizing impact, but a reflexive suspicion toward personal appeals might mean we're leaving real, immediate suffering unaddressed — and making people feel worse in the process."

[I removed this quick take because it was vulnerable to share and there were lots of important layers that were missing from the story to do it justice.]

Maybe a more clarifying and charitable title for an 'AI As Normal Technology'-like position would be 'No Major Technological Revolution Has Been Normal'.

Here are some reflection topics around lifestyle and priorities that I shared with some fellow EAs a few months ago. I am sharing the text here in case it interests anyone, and I will elaborate and expand on these points later if I have the opportunity.

""" Support Systems: Seriously. I didn't even know this term until after all this happened, and it would have changed everything. There's something about how people are instructed in STEM institutions (and as a consequence, many EA institutions) that makes it all about careers, how one's impact is understood by their public professional life. And then it turns out that in reality a lot of the most publicly impactful people have these incredibly beautiful family and fraternity systems that were at the core of everything they've done, that never get talked about. Too many yang, public, external, wikipedia-worthy archetypes of impact. It would be really awesome if every youngling EA-in-training knew that having strong and abundant support systems, investing in true family and friends, investing in intimacy, figuring out relationships, being connected to non-EAs... that this sort of thing might be not a distraction from impact but a foundation for impact.

Something something about impact theory: I don't know, there's something about EA theory where it wants to be really convincing that being an EA is the most important thing to do, but somewhere in all the moral arguments it takes way too many shortcuts. By taking shortcuts to force the conclusion that being an EA is the right moral thing to do, you are forced to ignore and push under the rug all forms of impact that don't currently fit well into EA career stories and don't have a legible trace of impact connecting them to an EA. I don't really know how to solve this. If I were to give any pointers, here's what first comes to mind:

-- Legibility: there's a serious expectation that impact has to be legible. This is baked into the EA foundation. Unfortunately, in the real world, there are probably more illegible acts of impact than legible ones. Sure, I think we've been adding footnotes to EA material about this, but this is not a small thing that can be addressed separately from the rest of the decision-making. It truly affects the foundation on which the majority of EA arguments are based. One has to be able to make decisions in the world while incorporating and accepting the fact that the majority of impact is fundamentally illegible, made by people you won't get to know personally, and that sometimes public information and public consensus about events can be pretty irrelevant when it comes to understanding and planning on the ground.

-- Argumentation: there's an expectation that truth is found by finding the best arguments. This is true in all the cases where it is true, except in all the cases where it isn't. This stems from the above: arguments rely on legible, shared-knowledge facts, and so much of what decides what happens in daily life is far removed from that. Simplifications are incredibly robust in some cases and incredibly illusory in others. Obviously we don't want to abandon arguments, but rather grow beyond them.

-- Curiosity and connection: The majority of good human beings are not EAs! What are they all truly up to?"""

I guess the overall point here is that thinking of saving lives purely in terms of dollars feels like a type error: it is entirely possible today to save lives as an investment, by lifting undersupported people out of their precarious realities and allowing them to become productive economic workers who eventually generate more value than the initial life-saving investment.
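To make the arithmetic behind this concrete, here is a minimal back-of-the-envelope sketch. Every number in it (intervention_cost, annual_output, working_years, discount_rate) is a hypothetical placeholder I am introducing for illustration, not an estimate from any source; the point is only that a discounted stream of future output can exceed an upfront life-saving cost.

```python
# Hypothetical sketch of "saving a life as an investment".
# All numbers are illustrative placeholders, not real cost-effectiveness estimates.

def npv(annual_amount: float, years: int, discount_rate: float) -> float:
    """Net present value of a constant annual amount paid at the end of each year."""
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(1, years + 1))

intervention_cost = 5_000   # hypothetical cost to save one life
annual_output = 1_000       # hypothetical yearly economic output of the person saved
working_years = 30          # hypothetical productive years gained
discount_rate = 0.05        # hypothetical annual discount rate

future_output = npv(annual_output, working_years, discount_rate)
print(f"Discounted future output: ${future_output:,.0f}")
print(f"Net return on the intervention: ${future_output - intervention_cost:,.0f}")
```

Under these made-up numbers the discounted output (about $15,000) exceeds the upfront cost, which is the shape of the argument above; whether any real intervention clears that bar depends entirely on the actual figures.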
