This isn't a new or original idea, but it seems worth pointing out explicitly.
At EA events I've often heard something along the lines of "as an EA, I'm trying to maximize the amount of good I accomplish."
But this isn't quite what we should be doing, at least in theory (and setting aside my non-EA goals, which I do care about). Instead, as an EA I'm trying to maximize how awesome the world will be over time, not just the "awesomeness" that can be attributed to me.
More concretely, research, entrepreneurship, and similar work are shiny things that demonstrate I've done lots of good. On the other hand, for AI safety field building, spreading EA to areas where it isn't really present, etc., the impact can't always be traced back directly to me, but these also seem incredibly important and useful. I'm surprised at how many more safety researchers than field builders there are, and I'm guessing one reason is that it's difficult to pinpoint how much impact you've personally made as a field builder.
Most roles would probably also look slightly different if people focused less on their personal contribution to the world. For example, it might make sense for a researcher to hand a cool project idea to someone who isn't currently occupied and who they know will be better at the project.
I think it's worth the effort to consciously try to feel tons of happiness from seeing altruism being accomplished, and not just from accomplishing altruism.
Yeah, this makes sense. That said, I'm guessing that while some people are in theory trying to maximize the "good" they accomplish, in practice it's easy to forget about options whose impact isn't easily traceable. My point was also that it's worth explicitly putting in effort to look for these kinds of options.
By options, I mean things like giving a research project to a more capable person. I'm guessing some people wouldn't consider that this is something they can do.