This probably isn't a new or original idea, but it seems worth pointing out explicitly.
A lot of times at EA events I've heard something along the lines of "as an EA, I'm trying to maximize the amount of good I accomplish."
But this isn't quite what we should be doing (in theory, and setting aside my non-EA goals, which I do care about). Instead, as an EA I should be trying to maximize how awesome the world over time will be, not just the "awesomeness" that can be attributed to me.
More concretely, things like research and entrepreneurship are shiny accomplishments that visibly demonstrate I've done lots of good. On the other hand, for work like AI safety field building or spreading EA to areas where it isn't yet present, the impact can't always be traced back directly to me, yet this work also seems incredibly important and useful. I'm surprised at how many more safety researchers there are than field builders, and I'm guessing one reason is that it's difficult to pinpoint how much impact you've personally made as a field builder.
Most roles would probably also look slightly different if people focused less on their personal contribution to the world. For example, it might make sense for a researcher to hand off a promising project idea to someone with more free capacity who they know would execute it better.
I think it's worth the effort to consciously cultivate happiness from seeing altruism being accomplished, not just from accomplishing it yourself.
In the context of this post, I read "my contribution to good" as meaning "good done that is clearly attributable to me," rather than "my counterfactual impact."
Though I'd usually treat "my contribution" as synonymous with "my counterfactual impact," I still think this reframing (from "I am maximizing how much good I do" to "I am maximizing how good the world is") could be instrumentally very useful for feeling good about more indirect ways of having a counterfactual impact.