When trying to pitch EA to someone who cares about local politics, climate change, social justice, being a doctor, or something else you might not consider the highest-EV cause, I see most people get it wrong. They lead off with something like, “well, it seems implausible that extreme climate change will be an existential risk, so you should probably focus on something else instead.” Put yourself in their shoes. If this were the first thing you’d ever heard about effective altruism, would you feel welcomed? I think it frames EA as adversarial and closed-minded, and some people won't give EA another shot.
I had these types of conversations a lot when running an EA uni group, and I've developed a mnemonic called the Inside-Out Model that I find helpful. Rather than immediately comparing the cause they're interested in with one you think is more impactful, start by applying the EA mindset within their cause, then work your way out.
For example: “you’re interested in climate change—great! Well, within climate change, it seems like certain interventions are way more effective than others, such as working on green technology or making effective donations.” This gives the conversation a congenial vibe rather than a dogmatic one. After talking from within their cause area for a bit, transition out with something like “in fact, just as some interventions against extreme climate change are more effective than others, there may be causes more effective than climate change altogether.” By this point, hopefully they'll listen to your opinion in good faith.
I think it’s important to keep the conversation high-fidelity and not stay on the “inside” for too long. If you think AGI is more important than climate change, don’t roll over on your belly. But maybe wait 30 seconds.