Hi everyone,
Over the past few months, I’ve noticed something I haven’t seen discussed much here — though maybe it should be.
I work part-time in operations at a small EA-aligned non-profit, and I’ve been increasingly using ChatGPT in my day-to-day work: drafting grant proposals, outlining reports, even helping shape early ideas for new programs. In quiet moments, I’ve started to wonder:
Is ChatGPT already changing how we do Effective Altruism — not in some grand philosophical way, but in the mundane, invisible workflows of our daily work? And is that something to celebrate… or something we should approach with caution?
A few personal observations:
- Writing efficiency: I used to spend hours polishing wording for donor communications. Now, a few well-placed prompts give me a great first draft. This saves time, yes — but does it also risk smoothing over nuance or making everything sound a bit “samey”?
- Research assistance: ChatGPT helps me skim academic papers faster, extract key ideas, and brainstorm interventions I wouldn’t have thought of alone. But how much of this is “real understanding” vs. clever word prediction?
- Emotional detachment? Sometimes I worry that relying on a chatbot for idea generation is making me less emotionally connected to the cause I’m working on. I get more done… but feel less invested. Anyone else experiencing this?
- Leveling the playing field: On the plus side, I’ve seen junior staff or people with less writing experience suddenly feel empowered — their ideas now come across more clearly, confidently, and professionally. That feels deeply EA-aligned: enabling more people to contribute meaningfully.
I’m curious:
- How are you (if at all) using ChatGPT or similar tools in your EA work or thinking?
- Do you think its increasing role could change how we reason, communicate, or prioritize within EA?
- Are there hidden risks we’re overlooking — like epistemic distortions, groupthink, or an over-reliance on AI-generated “plausible” ideas?
- Or is this just a helpful new calculator — a tool that, when used wisely, helps us do more good, better?
I don’t have clear answers, but I’d love to hear from others — whether you’re using these tools every day or actively avoiding them. I suspect this is one of those “slow revolutions” we’ll only fully understand in hindsight.
Warmly,
— An EA trying to keep both curiosity and caution in balance
---

There’s definitely something qualitatively different about really reading a paper vs. getting a summary, even a good one. I’ve noticed that when I rely too much on ChatGPT or similar tools to summarize studies, I sometimes end up with a false sense of confidence — like I “get” it, when actually I’ve missed key caveats or limitations that only become clear when reading the methods section or skimming the figures.
I totally agree that tools can help in the “discovery” phase — finding relevant papers faster, generating search terms I hadn’t thought of, or even helping me decide which ones are worth digging into. But I still feel like the deep understanding (and the kind of judgment that comes with it) only happens when I go through the paper myself.
Also, yes to your point about being able to talk confidently about the details — I’ve had that experience too, where being fluent in a study’s methods actually changes how seriously people take an idea. That kind of credibility seems hard to fake with summaries alone.