Hi everyone,
Over the past few months, I’ve noticed something I haven’t seen discussed much here — though maybe it should be.
I work part-time in operations at a small EA-aligned non-profit, and I’ve been increasingly using ChatGPT in my day-to-day work: drafting grant proposals, outlining reports, even helping shape early ideas for new programs. In quiet moments, I’ve started to wonder:
Is ChatGPT already changing how we do Effective Altruism — not in some grand philosophical way, but in the mundane, invisible workflows of our daily work? And is that something to celebrate… or something we should approach with caution?
A few personal observations:
- Writing efficiency: I used to spend hours polishing wording for donor communications. Now, a few well-placed prompts give me a great first draft. This saves time, yes — but does it also risk smoothing over nuance or making everything sound a bit “samey”?
- Research assistance: GPT helps me skim academic papers faster, extract key ideas, or brainstorm interventions I wouldn’t have thought of alone. But how much of this is "real understanding" vs. clever word prediction?
- Emotional detachment? Sometimes I worry that relying on a chatbot for idea-generation is making me less emotionally connected to the cause I’m working on. I get more done… but feel less invested. Anyone else experiencing this?
- Leveling the playing field: On the plus side, I’ve seen junior staff or people with less writing experience suddenly feel empowered — their ideas now come across more clearly, confidently, professionally. That feels deeply EA-aligned: enabling more people to contribute meaningfully.
I’m curious:
- How are you (if at all) using ChatGPT or similar tools in your EA work or thinking?
- Do you think its increasing role could change how we reason, communicate, or prioritize within EA?
- Are there hidden risks we’re overlooking — like epistemic distortions, groupthink, or an over-reliance on AI-generated "plausible" ideas?
- Or is this just a helpful new calculator — a tool that, when used wisely, helps us do more good, better?
I don’t have clear answers, but I’d love to hear from others — whether you're using these tools every day or actively avoiding them. I suspect this is one of those “slow revolutions” we’ll only fully understand in hindsight.
Warmly,
— An EA trying to keep both curiosity and caution in balance
Quickly: "and should we be worried or optimistic?"
This title seems to presume that we should either be worried or be optimistic, which I consider basically a false dichotomy. It's clearly possible to be worried about some parts of AI and optimistic about others, which is the state I find myself in.
I'm happy to see discussion on the bigger points here, just wanted to flag that issue to encourage better titles in the future.
I didn’t mean to suggest it’s an either/or thing, and I totally agree that it’s possible (and probably healthy) to feel both worry and optimism at the same time. That’s actually where I find myself most days too.
The title was more of a shorthand to capture that tension — not to say we must pick one side, but to get people into the headspace of asking: “What’s actually going on here, and how should we feel about it?”
I haven’t used it extensively for research tasks yet, but I do really worry about that. There’s something I feel viscerally when I ‘get’ a paper, and it usually requires a deep look into the mechanics of how the study was run (i.e. reading the whole thing); that’s just not going to come from a skim-read. There’s a lot of nuance in the literature my intervention is based on, and if I didn’t understand it, I’d risk inappropriately embellishing my results.
I think if I were using research tools, they’d save me a lot of time in the googling phase, but I’d still skim papers for value and hand-read the most important ones. Anecdotally, this seems to be what most full-time researchers do.
(I also find talking confidently about the details of papers impresses people I talk to, which can be valuable in and of itself)
There’s definitely something qualitatively different about really reading a paper vs. getting a summary, even a good one. I’ve noticed that when I rely too much on ChatGPT or similar tools to summarize studies, I sometimes end up with a false sense of confidence — like I “get” it, when actually I’ve missed key caveats or limitations that only become clear when reading the methods section or skimming figures.
I totally agree that tools can help in the “discovery” phase — finding relevant papers faster, generating search terms I hadn’t thought of, or even helping me decide which ones are worth digging into. But I still feel like the deep understanding (and the kind of judgment that comes with it) only happens when I go through the paper myself.
Also, yes to your point about being able to talk confidently about the details — I’ve had that experience too, where being fluent in a study’s methods actually changes how seriously people take an idea. That kind of credibility seems hard to fake with summaries alone.
I get your concern. I'll be the first to admit that I use ChatGPT pretty frequently - I find it very useful for polishing written docs or emails, or for getting me started with writing when I'm having a bit of a mental block. However, while tools like ChatGPT are great for efficiency, I do worry they can lead to surface-level engagement, where people don’t fully grasp the complexities of the work or even the gist of what they are writing about. I also worry that relying on AI might make us miss key details and context, and I see your point that it could reduce our emotional connection to the cause. We might get more done, but at the cost of being less invested, or maybe less EA-aligned?
That said, AI does level the playing field for those with less experience, helping them communicate more clearly. I think/hope the key is finding the balance—using AI for efficiency without losing the depth, understanding, creativity, humanness and emotional connection that’s critical to the work.
Totally agree — “surface-level engagement” is exactly the phrase I’ve been circling around without quite naming it. That’s the subtle risk, I think: you feel productive, even insightful, but you haven’t actually done the real thinking yet. It’s like reading a menu and thinking you’ve tasted the meal.
And I really resonate with your point about emotional connection. When I’m too “efficient,” sometimes the work starts to feel oddly transactional — like I’m just slotting in the next block of text or ideas, rather than wrestling with them. I don’t think EA work has to feel emotionally intense all the time, but there’s a danger if it becomes purely mechanical.
That said, I’m with you: AI can absolutely empower people who might otherwise struggle to express their ideas clearly — whether due to language barriers, confidence, or just inexperience with writing. I’ve seen it give people a kind of voice they didn’t have before, and that feels like a win.
Have you found any specific habits or “guardrails” that help you stay on the deeper-thinking side when using ChatGPT?