I was introduced to effective altruism when an advocate joined a reading group that I lead. Over the subsequent years, this person has become increasingly influential on me, not only because of his ideas and methodical thinking, but because of the exemplary quality of his life and his commitment to his principles. Around the same time, I also began to experience growing conflict with the evangelical church I was a member of, which led to a measure of disillusionment and, from there, a re-evaluation of my beliefs that is still ongoing.
Currently, I work in software development centered on data engineering and analytics. However, my training is in the humanities, and I ultimately hope to make my way back into education and research at some point.
I am new to the EA movement, and so I have a lot to learn. I've hit a plateau in my career development and need some help reaching the next thing, whatever that is.
I feel that there will be many individuals here with far more talent than I have in the relevant areas. I have a high level of reading fluency in classical Greek and Hebrew, and a skill set comparable to a graduate student's in religious studies. I have a lot of practical experience with the evangelical church in North America. I have some experience as a software developer in a somewhat unusual context. I love to help people by:
Orwell's great. Sometimes cryptic communication is a useful way to convey to an in-group something that you want to hide from the wider audience. For example, a common interpretation of Jesus's parables is that they expressed political ideas cryptically which it would have been unacceptable for him to state outright. He always had plausible deniability as to their meaning, which was nonetheless obvious to his hearers. Not really sure what the context is on this board that would require something like that, though? Are the EAs liable to call together the council of moderators in the middle of the night and shadow-ban someone for wrongthink?
This particular metaphor really resonated with me for whatever reason.
I'm trying to make a career switch. I have small children in the family to care for. My current role is very demanding. I have pretty limited resources to put toward job hunting right now. I did not go to a top college. I'm not an elite applicant, though I've done well for myself in my circumstances, and a lot of my failure to do better is due to prioritizing volunteer and other work.
To put it crassly, if EA orgs can fully satisfy their staffing needs using recent, EA-aligned graduates of elite colleges, there is no point in me even applying.
The way it feels (when I'm feeling down) is that EA is not really intended for someone like me. The jobs are not there, and while I believe in and practice earning to give, reading the boards you sometimes get the impression that if you aren't a high enough earner, maybe even that isn't really worthwhile, since in an objective sense it isn't high impact.
And that's fine. Maybe EA can get all it needs from those talent pools, and maybe the urgency of the moment is such that even the money I can give is not that important. Obviously, it's plausible that's the case. But then, I'd like to know that, you know?
I do think some sort of moral-weights quizlet thing could be helpful for people to get to know their own values a bit better. GiveWell's models already do this, but only for a narrow range of philanthropic endeavors relative to the OP (and they are actual weights for a model, not a pedagogical tool). To be clear, I do not think this would be very rigorous. As others have noted, the various areas are more or less speculative in their proposed effects and have cost evaluations of varying completeness. But it might help would-be donors to at least start thinking through their values and, based on their interests, it could then point them to the appropriate authorities.
As others have noted, I feel existing chatbots are pretty sufficient for simple search purposes (I found GiveWell through ChatGPT), while for anything deeper, existing literature is probably better than any sort of fine-tuned LLM, IMO.
I have no idea what someone in this income group would do. If I were in that class, being the respecter of expertise that I am, I would not be looking for a chatbot or a quizlet, and would seek out expert advice, so perhaps it is better to focus on getting these hypothetical expert advisors more visibility?
IMO, you need to factor in the timeline on which you think AI safety is critical. While dentists may earn more, you are forgoing four years of income before you even start earning. You need to determine the rough break-even point at which you would cumulatively have earned more as a dentist, and therefore have been able to give more.
If the critical phase of AI safety research precedes that break-even point or falls near it, then you may, ironically, end up contributing less in terms of the marginal value of your giving.
Here's how I see things:
1. If AI advances so quickly that the earning power of CS collapses before you graduate, it is likely that the same will happen to dentistry before you finish that program. But maybe the latter isn't true, in which case it could be reasonable to pursue dentistry.
2. If AI advances at a moderate pace, the break-even logic I mentioned above probably means that you will have more impact by getting a moderately well-paying job sooner so that you can give during the critical period of AI development, since your giving would be largely deferred until after AGI if you went into dentistry.
3. If AI advances at a slow pace, then perhaps going into dentistry will ultimately allow you to contribute more.
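The break-even logic above can be sketched with a quick calculation. All of the figures here (salaries, training length, giving rate) are illustrative assumptions, not data; plug in your own estimates:

```python
def cumulative_giving(start_year, annual_salary, giving_rate, horizon):
    """Total donated from the year earning starts through `horizon` years from now."""
    earning_years = max(0, horizon - start_year)
    return earning_years * annual_salary * giving_rate

# Hypothetical figures -- adjust to your own estimates.
CS_SALARY, CS_START = 90_000, 0      # software job: start earning immediately
DDS_SALARY, DDS_START = 180_000, 4   # dentistry: four years of school first
GIVING_RATE = 0.10                   # fraction of income donated

# First year in which the dentist's cumulative giving catches up
# to the software developer's.
break_even = next(
    y for y in range(1, 50)
    if cumulative_giving(DDS_START, DDS_SALARY, GIVING_RATE, y)
       >= cumulative_giving(CS_START, CS_SALARY, GIVING_RATE, y)
)
print(break_even)  # with these numbers, year 8
```

With these made-up numbers the dentist doesn't overtake the developer's cumulative giving until year 8, so if the critical window for AI safety funding closes before then, the earlier, lower salary wins.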
One possibility you didn't mention, probably because it is unappealing to you: could you just major in dentistry? Then you would get the earning power and reduce the break-even problem.
You are young. If I were you, out of these two options, I would just major in what I was interested in, and test out my talents. I would major in CS. If I performed exceptionally in AI safety stuff, I would try my hand at a direct career in it. If I didn't, I would focus on getting some other software job with high earning potential.
You are certainly correct that earning-to-give is the rational move when you consider that the constraints are often on resources to fund our goals, rather than on candidates willing to work on them professionally.