
If you’re interested in having a meaningful EA career but your experience doesn’t match the white-collar, intellectual jobs the typical EA community leans towards, then you’re just like me.

I have been earning to give as a nuclear power plant operator in Southern Maryland for the past few years, and I think it’s a great opportunity for other EAs who want to make a difference but don’t have a PhD in philosophy or public policy.

Additionally, I have personal sway with Constellation Energy’s Calvert Cliffs plant, so I can influence the hiring process to help any interested applicants.

Here are a few reasons I think this is such an ideal earning-to-give career:

  • A high-income job in a low cost of living area means you will be able to donate a significant portion of your paychecks and still live comfortably.
  • The work supports renewable, clean energy, which is a socially positive career choice.
  • Double your impact: Constellation matches up to $10,000 of donations.
  • The power plant is an hour away from Washington DC, which has a thriving EA community.
  • If you are interested in career growth, there are opportunities to gain extra skills and donate more!

Nuclear power does require some specialized knowledge and skills, but you may be surprised how qualified you already are. Degree requirements vary significantly by site, and many plants only require a high school diploma.

I have ~7 years of experience working on nuclear power plants in the Navy and the civilian world, and I am incredibly passionate about getting others on board. Please reach out to me if you have any interest at all. I am happy to provide tutoring and nuclear-specific career coaching, and I can help train and prepare you for any nuclear power position you may be interested in.

Pros:

  • Great money. Starting pay (at my facility) is $54 per hour. With overtime built into the schedule, you’ll start out making about $140,000 per year (pre-tax) without picking up additional days; picking up extra days is an option on top of that. (I made $170k last year.)
  • Good benefits: Healthcare, dental, profit sharing, employee stock purchase plan, etc.
  • Donation matching up to $10,000 (on payroll donations)
  • No college degree or prior experience required at some locations (though both help).
  • Defined career progression (first three levels are very straightforward)
  • Non-standard schedule means you can get longer periods of time off
  • No travel required.
  • I am willing to help you study before you start, so you can cruise through some of the initial examinations. 
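For the curious, the starting-pay figure above can be back-of-the-enveloped. This is a rough sketch under my own assumptions (a 1.5x overtime multiplier and roughly 8 built-in overtime hours per week), not official Constellation figures:

```python
# Rough estimate of annual pre-tax pay from the $54/hour starting rate.
# Assumptions (mine, not the employer's): 40 straight-time hours/week,
# ~8 built-in overtime hours/week at a 1.5x multiplier, 52 weeks/year.
BASE_RATE = 54.0       # starting hourly pay, dollars
OT_MULTIPLIER = 1.5    # assumed time-and-a-half for overtime
STRAIGHT_HOURS = 40
OT_HOURS = 8

weekly_pay = STRAIGHT_HOURS * BASE_RATE + OT_HOURS * BASE_RATE * OT_MULTIPLIER
annual_pay = weekly_pay * 52
print(f"${annual_pay:,.0f} per year")  # prints "$146,016 per year"
```

That lands in the same ballpark as the ~$140k figure; the exact number depends on how much overtime the rotating schedule actually builds in.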

Cons: 

  • The biggest downside is the schedule: rotating shift work with 12-hour shifts. It is still doable to fulfill family obligations, but it is definitely harder than working Monday-Friday, 9am to 5pm.
  • No drugs. If you enjoy recreational marijuana (or other, stronger recreational drugs), then this isn’t a good fit. Operators at nuclear power plants get tested for drugs frequently.
  • Outages occur once a year and involve a month of “4 days on, 1 day off” schedule.
  • Requires a certain amount of basic knowledge to start (best measured by the POSS/BMST, see below)
  • Classes start only once every 8 months, so it may not be good timing for your life.
  • It is somewhat physically demanding. It requires a lot of walking, manipulating machine parts, moving things, etc. You don’t need to be able to win a Crossfit competition, but you do need a certain minimum level of fitness.
  • The work can be somewhat mundane. Your goal is to maintain operations and make sure things are running smoothly.

Next Steps:

Let’s chat! If you’d like to ask some questions or just hear more about my anecdotal experience, I would love to connect.

The practice exams are found here; you can log in with “firstenergy” as both the username and the password. If you take the POSS and/or BMST and score around 60% or above, I can probably help you get the rest of the way there.

The Calvert Cliffs position with Constellation is open now!

More Information:

Nuclear Equipment Operator in Oswego, New York | Constellation Energy Generation, LLC. 

Auxiliary Operator Trainee in Lusby, Maryland - Constellation Energy

See: EA for dumb people? as some of my inspiration for offering up my help! 
Comments



If you're open to disclosing this, how many people reached out to you about the position? I'm curious about the Forum's overall level of appetite for ETG-oriented posts like this, which I think could be written for many other jobs too!

I can disclose that at least one person did reach out (me). I'm a machinist/mechanic/technician/generally hands-on person, and this post made me feel particularly seen, enough to finally quit lurking and make an account.

Outstanding! If you end up deciding to try for a nuclear job, I wish you the best of luck.

Not many people! Three people have reached out, including Pogden, but no one has fully committed.

I intended this post to be more of a standing offer, though. I looked for something like it when I got out of the Navy, so I thought it would be good to turn around and offer help to the next folks!

On top of mentioning a specific opportunity, I think this post makes a great case in general for considering work like this (great wage & benefits, little experience necessary, somewhat mundane, shiftwork). I do feel a bit uncomfortable about the part where you mention using personal sway to influence the hiring process, though, as this could undermine fair hiring practices; then again, I could be overreacting.

Yeah, it’s definitely something I thought about how to explain; I wasn’t sure how to do so succinctly, so I kinda just cut the section.

I’m not willing to recommend people who are unqualified, but I am trying to help people study and prepare for the job, which makes them a more qualified candidate generally! 

I can also pass along a resume and help people prepare for the interview. I’m pretty respected (I hope!), so my testimony as to your capability carries some weight.
 

I think those things are normal; I’m distinctly aware of not violating norms for this job (because it’s nuclear power, they check for stuff like that!).

Thanks for bringing it up, I wasn’t sure about how to present the section!

Just wanted to say how much I appreciate you sharing this! 
This seems like a great way to ETG without highly specialised skills. If I had come across this a few years ago, I would have definitely considered it.
