This is a special post for quick takes by Peter Wildeford. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.

I get a general vibe in EA (and probably the world at large) that being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high impact work, often higher impact than research (especially on the margin).

I see many EAs erroneously try to go into research and stick with research despite having very clear strengths on the operational side, insisting that they shouldn't do operations work unless they clearly fail at research first.

I've personally felt this: I started my career very oriented towards research, was honestly only average or even below-average at it, and then switched into management, which I think has been much higher impact (and has likely counterfactually generated at least a dozen researchers).

For operations roles, and focusing on impact (rather than status), I notice that your view contrasts markedly with @abrahamrowe’s in his recent ‘Reflections on a decade of trying to have an impact’ post:

Impact Through Operations

  • I don’t really think my ops work is particularly impactful, because I think ops staff are relatively easy to hire for compared to other roles. However I have spent a lot of my time in EA doing ops work.
    • I was RP’s COO for 4 years, overseeing its non-research work (fiscal sponsorship, finance, HR, communications, fundraising, etc), and helping the organization grow from around 10 to over 100 staff within its legal umbrella.
    • Worked on several advising and consulting projects for animal welfare and AI organizations
      • I think the advising work is likely the most impactful ops work I’ve done, though I overall don’t know if I think ops is particularly impactful.

I see both Abraham and yourself as strong thinkers with expertise in this area, which makes me curious about the apparent disagreement. Meanwhile, the ‘correct’ answer to the question of an ops role’s impact relative to that of a research role should presumably inform many EAs’ career decisions, which makes the disagreement here pretty consequential. I wonder if getting to the ground truth of the matter is tractable? (I’m not sure how best to operationalize the disagreement / one’s starting point on the matter, but maybe something like “On the current margin, I believe that the ratio of early-career EAs aiming for operations vs. research roles should be [number]:1.”)

(I understand that you and Abraham overlapped for multiple years at the same org—Rethink Priorities—which makes me all the more curious about how you appear to have reached fairly opposite conclusions.)

Two caveats on my view:

  • I think I'm skeptical of my own impact in ops roles, but it seems likely that senior roles are harder to hire for in general, which might mean taking one could be more impactful (if you're good at it).
  • I think many other "doer" careers that aren't ops are very impactful in expectation — in particular founding new organizations (if done well or in an important and neglected area). I also think work like being a programs staff member at a non-research org is very much in the "doer" direction, and could be higher impact than ops or many research roles.

Also, I think our views as expressed here aren't exactly opposite — I think my work in ops has had relatively little impact ex post, but that's slightly different than thinking ops careers won't have impact in expectation (though I think I lean fairly heavily in that direction too, just due to the number of qualified candidates for many ops roles).

Overall, I suspect Peter and I don't disagree a ton (though I haven't talked with him about it) on any of this, and I agree with his overall assertion (more people should consider "doer" careers over research careers); I just also think that more people should consider earning to give over any direct work.

Also, Peter hires for tons of research roles, and I hire for tons of ops roles, so maybe this is also just us having siloed perspectives on the spaces we work in?

How does a programs staff role differ from an ops role?

Is there a proposed/proven way of coordinating on the prioritization?

Without a good feedback loop, I can imagine the majority of people just jumping on the same path, which could then run into diminishing returns if there isn't sufficient capacity.

It would be interesting to see at least the number of people at different career stages on a given path. I assume some data should be available from regular surveys. And maybe also some estimates of the capacity of different paths.

And I assume the career coaching services likely have an even more detailed picture including missing talent/skills/experience that they can utilize for more personalized advice.

I don't know the true answer to this confusion, but I have some rough (untested, and possibly untestable) hypotheses I can share:

  • It is really hard to estimate counterfactual scenarios. If you are the project manager (or head of people, or finance lead, or COO), it is really hard to have a good sense of how much better you are than the next-best candidate. Performance in general is hard to measure, but trying to estimate performance of a hypothetical other individual that you have never met strikes me as very challenging. Even if we were to survey 100 people in similar roles at other orgs, the context-specific nature of performance implies that we shouldn't be too confident about predicting how a person should perform at Org A simply from knowing their performance at Org B.
  • I'm not quite sure how to phrase this, but it might be something like "the impact of operations work has high variance," or maybe "good operations results in limiting the downside a lot but does relatively little to increase the upside." Taking a very simplistic example of accounting: if our org has bad accounting, then we don't know how much money we have, we don't keep track of accounts payable, and we have general administrative sloppiness relating to money, which makes decision-making hard. If we have very good accounting, then we have clarity about where our funds are flowing, what we own, and what we owe. Those upsides are nice, but they aren't as impactful (in a positive way) as the downsides are impactful (in a negative way). Phrased in a different way: many operations roles are a cost center rather than a profit center (although this will certainly vary depending on the role and the organization).
  • It might just be a thing of marginal value, with non-operations roles being more impactful (overall, in general), but we still need more good operations people than we currently have.

I have a lot of uncertainty as to the reality of this, but I'm always interested in reading thoughts from people about these issues.

Quick response - the way that I reconcile this is that these differences were probably just due to context and competence interactions. Maybe you could call it comparative advantage fluctuations over time?

There is probably no reasonable claim that advising is generally higher impact than ops or vice versa. It will depend on the individual and the context. At some times, some people are going to be able to have much higher impact doing ops than advising, and vice versa.

From a personal perspective, my advising opportunities vary greatly. There are times where most of my impact comes from helping somebody else because I have been put in contact with them and I happen to have useful things to offer. There are also times where the most obviously counterfactually impactful thing for me to do is research, or some sort of operations work to enable other researchers. Both of these activities have somewhat lumpy impact distributions because they only occur when certain rare criteria are collectively met.

In this case Abraham may have had much better advising opportunities relative to operations opportunities while this was not true for Peter.

One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn’t outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider whether:

  1. They are providing specialized and scarce labor in a high-impact area where their contribution is genuinely advancing the field. This seems more applicable in specialized research than in general management or operations.
  2. They are exceptionally competent, yet the market might not compensate them adequately, thus allowing highly effective organizations to benefit from their undercompensated talent.

I tend to agree more with you on the "doer" aspect—EAs who independently seek out opportunities to improve the world and act on these insights often have a significant impact.

This (a strong default towards earning to give) neglects the importance of value alignment for many EA-aligned orgs.

Having an org that is focused, rather than pulled five different directions is invaluable.

I didn't neglect it - I specifically raised the question of in what conditions EAs occupying roles within orgs vs non-EAs adds substantial value. You assume that having EAs in (all?) roles is critical to having a "focused" org. I think this assumption warrants scrutiny, and there may be many roles in orgs for which "identifying as an EA" may not be important and that using it as a requirement could result in neglecting a valuable talent pool.

Additionally, a much wider pool of people who don't identify as EA could align with the specific mission of an org.

Do you have any ideas or suggestions (even rough thoughts) regarding how to make this change, or for interventions that would nudge people's behavior?

Off the top of my head: A subsidized bootcamp on core operations skills? Getting more EAG speakers/sessions focused on operations-type topics? Various respected and well-known EAs publicly stating that Operations is important and valuable? A syllabus (readings, MOOCs, tutorials) that people can work their way through independently?

I have previously suggested a new podcast that features much more "in the trenches" people than is currently the case with e.g. the 80k podcast, FLI podcast, etc. While listening to edge researchers is more fun than listening to how someone implemented Asana in an efficient way, I think one can make an "in the trenches" podcast equally interesting, if not more so, by telling personal stories of challenges, perseverance and mental health. One example of such is Joey at AIM - from the few times I heard him talk he seems to live an unusually interesting life. I also think a lot of ops people have really cool stories to tell, like people into fish welfare poking around at Greek aquaculture installations trying to get to know their "target market". There must be a ton of good "stories from the field" out there.

Have you listened to 80k Actually After Hours/Off the Clock? This is close to what I was aiming for, though I think we still skew a bit more abstract. 

Yes it is good, but I feel like it is more of an unstructured conversation and more about ideas than lived experiences. So I am thinking of something a bit more prepared, perhaps trying to get some narrative arcs with the struggle, the battle, the victory (or defeat!) and then the epilogue. What was super interesting (and shocking in a negative way!) to listen to is "Going Infinite" - it is essentially an EA story: so rich, so gripping and compelling, and so dramatic. I think something only 10% as dramatic would be interesting to listen to, and there must be stories out there. I think the challenge will be to find the overlap between "juicy stories" and people being willing to tell them - often the most interesting stuff is stuff people are concerned about being public! But I guess it also needs to be something that makes people think ops work sounds interesting, though this could also include examples of how gravely things can go wrong without ops - something I think is one lens to view the FTX scandal through, for example.

I once saw a post (https://www.alignmentforum.org/posts/ho63vCb2MNFijinzY/agi-safety-career-advice) that is specific to AI and details the directions within both research and governance, and I found it useful. Maybe a general educational post like this (but on broader EA topics) would be very helpful.

I guess this is the same dynamic as why movie and sports stars are high status in society: they are highly visible compared to more valuable members of society (and more entertaining to watch). We don't really see much of highly skilled operations people compared to researchers.

I'm reminded of The Innovation Delusion (which I've mentioned a bit previously on the EA Forum: 1, 2), and its ideas of credit, visibility, absence blindness, and maintenance work. Its example of Thomas Edison is good enough that I will copy and paste it here:

Edison—widely celebrated as the inventor of the lightbulb, among many other things—is a good example. Edison did not toil alone in his Menlo Park laboratory; rather, he employed a staff of several dozen men who worked as machinists, ran experiments, researched patents, sketched designs, and kept careful records in notebooks. Teams of Irish and African American servants maintained their homes and boardinghouses. Menlo Park also had a boardinghouse for the workers, where Mrs. Sarah Jordan, her daughter Ida, and a domestic servant named Kate Williams cooked for the inventors and provided a clean and comfortable dwelling. But you won’t see any of those people in the iconic images of Edison posing with his lightbulb.

If I imagine being in a hypothetical role that is analogous to Mrs. Sarah Jordan's role, in which I support other people to accomplish things, am I okay with not getting any credit? Well, like everyone else I have ego and I would like the respect and approval of others. But I guess if I am well-compensated and my colleagues understand how my work contributes to our team's success I would be okay with somebody else being the public face and getting the book deals and getting the majority of the credit. How did senior people at Apple feel about Steve Jobs being so idolized in the public eye? I don't care too much if people in general don't acknowledge my work, as long as the people I care about most acknowledge it.

Of course it would be a lot nicer to be acknowledged widely, but that is generally not how we function. Most of us (unless we specifically investigate how people accomplished things) don't know who Michael Phelps's nutritionist was, nor do we know who taught Bill Gates about computers, nor who Magnus Carlsen's training partners are, nor who Oscar Wilde bounced ideas around with and got feedback from. I think there might be something about replaceability as well. Maybe there are hundreds of different people who could be (for example) a very good nutritionist for Michael Phelps or who could help Magnus Carlsen train, but there are only a handful of people who could be a world-class swimmer or a world-class chess player at that level?

The issue with support roles is that it's often difficult to assess when someone in that position truly makes a counterfactual difference. These roles can be essential but not always obviously irreplaceable. In contrast, it's much easier to argue that without the initiator or visionary, the program might never have succeeded in the first place (or at least might have been delayed significantly). Similarly, funders who provide critical resources—especially when alternative funding isn't available—may also be in a position where their absence would mean failure.

This perspective challenges a more egalitarian view of credit distribution. It suggests that while support roles are crucial, it's often the key figures—initiators, visionaries, and funders—who are more irreplaceable, and thus more deserving of disproportionate recognition. This may be controversial, but it reflects the reality that some contributions, particularly at the outset, might make all the difference in whether a project can succeed at all.

Which of these two things do you mean?

  • operations/management/doer careers should be higher status than they currently are within EA
  • operations/management/doer careers should be higher status than research careers within EA

I do think that the marginal good of additional researchers, journalists, content creators, etc. isn't exactly as high as it is thought to be. But there's an obvious rational-actor (collective action problem?) explanation: other people may not be needed, but me, with my idiosyncratic ideologies? Yep!

This also entails that the less representative an individual is of the general movement, the higher the marginal value for him in particular to choose a research role.

I suspect it varies by cause area. In AI safety, the pool of people who can do useful research is smaller than the pool of people who could do good ops work (which is more likely to include EAs who prefer a different cause area but are happy to just have an EA ops job).

Just wanted to quickly say that I hold a similar opinion to the top paragraph and have had similar experiences in terms of where I felt I had the most impact.

I think that the choice of whether to be a researcher or do operations is very context dependent.

If there are no other researchers doing something important, your competitive advantage may be to do some research, because that will probably outperform the counterfactual (no research) and may also catalyze interest and action within that research domain.

However if there are a lot of established organizations and experienced researchers, or just researchers who are more naturally skilled than you already involved in the research domain, then you can often have a more significant impact by helping to support those researchers or attract new researchers.

One way to navigate this is to have what I call a research hybrid role, where you work as a researcher but allocate some flexible amount of time to more operations / field building activities depending on what seems most valuable.

Did the research experience help you be a better manager and operator from within research organizations?

I feel like getting an understanding by doing some research could be helpful, and you could probably gain generalizable/transferable skills, but I'm just speculating here.

I'm planning to fail as a researcher in the next few years and my reasons are simple:

  1. It opens up opportunities for impactful positions outside of EA, in academia but also everywhere else. I think the most impactful act in the average EA's life could be channeling EA ideas further into society, and I tend to think academic roles rapidly amplify this effect, as you easily attract an audience, communicators and decision-makers, and come into contact with ambitious young people.
  2. Creating one more EA position seems far more impactful than being, let's say, 10 % more impactful than someone else in an EA role, which seems achievably ambitious. And it's hard for me to see many other ways to get paid by non-EAs for directly valuable work - but perhaps I'm missing something?
  3. It seems hard to try to fail in ops first, as I've heard that it's hard to get back into academia after a break.

I think it might be fine if people have a genuine interest in research (though it has to be intrinsic motivation), which will make their learning fast with more devoted energy. But overall I see a lot of value in operations/management/application work, as it gives people opportunities to learn how to turn research into real impact, and how tricky the real world and applications can sometimes be.

This could be a long slog, but I think it could be valuable to identify the top ~100 OS libraries and assess their level of resourcing, to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.

I'm not sure if such a study would naturally also be helpful to potential attackers, perhaps even more helpful to attackers than defenders, so you might need to be careful about whether / how you disseminate the information.

My sense is that 100 is an underestimate for the number of OS libraries as important as that one. But I'm not sure if the correct number is 1k, 10k or 100k.

That said, this is a nice project, if you have a budget it shouldn't be hard to find one or a few OS enthusiasts to delegate this to.

Relevant XKCD comic.

To further comment, this seems like it might be an intractable task, as the term "dependency hell" kind of implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd identify their level of resourcing. Number of contributors? Frequency of commits?
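For what it's worth, here is a minimal sketch of how one might start operationalizing "level of resourcing", assuming GitHub's public REST API and the Python `requests` library; the repo list is purely illustrative, and both counts are capped lower bounds rather than real measurements:

```python
# Rough sketch: approximate the "resourcing" of open-source projects via the
# GitHub REST API, using contributor counts and recent commit volume as proxies.
import datetime

import requests

API = "https://api.github.com"
HEADERS = {"Accept": "application/vnd.github+json"}  # add an auth token for higher rate limits
REPOS = ["tukaani-project/xz", "openssl/openssl"]    # illustrative examples only


def contributor_count(repo: str) -> int:
    """Number of contributors, capped at 100 because we only make one unpaginated request."""
    r = requests.get(f"{API}/repos/{repo}/contributors",
                     params={"per_page": 100, "anon": "true"},
                     headers=HEADERS, timeout=30)
    r.raise_for_status()
    return len(r.json())


def recent_commit_count(repo: str, days: int = 90) -> int:
    """Commits in the last `days` days, again capped at 100 by a single request."""
    since = (datetime.datetime.now(datetime.timezone.utc)
             - datetime.timedelta(days=days)).isoformat()
    r = requests.get(f"{API}/repos/{repo}/commits",
                     params={"since": since, "per_page": 100},
                     headers=HEADERS, timeout=30)
    r.raise_for_status()
    return len(r.json())


if __name__ == "__main__":
    for repo in REPOS:
        print(f"{repo}: >={contributor_count(repo)} contributors, "
              f">={recent_commit_count(repo)} commits in the last 90 days")
```

Even then these are noisy proxies: a project can have many drive-by contributors but only one or two trusted maintainers, which seems closer to the XZ failure mode than a low raw contributor count.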

Also, with your example of the XZ attack, it's not even clear who made the attack. If you suspect it was, say, the NSA, would you want to thwart them if their purpose was to protect American interests? (I'm assuming you're pro-American) Things like zero-days are frequently used by various state actors, and it's a morally grey question whether or not those uses are justified.

I also, as a computer scientist and programmer, have doubts you'd ever be able to 100% prevent the risk of zero-days or something like the XZ attack happening in open source code. Given how common zero-days seem to be, I suspect there are many in existing open source work that still haven't been discovered, and that XZ was just a rare exception where someone was caught.

Yes, hardening these systems might somewhat mitigate the risk, but I wouldn't know how to evaluate how effective such an intervention would be, or even, how you'd harden them exactly. Even if you identify the at-risk projects, you'd need to do something about them. Would you hire software engineers to shore up the weaker projects? Given the cost of competent SWEs these days, that seems potentially expensive, and could compete for funding with actual AI safety work.

The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.

I wonder if anyone else will get a thinly veiled counterpart -- given that the lead character of the show seems somewhat based on MacKenzie Scott, this seems like it may be a recurring thing for the show.

If we are taking Transformative AI (TAI) to be creating a transformation at the scale of the industrial revolution ... has anyone thought about what "aligning" the actual 1760-1820 industrial revolution might've looked like or what it could've meant for someone living in 1720 to work to ensure that the 1760-1820 industrial revolution was beneficial instead of harmful to humanity?

I guess the analogy might break down though given that the industrial revolution was still well within human control but TAI might easily not be, or that TAI might involve more discrete/fast/discontinuous takeoffs whereas the industrial revolution was rather slow/continuous, or at least slow/continuous enough that we'd expect humans born in 1740 to reasonably adapt to the new change in progress without being too bewildered.

This is similar to, but I think still a bit distinct from, asking the question "what would a longtermist EA in the 1600s have done?" ...a question I still think is interesting, though many EAs I know are not all that interested in it, probably because our time periods are just too disanalogous.

Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.
