Today I talked with a friend who just started a new job at an organization focused on existential risk. Two pieces of advice I gave:

  • This is a fast-moving, ambitious organization where people have short AI timelines. It is not trying to guard you against burnout. You have to be in charge of deciding how much to guard yourself there. (It doesn't really matter which organization this is - if that description applies, the advice probably does too.)

  • This is a setting where work, housing, dating, friendship, and social scene are closely entwined for a lot of people. When you're deciding how entwined they should be, consider how past things have gone for you: if you've had acrimonious breakups or clashes with housemates or coworkers, probably err on the side of keeping things more separate. Also consider that people you're not very entwined with now might be different in six months or a year (when you end up working in the same chain of management, etc.)

Comments (4)



It is not trying to guard you against burnout. You have to be in charge of deciding how much to guard yourself there. (It doesn't really matter which organization this is - if that description applies, the advice probably does too.)

This sounds fairly suboptimal to me, no? 🤔

  • Even if you expect AGI in 5 years, taking care of employees' mental health seems really useful and doesn't significantly trade off against performance.
  • I'm worried that if the people who have a lot of responsibility over our future are not in their best mental health, it will trade off against the quality of their decisions.

Things like "do you work 40 vs. 60 hours a week" are not obvious choices here - longer hours probably increase the risk of burnout, but you also get a lot more work done for as long as it lasts. In fields like corporate law and medical residency, people work long hours for years (though especially in medicine, there are definite quality-vs-quantity tradeoffs).

I agree that how long you work is not an obvious tradeoff. I was responding to the "[The EA org] is not trying to guard you against burnout" part. A rephrasing of that sentence might be "The EA org is doing nothing to prevent people from burning out", right?

I'm quite sceptical that an EA org is making the right call by spending zero effort on preventing very common and devastating mental health issues like burnout. This doesn't have to mean telling employees to work less; it might mean:

  • having managers monitor mental health
  • telling employees to take a break when they are at risk of burnout
  • providing resources for mental health support, and coaching that helps maximize sustainable productivity

I'm sure they're not doing literally nothing to prevent burnout, but it's not a high priority for them, and when it trades off against something else (like taking a break, or using manager capacity on supporting mental health) I expect burnout prevention will often come behind other priorities.
