This is a special post for quick takes by jessica_mccurdy🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Quick take on Burnout

Note: I am obviously not an expert here, nor do I have much first-hand experience, but I thought it could be useful for people I work with to know how I currently conceptualize burnout. I was then encouraged to post it on the forum. This is based on around four cases of burnout that I have seen (at varying levels of proximity) and conversations with people who have seen significantly more.

  • Different Conceptions of Burnout
    • Basic conception that people often have: working too hard until energy is depleted.
    • Yes, working too hard can lead to exhaustion, but there's a difference between exhaustion and burnout.
    • Exhaustion vs. Burnout
      • Exhaustion:
        • The result of working very hard and/or long hours, possibly including sleep deprivation or just brain fog.
        • Can often be resolved with a short break or vacation (e.g., one week).
      • Burnout:
        • More pervasive and affects many areas of life and work. While it shares many of the physical symptoms of exhaustion, it runs deeper.
        • A short vacation isn't sufficient to resolve it.
  • Core Feelings Tied to Burnout
    • Burnout is often tied to more core feelings like motivation, recognition, and feeling good about the work you're doing. It has more to do with motivation and feeling valued than with pure sleep deprivation or lack of rest. If someone is unsure of the value of their work and isn't getting much recognition, especially while also working really hard, that can really get into their head, and it feels like a recipe for burnout.
  • Importance of Motivation
    • This is why I stress the value of motivation so much.
    • Nuance: we should distinguish motivation from being overly enthusiastic about programs.
      • Jessica's take is that we should have set times for re-evaluating the value of programs. Having set evaluation times helps reduce constant worry about program value while maintaining our ability to keep a critical eye on whether we are having a large impact.
    • To some extent, motivation is a very moldable thing: if you want to get more motivated, you can (though it often takes help from others, like your manager and team).
  • Quick note
    • This isn't me advocating for exhaustion on the grounds that it isn't burnout. I think exhaustion can be very counterproductive and makes future hours less productive.

My main point here is that I don't think our LFG / work-hard culture is a recipe for burnout. I think being uncertain of the value of our programs, facing many internal structural changes, and not staying on top of motivation can be. This is part of why I am excited about the M&E work we are doing, people doing tours of duty, and people tracking motivation and actively valuing it.

 

Jessica's addition in Dec. 2024:

  • Getting sick more often than usual is an indicator to be aware of. It can lead to a spiral: you get sick, get less done, get more stressed, feel like you are not doing well enough or not feeling good about your work, and that stress makes you more likely to get sick again.


(I will add for the forum that right now I am feeling really good about the value of our programs, but it's always good to approach programs critically to ensure you are having the most impact :) )

Relatedly, I think in many cases burnout is better conceptualised as depression (perhaps with a specific work-related etiology). 

Whether burnout is distinct from depression at all is a matter of controversy within the literature.

I think this has the practical implication that people suffering from burnout should at least consider whether they are depressed, and consider treatment options with that in mind (e.g. antidepressants, therapy).

There's a risk that the "burnout" framing limits the options people consider (e.g. to rest or changes to their workplace). At the same time, there's a risk that people underestimate the extent to which environmental factors contribute to their depression, so changing their work environment should also be considered if a person does conclude they might be depressed.

Published: Who gives? Characteristics of those who have taken the Giving What We Can pledge

The paper I worked on with Matti Wilks for my thesis was published! Lizka successfully did her job and convinced me to share it on the forum. 

As a heads-up, I'm sharing this here, but I probably won't engage with it (or comments about it) too seriously: this was a project I worked on a few years ago, and it's not super relevant to me anymore.
