Some kinds of mistakes are very visible/salient. Mistakes in execution are often salient. When you organize an event and very few people show up, it’s pretty clear that you didn’t do enough advertising, or that your advertising was ineffective, or that the thing you were organizing an event around wasn’t interesting enough to other people in the first place (or something else, gotta hedge :D). Mistakes that are publicly visible (or at least visible to some others) are also (way) more salient. I have spread myself too thin, overpromised and underdelivered, and let others down. Other people knowing the mistakes I’ve made makes them quite salient.
 

Other kinds of mistakes are harder to notice, sometimes so much so that we don’t even realize they’re mistakes. These often relate to opportunity cost, which is the difference between the thing you did and the best thing you could have done instead. When I waste a few (high-energy/high-focus) hours, or sometimes a whole day (rip), on YouTube, the opportunity cost of what I could have done with that day is often not very salient. (This specifically is no longer an issue, thanks Cold Turkey!) It’s really sad to think about how much more good I could have accomplished with all the time I spent watching tennis and YouTube, or how much more effective I’d be if I’d sufficiently invested in my productivity habits and mental/physical health at a younger age.
 

Strategic mistakes are often not salient (h/t Nate Thomas). It might not be very clear that you’re making a mistake if you’re doing something really well but are focusing on the wrong thing entirely - e.g. getting really good at performing surgeries for Kaposi’s sarcoma instead of optimizing education about HIV/AIDS for high-risk groups (see here for more). (Yes, this is also an example of opportunity cost.)
 

Doing things is hard. When you do things, you often subject yourself to lots of potential scrutiny. If you try and fail, your failure is salient, and often visible to (some subset of) the world. If you don’t try at all, no one blames you for not trying. I’d love for us to be more cognizant of strategic mistakes/opportunity cost/other low-salience things, and to hold ourselves to higher standards. Though this post is focused on mistakes and correcting them, we should reward each other for pushing ourselves (which can look like running experiments to find out how to optimize our productivity, determining how much and what kinds of work we can do sustainably, getting coaching, etc.), and for actually trying, even if we fail.
 

So how can we make these invisible mistakes more salient, and integrate these ideas into how we spend our time and make decisions?

  • Internalizing the concept of opportunity cost and the extreme importance of time has made the cost of wasting my time much more salient to me. Estimating (with lots of uncertainty) how much good I can do per hour (e.g. in terms of lives saved, OpenPhil dollars spent, etc) has had a similar effect.
  • Thomas Kwa’s post Effectiveness is a Conjunction of Multipliers makes the importance of strategic mistakes quite clear. Brief summary: Not considering one 10x multiplier, even when I correctly identify five others, means only having 10% of the impact I could’ve had if I had considered it. Perhaps the most important examples of how thinking about strategy has been important for me have been:
    • Reading, talking to people, and thinking a lot about cause prioritization, and regularly asking myself “What might be 10 (or 1000) times more impactful than what I currently work on?”
  • Figure out ways to spend your time and resources more effectively. Figure out which kinds of work you can excel at (I recommend checking out this post for ideas). Figure out what productive activities you can do while tired (e.g. some people get energy from and enjoy talking to people, which can be quite valuable, or naps/walks instead of internet binges). Figure out which of the things you spend time on can be outsourced, or take much less time (e.g. Instacart vs. buying groceries, delivery vs. cooking, Ubers vs. walking/public transit, etc).
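The multiplier arithmetic from Thomas Kwa’s post can be sketched in a few lines (the numbers here are hypothetical, purely to make the “conjunction” point concrete):

```python
# Toy illustration of "effectiveness is a conjunction of multipliers".
# Multiplier values are hypothetical, just for the arithmetic.
from math import prod

multipliers = [10, 10, 10, 10, 10, 10]  # six hypothetical 10x multipliers

full_impact = prod(multipliers)       # capture all six: 10**6
missed_one = prod(multipliers[:-1])   # miss just one:   10**5

print(missed_one / full_impact)  # -> 0.1, i.e. 10% of the possible impact
```

Because impact is a product rather than a sum, dropping any single factor divides the total, no matter how well you do on the rest.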

Thomas’ post, and Alkjash’s Pain is not the unit of Effort, are also helpful for reminding myself that making the most of my time is not the same as working myself to the brink of burnout. I really like Alkjash’s definition of “actually trying” from the aforementioned post: “using all the tools at your disposal, getting creative, throwing money at the problem to make it go away” and importantly, not “subjecting yourself to as much suffering as you’re able to endure to prove to yourself and others that you’re a virtuous studious workhorse.”


Doing good sustainably is important, but given the stakes of the problems we’re trying to solve, it really seems critically important to push ourselves to our (long-term, sustainable) limits, and be as strategic as we can to do the most good.


 

Comments


Caro

I also think that having friends, colleagues, and coaches who are very honest with you is extremely important, because invisible mistakes are sometimes especially hard to spot. Maybe you got a very fancy-looking new position, but it's far less good than a more hidden but higher-impact job you could be doing instead. The rest of the world will tell you that it's great, so you need honest friends, and to be in a sufficiently good mental space, to receive this feedback.

Reviewing your decision on your own every X months and trying to make predictions about what "really good impact" looks like may be a good idea.

Thanks for synthesizing a core point that several recent posts have been getting at! I especially want to highlight the importance of creating a community that is capable of institutionally recognizing + rewarding + supporting failure.

What can the EA community do to reward people who fail? And - equally important - how can the community support people who fail? Failing is hard, in no small part because it's possible that failure entails real net negative consequences, and that's emotionally challenging to handle.

With a number of recent posts around failure transparency (one, two, three), it seems like the climate is ripe for someone to come up with a starting point.

I really appreciate what this post is driving at.

Adding on to what you're saying: even instrumentally speaking, a human working so hard that they are edging toward (or past) burnout is not nearly as effective at assessing their own landscape (i.e. seeing less visible mistakes). Especially because the stakes of the problems we are trying to solve can be so extraordinarily high, doing good sustainably becomes critical as a basic practice. Helping each other do better can be a thing we learn to do better too.

A culture that emphasizes being a workhorse can be liable to create the "invisible mistake" of incentivizing people within that culture to make more "invisible mistakes". The more we can point this out to each other (like what this post is doing), the more we can preserve/restore our capacity to zoom in and out flexibly and do our work well!

I recently listened to the podcast Life Kit on NPR in which Dr. Anna Lembke said that going cold turkey from an addiction (if that is safe) is an effective way of reorganizing the brain. She said this is true because our brains have evolved in environments with much scarcer resources than we have today and so are being overloaded with too much dopamine and pleasure by everything we have around us nowadays.

Daydreaming itself may not be counterproductive. Daydreaming can be a way to adaptively take a break. It may enable more productive work by avoiding burnout. 

I constantly feel attuned to how well my time is being spent. Because there are so many things to keep track of during the day, and because my consciousness is not at its peak all day, I worry about misuses of my time snowballing out of my control.

Spotting an invisible mistake might be more advantageous than realizing a visible mistake, because spotting an invisible mistake entails intrinsic motivation, while realizing a visible mistake might entail public pressure, which can lessen the effectiveness of outcomes (by involving shame, a tendency to conform, etc.).

An invisible mistake I have made recently is not using an obvious and easier/faster/more efficient way of doing something.

This post made me think about the idea that we are unknowingly committing a moral catastrophe. Invisible mistakes seem to me to be the support structure of a moral catastrophe taking place. Because they would be invisible to society, they would have free rein to move society in this or that direction. In that case, focusing on invisible mistakes should probably take much higher priority than focusing on visible mistakes.


 

Other invisible mistakes I make are poor planning (starting with a vague vision of my plan that doesn't account for everything, which can lead to it not turning out as I expected, or failing in some way in the long term after it is implemented because of factors that became relevant later on), overestimating my endurance for some manual and automatic task (such as driving somewhere) or my ability to tolerate a certain condition (like going without food for a while), and overworking myself at the unintended expense of accuracy.

 


 
