Mental health advocate and autistic nerd with lived experience. Working on my own models of mental health, especially around practical paths to happiness, critique of popular self-help & therapy, and neurodivergent mental health. 50% chance of pivoting to online coaching in 2025.
IG meme page: https://www.instagram.com/neurospicytakes/
This is an important question, which I left out because my full answer is extremely nuanced and it isn't central to my intention for this post (to stimulate discussion about the mental health of the community).
Here's a brief version of my response:
A good maximizer would know to take mental health into account and be good at it. However, it's very difficult to figure out what good mental health actually requires. Good mental health needs more than "the minimum amount of self-care", and a maximizer will always be asking whether they could get away with doing less self-care. I argue that maximization as a strategy will always be suboptimal when either of two conditions is present (and I believe they often are): when self-care is less visible and measurable than the other terms in the maximization equation, and when some of the requirements for good mental health necessarily involve not maximizing. For example: embracing failure and imperfection, trusting your body, and giving yourself permission to adjust your social/moral/financial obligations at any time are not compatible with any rationality-based maximization. (Wild thought: maybe they could be compatible with "irrational maximization"?) I believe I can refute pretty much any angle resembling "but the maximizer could just bootstrap based on your criticism and be better/smarter about maximization", but there are too many forms of this to pre-emptively address here.
These two strategies are worlds apart, despite seeming to share a common interest: treating self-care as a task necessary for impact versus treating impact as an important expression within self-care. I advocate for the second approach, and I believe that for some people it can lead to greater impact AND greater happiness.
Exploring what's helpful is definitely an interesting angle that generates ideas. One that comes to mind is how EA communicates around the Top Charities Fund, basically "let us do the heavy lifting and we'll do our best to figure out where your donations will have impact". This has two attributes I particularly like. First, it makes it maximally easy for a reader to accept a TLDR and feel good about their choice (which is generally positive for a non-EA donor, independent of how good or bad TCF's picks are). Second, the messaging is more neutral and a bit closer to invitational consent culture. Hardcore EA is more likely to imply that you "should" think and care about whether TCF is actually a good fund and decide for yourself, but the consent-culture version might be psychologically beneficial to both EAs and non-EAs while achieving the same or better numeric outcomes.
Does anyone know of a low-hassle way to invoice for services such that the payment goes to a third-party charity rather than to me? It could well be an EA charity if that makes it easier. I'm hoping for something slightly more structured than "I'm not receiving any pay for my services, but I'm trusting you to donate X amount to this charity instead".
I used to frequently come across a certain acronym in EA, used in contexts like "I'm working on ___" or "looking for other people who also use ___". I flagged it mentally as a curiosity to explore later, but ended up forgetting what the acronym was. I'm thinking it might have been CFAR, as in the CFAR workshops? If so, 1) what happened to them, and 2) was it common for people to work through the material themselves, self-paced?
An example of invested but not attached: I'm investing time/money/energy into taking classes about subject X. I chose subject X because it could help me generate more value Y that I care about. But I'm not attached to getting good at X, I'm invested in the process of learning it.
I feel more confused after reading your other points. What is your definition of rationality? Is this definition also what EA/LW people usually mean? (If so, who introduced this definition?)
When you say rationality is "what gets you good performance", that seems like it could lead to arbitrary, circular reasoning about what is and isn't rational. If I exaggerate this concern and define rationality as "what gets you the best life possible", that's not a helpful definition, because it leads to the unfalsifiable claim that rationality is optimal while providing no practical insight.
I've seen EA writing (particularly about AI safety) that goes something like:
I know X and Y thought leaders in AI safety, they're exceptionally smart people with opinion A, so even though I personally think opinion B is more defensible, I also think I should be updating my natural independent opinion in the direction of A, because they're way smarter and more knowledgeable than me.
I'm struggling to see how this update strategy makes sense. It seems to have merit when X and Y know or understand things that literally no other expert knows, but in every other scenario that comes to mind, it seems neutral at best, and otherwise worse than totally disregarding the "thought leader status" of X and Y.
Am I missing something?
Two things:
1. I think of "Invested but not attached [to the outcome]" as a Pareto-optimal strategy that is neither attached nor detached.
2. I disagree with the second-to-last paragraph, "Mud-dredging does improve your rationality, however. That's why betting works." If you're escaping to the mountains, then yes, coming down from the mountain will give you actual data and some degree of accountability. But it's not obvious to me that 1) mud-dredging increases rationality, or 2) whatever kind of rationality mud-dredging might increase is actually more beneficial than harmful to long-run performance. Furthermore, among the mental health and productivity frameworks out there, creativity is almost universally valued as more important to foster than rationality when it comes to performance/success, so I'm curious where you're coming from.
I hold the same view towards "non-naive" maximization being suboptimal for some people. Further clarification in my other comment.
I have concerns about the idea that a healthy-seeming maximizer can prove that maximization is safe. In mental health, we often come across "ticking time bomb" scenarios, which I'm using here as a sort of Pascal's mugging (except that there's plenty of knowledge and evidence that this mugging does in fact take place, and not uncommonly). What if someone merely appears healthy, and that appearance is concealing (and contributing to) a serious emotional breakdown later in their life, potentially decades on? This process isn't mysterious or free of warning signs, but what is obvious to mental health professionals may not be obvious to EAs.
I don't reject the possibility that healthy maximizers can exist. (Potentially there is a common ground where a rationalist may describe a plausible strategy as maximization, and I, as a mental health advocate, would say it's not, and our disagreement in terminology is actually consistent with both our frameworks.) If EA continues to endorse maximizing, how about we at least do it in a way that doesn't directly align with known risks of ticking time bombs?