Bio

Mental health advocate and autistic nerd with lived experience. Working on my own models of mental health, especially around practical paths to happiness, critique of popular self-help & therapy, and neurodivergent mental health. 50% chance of pivoting to online coaching in 2025.

IG meme page: https://www.instagram.com/neurospicytakes/

How others can help me

I'm looking to:

  • (Virtually) meet fellow neurodivergent or creative people
  • Interview people working within mental health
  • Share personal stories and perspectives about happiness

How I can help others

I can help you by:

  • Troubleshooting a barrier to progress in almost any domain
  • Creative brainstorming or giving feedback
  • Giving a talk on specific topics within mental health, such as happiness or self-care

Comments

Answer by VictorW3

I love NVC for this. To pick one example: instead of expressing moral judgments on actions and decisions as bad or wrong (which can come across as judgmental and put people off whatever preference you wanted to communicate), you make your value preference explicit. E.g. rather than saying “violence is wrong,” we might say “I value the resolution of conflicts through safe and peaceful means.”

Another concept I love is consent culture applied to information and discussion: Would you like to hear more about X? Are you open to hearing feedback on Y? To discussing Z while I play devil's advocate? When I receive unsolicited advice and "impact interrogation" at EAGx events (pretty much always during ad-hoc or speed-meeting conversations), it comes across as adversarial and makes me feel unsafe at those conferences.

I hold the same view about "non-naive" maximization: it is still suboptimal for some people. There is further clarification in my other comment.

I have concerns about the idea that a healthy-seeming maximizer can prove that maximization is safe. In mental health we often come across "ticking time bomb" scenarios, which I'm invoking here as a sort of Pascal's mugging (except that there is plenty of knowledge and evidence that this mugging does in fact take place, and not uncommonly). What if someone merely appears to be healthy, and that appearance is concealing and contributing to a serious emotional breakdown later in their life, potentially decades on? This process isn't mysterious or free of obvious signs, but what is obvious to mental health professionals may not be obvious to EAs.

I don't reject the possibility that healthy maximizers can exist. (Potentially there is a common ground where a rationalist may describe a plausible strategy as maximization, and I, as a mental health advocate, would say it's not, and our disagreement in terminology is actually consistent with both our frameworks.) If EA continues to endorse maximizing, how about we at least do it in a way that doesn't directly align with known risks of ticking time bombs?

Longer quotes like these are narrative descriptions of the types of things I see and hear. Do you have any ideas on how to distinguish them from word-for-word quotations?

This is an important question, which I left out because my full answer is extremely nuanced and it isn't central to my intention for this post (to stimulate discussion about the mental health of the community).

Here's a brief version of my response:

A good maximizer would know to take mental health into account and would be good at doing so. However, it is very difficult to figure out what good mental health actually requires. Good mental health needs more than "the minimum amount of self-care," yet maximizers will always be asking whether they could be doing less self-care. I argue that maximization as a strategy will be suboptimal whenever either of two conditions is present (and I believe they often are): when self-care is less visible and measurable than the other parts of the maximization equation, and when the requirements for good mental health include things that necessarily involve not maximizing. For example, embracing failure and imperfection, trusting your body, and giving yourself permission to adjust your social/moral/financial obligations at any time are not compatible with any rationality-based maximization. (Wild thought: maybe they could be compatible with "irrational maximization"?) I believe I can refute pretty much any angle resembling "but the maximizer could just bootstrap off your criticism and be better/smarter about maximization," but there are too many forms of this to pre-emptively address here.

These two strategies are worlds apart, despite seeming to share a common interest: treating self-care as a task necessary for impact versus treating impact as an important expression within self-care. I advocate for the second approach, and I believe that for some people it can lead to greater impact AND greater happiness.

Exploring what's helpful is definitely an interesting angle that generates ideas. One idea that comes to mind is how EA communicates around the Top Charities Fund: basically, "let us do the heavy lifting and we'll do our best to figure out where your donations will have impact." This has two attributes I particularly like. Firstly, it makes it maximally easy for a reader to accept a TLDR and feel good about their choice (which is generally positive for a non-EA donor regardless of how good or bad TCF's picks are). Secondly, I think the messaging is more neutral and a bit closer to invitational consent culture. Hardcore EA is more likely to imply that you "should" think carefully about whether TCF is actually a good fund and decide for yourself, but the consent-culture version might be psychologically beneficial to both EAs and non-EAs while achieving the same or better numeric outcomes.

Does anyone know of a low-hassle way to invoice for my services such that a third-party charity gets paid instead of me? It could well be an EA charity if that makes it easier. I'm hoping for something slightly more structured than "I'm not receiving any pay for my services, but I'm trusting you to donate X amount to this charity instead."

I used to frequently come across a certain acronym in EA, used in contexts like "I'm working on ___" or "looking for other people who also use ___". I flagged it mentally as a curiosity to explore later, but ended up forgetting what the acronym was. I'm thinking it might be CFAR, which seems to have referred to CFAR workshops? If so: 1) what happened to them, and 2) was it common for people to work through the material themselves, self-paced?

I identify as an anti-credentialist in the sense that I believe ideas can (under ideal circumstances) be considered on merit alone, regardless of how unreliable or bad the source of the idea is. Isn't credentialism basically a form of ad hominem attack?

An example of invested but not attached: I'm investing time/money/energy into taking classes about subject X. I chose subject X because it could help me generate more value Y that I care about. But I'm not attached to getting good at X, I'm invested in the process of learning it.

I feel more confused after reading your other points. What is your definition of rationality? Is this definition also what EA/LW people usually mean? (If so, who introduces this definition?)

When you say rationality is "what gets you good performance," that seems like it could lead to arbitrary circular reasoning about what is and isn't rational. If I exaggerate this concern and define rationality as "what gets you the best life possible," that's not a helpful definition, because it leads to the unfalsifiable claim that rationality is optimal while providing no practical insight.

I've seen EA writing (particularly about AI safety) that goes something like this:
"I know X and Y thought leaders in AI safety; they're exceptionally smart people with opinion A, so even though I personally think opinion B is more defensible, I should update my natural independent opinion in the direction of A, because they're way smarter and more knowledgeable than me."

I'm struggling to see how this update strategy makes sense. It seems to have merit when X and Y know or understand things that literally no other expert knows, but in all other scenarios that come to mind it seems neutral at best, and otherwise a worse strategy than totally disregarding X and Y's "thought leader" status.

Am I missing something?
