
kuhanj

2345 karma · Joined

Bio

Working on strengthening democracy and EA community building. Please DM me if you're interested in contributing to the above. Anonymous feedback form: https://www.admonymous.co/kuhanj

Comments (49)

Yeah, fair point. Maybe this is just reference class tennis, but my impression is that a majority of people who consider themselves EAs aren't significantly prioritizing impact in their career and donation decisions. I agree, though, that for the subset of EAs who do, "heroic responsibility"/going overboard can be fraught.

Some things that come to mind include how often EAs seem to work long hours or on weekends; how willing EAs are to do higher-impact work even when salaries are lower, or when the work is less intellectually stimulating or more stressful; how many EAs are willing to donate a large portion of their income; and how many EAs think about prioritization and population ethics rigorously. I'm very appreciative of how much more often I see these in the EA world than outside it, and I realize the above are unreasonable to expect from people.

Strong agree. There are many more tractable, effective opportunities than people realize. Unfortunately, many of these can't be discussed publicly. I'm hosting an event at EAG NYC on US democracy preservation Saturday at 4pm, and there will be a social near the venue right after at 5. I'd love for conference attendees to join! Details will be on Swapcard. 

While I really like the HPMOR quote, I don't really resonate with heroic responsibility, or with the "Everything is my fault" framing. Responsibility is a helpful social coordination tool, but it doesn't feel very "real" to me. I try to take the most helpful/impactful actions, even if they don't seem like "my responsibility" (while being cooperative, not unilateral, and within reasonable constraints).

I'm sympathetic to the idea that taking on heroic responsibility causes harm in certain cases, but I don't see strong enough evidence that it causes more harm than good. The examples of moral courage from my talk all seem like examples of heroic responsibility with positive outcomes. The converses of your bullet points also generally seem more compelling to me:

1) It seems more likely to me that people taking too little responsibility for making the world better has caused a lot more harm (e.g. billionaires not doing more to reduce poverty, factory farming, climate change, and AI risk, or to improve the media/disinformation landscape and political environment). The harm is just much less visible, since these are mostly failures of omission rather than execution errors. It seems obvious to me that the world could be much better off today, and that the trajectory of the future could look much better than it does right now.

2) Not really a converse, but I don't know of anyone leaving an impactful role because they can't see how it will solve everything. I've never heard of anyone whose bar for taking on a job is "must be able to solve everything."

3) I see tons of apathy, greed, laziness, and inefficiency leading to worse outcomes. The world is on fire in various ways, but the vast majority of people don't act like it.

4) Overvaluing conventional wisdom also causes tons of harm. How many well-resourced people never question general societal ethical norms (e.g. around the ethics of killing animals for food, how much to donate, or how much to prioritize social impact in your career compared to salary)?

5) I'd argue EAs (and humans in general) are much more prone to prioritizing higher-probability/certainty, lower-EV options over lower-probability, higher-EV options (GiveWell donations over pro-global-health USG lobbying or political donations feels like a likely example). It's very emotionally difficult to do something that has a low chance of succeeding. AI safety does seem like a strong counterexample in the EA community, but I'd guess the community's prioritization of AI safety, and the specific work people do within it, has more to do with intellectual interest and its high status in the community than with rigorous impact-optimization.

Two cruxes for whether to err more in the direction of doing things the normal way: 1) How well you expect things to go by default. 2) How easy it is to do good vs. cause harm. 

I don't feel great about 1), but I honestly feel pretty good about 2), largely because I think doing common-sense good things tends to actually be good, and doing galaxy-brained, ends-justify-the-means things that seem bad to normal people (like committing fraud or violence) is usually actually bad.

Thank you for the kind words Jonas!

Your comment reminded me of another passage from one of my favorite Rob talks, Selflessness and a Life of Love:

"Another thing about the abolitionist movement is that, if you look at the history of it, it actually took sixty or seventy or eighty years to actually make an effect. And some of the people who started it didn’t live to see the fruits of it. So there’s something about this giving myself to benefit others. I will never see them, I will never meet them, I will never get anything from them, whether that’s people or parts of the earth. And having this long view. And somehow it cannot be, in that case, about the limited self. It cannot be, because the limited self is not getting anything out of it. [...] But how might we have this sense of urgency without despair? Meeting the enormity of the suffering in the world with a sense of urgency in the heart, engagement in the heart, but without despair. How can we have, as human beings, a love that keeps going no matter what? And we call that ‘equanimity.’ It’s an aspect of equanimity, that it stays steady no matter what. The love, the compassion stays steady. [...] If we’re, in the practice, cultivating this sense of keeping the mind up and bright, and it’s still open, and it’s still sensitive, and the heart is open and receptive, but the consciousness is buoyant, that means it won’t sink when it meets the suffering in the world. The compassion will be buoyant."

Thanks Will! Our first chat back at Stanford in 2019 about how valuable EA community building and university group organizing are played an important role in me deciding to prioritize it over the following several years, and I'm very grateful I did! Thanks for the fantastic advice. :)

Taking uni organizing really seriously was upstream of MATS, EA Courses/Virtual Programs, and BlueDot (shoutout to Dewi) getting started, among other things. IMO this work is extremely valuable and heavily under-prioritized in the community compared to research. Group organizing can be quite helpful for training communication skills, entrepreneurship, agency, grit, intuitions about theories of change, management, networking/providing value to other people, and general organization/ability to get things done, along with many other flexible skills that, from personal experience, can significantly increase your impact.

I wrote up some arguments for tractability in my forum post about the tractability of electoral politics here. I also agree with this take about neglectedness being an often unhelpful heuristic for figuring out what's most impactful to work on. People I know who have worked on electoral politics have repeatedly found surprising opportunities for impact.

Not uncommon, and I'm happy to chat about efforts to change this. (This offer is open to other forum readers too, please feel free to DM me). 

Not that I know of! I can ask if they're open to something in this vein.

How long does the happiness continue when you're not meditating? A range of times would be helpful.

Initially the afterglow would last 30 minutes to a few hours. Over time it's gotten closer to a default state unless various stressors (usually work-related) build up and I don't spend enough time processing them. I've been trading off higher mindfulness to get more work done and am not sure if I'm making the right trade-offs, but I expect it'll become clearer over time as I get more data on how my productivity varies with my mindfulness level. 

How long does it take you to get into the state each time?

When my mindfulness levels are high it can be almost instantaneous and persist outside of meditation. When it's not, I can still usually get to a fairly strong jhana within 30 minutes. 

How many hours of meditation did you have to do before you could reliably achieve the state?

In my case, maybe 5-8 hours of meditation on retreat before the earlier jhanas felt straightforward to access. I did get lucky, experiencing a jhana quite early on during my retreat. I also found that cold showers and listening to my favorite music pre-meditation made getting into a jhana much faster.

What percentage of the time when you try to get into the state do you succeed?

ATM I think 90-95%? 

 
