Anonymous feedback form: https://www.admonymous.co/kuhanj
How long does the happiness continue when you're not meditating? A range of times would be helpful
Initially the afterglow would last 30 minutes to a few hours. Over time it's gotten closer to a default state unless various stressors (usually work-related) build up and I don't spend enough time processing them. I've been trading off higher mindfulness to get more work done and am not sure if I'm making the right trade-offs, but I expect it'll become clearer over time as I get more data on how my productivity varies with my mindfulness level.
How long does it take you to get into the state each time?
When my mindfulness levels are high it can be almost instantaneous and persist outside of meditation. When it's not, I can still usually get to a fairly strong jhana within 30 minutes.
How many hours of meditation did you have to do before you could reliably achieve the state?
In my case, maybe 5-8 hours of meditation on retreat before the earlier jhanas felt straightforward to access. I did get lucky experiencing a jhana quite early on during my retreat. I also found that cold showers and listening to my favorite music pre-meditation made getting into a jhana much faster.
ATM I think 90-95%?
Fair and understandable criticisms. Some quick responses:
1) I've attempted to share resources and pointers that I hope can get people similar benefits for free, without signing up for a retreat (like Rob Burbea's retreat videos, Nadia Asparouhova's write-up with meditation instructions, and other content). Since I found most of these after my Jhourney retreat, I can't speak from experience about their effectiveness. I'd be excited for more people to experiment and share what does and doesn't work for them, and for more experienced people to share their advice (on meditation, emotion processing, and more). I also don't intend to suggest that Jhourney has access to insights that are only discoverable by doing one of their retreats. They do seem to be taking the prospect that jhanas can be accessed quickly much more seriously than many others, and they have encouraging results.
2) As I mentioned, my experience appears to have been somewhat of an outlier, and I don't have a great understanding as to why. Insofar as whatever worked for me can help others, I aim to share it. That said, Twitter discourse about jhanas and Jhourney seems to match my impression: other unaffiliated people have described Jhourney retreats as generating many outlier positive experiences.
3) It doesn't surprise me at all that there's low-hanging fruit on the mindfulness front. Buddhist texts are very poorly (anti-helpfully) translated. There has not been much serious exploration, optimization pressure, or investment in improving and democratizing mindfulness education and wellbeing. This extends beyond mindfulness. Why did it take as long as it did for GLP-1 medications to become widespread? Many self-help interventions are incentivized against actually fixing people's problems (e.g. therapists stop getting paid if they permanently fix your problems). There are other orgs working in related areas that seem to generate very positive experiences, like Art of Accomplishment, whose content and courses cover processing emotions, making better decisions, and connecting better with others.
4) I don't know Jhourney's team well and don't want to speak on their behalf (but I do think they're well-intentioned). I've found that their official and staff Twitter accounts share the most relevant instructions they provide on retreat - e.g. they publicly discuss how cultivating positivity is likely more effective for accessing jhanas, forgiveness meditation (which I'm realizing I should add to the main post), guided recordings, and many other insights.
My impression is that expected donations/fees for week-long meditation retreats are often in the $1000+ range (though granted, this is for in-person retreats, and I haven't explored this in detail). We did have daily personalized instruction, and staff were available on-call throughout our retreat. Given how quickly Jhourney's retreats sell out, from a profit-maximizing perspective it seems like they could be charging more. I also don't know what they do with their profits. I wouldn't be surprised if they donated a decent amount, or spent it in ways they think make sense on altruistic grounds. They say in their blog post about their plans that they aspire to change the lives of tens of millions with the following steps:
- Build a school to demonstrate that it's possible to transform wellbeing with meditation
- Invest the money and attention from the school into technology to accelerate that process
- Deliver superwellbeing more quickly and reliably
Thanks! You can fill out this form to get notified about future retreats. Their in-person retreats might also be worth doing if you're able to, and they generate similar results according to their survey. They're more expensive and require taking more time off work, but given their track record I wouldn't be surprised if they were worth the money and time. I have a friend who has done both an in-person and an online retreat with them and preferred the in-person one.
That said, I have a hard time imagining my experience being as positive doing the retreat in person, largely because I got a lot of value out of feeling comfortable expressing my emotions however felt natural (and crying in particular). I would not have felt comfortable potentially disrupting others while meditating in the same room.
And strong +1 to trying things. I wish I had read Romeo Stevens's meditation FAQ (and the rest of his blog) years ago, and this excerpt in particular:
There needs to be some sort of guiding principle on when to keep going and when to try something different. The answer, from surveys and measurements taken during longer term practice intensives, seems to be about 30 hours of practice. If a practice hasn't shown some sort of tangible, legible benefit in your thinking process, emotional stability, or skillful behavior in the world, it very very likely isn't the practice for you right now. This doesn't mean it is a bad practice or that others might not derive great benefit from it. This also doesn't mean it might not be useful to you in the future. But it isn't the practice for you right now. Granted, there are exceptions to every rule, and some people get something out of gritting their teeth and sticking with a practice for a long time. But I strongly suspect they could have had an easier time trying other things. 30 hours might sound like a long time, but it's just a month of practice at one hour per day. This caps how much of a time waste any given technique is. In the beginning it is very likely that you can get away with less: two weeks of practice time should show some results. If you try lots of things for two weeks each and nothing works, you may need to resort to the longer standard of 30 hours.
Jhourney recommends approaching meditation like a scientist outside of sessions (e.g. considering experiments and variables to isolate), but with child-like playfulness while meditating. I've found that approach quite helpful. It led to an impromptu experiment to listen to music to amplify positive emotions while meditating, which IIRC preceded my first jhana of the retreat.
Conditioned on human extinction, do you expect intelligent life to re-evolve with levels of autonomy similar to what humanity has now (which seems quite important for assessing how bad human extinction would be on longtermist grounds)? I don't think it's likely.
Maybe the underlying crux (if your intuition differs) is what proportion of human extinction scenarios (not including non-extinction x-risk) involve intelligent/agentic AIs, and/or other conditions which would significantly limit the potential of new intelligent life even if it did re-emerge. My current low-resilience impression is probably 90+%.
And the above considerations and credences make the question of how good the next intelligent species would be relative to humans fairly inconsequential.
Thanks for the feedback, and I’m sorry for causing that unintended (but foreseeable) reaction. I edited the wording of the original take to address your feedback. My intention for writing this was to encourage others to figure things out independently, share our thinking, and listen to our guts - especially when we disagree with the aforementioned sources of deference about how to do the most good.
I think EAs have done a surprisingly good job at identifying crucial insights, and acting accordingly. EAs also seem unusually willing to explicitly acknowledge opportunity cost and trade-offs (which I often find the rest of the world frustratingly unwilling to do). These are definitely worth celebrating.
However, I think our track record at translating the above into actually improving the future is nowhere near our potential.
Since I experience a lot of guilt about not being a good enough person, the EA community has provided a lot of much-needed comfort to handle the daunting challenge of doing as much good as I can. It’s been scary to confront the possibility that the “adults in charge” don’t have the important things figured out about how to do the most good. Given how the last few years have unfolded, they don’t even seem to be doing a particularly good job. Of course, this is very understandable. FTX trauma is intense, and the world is incredibly complicated. I don’t think I’m doing a particularly good job either.
But it has been liberating to allow myself to actually think, trust my gut, and not rely on the EA community/funders/orgs to assess how much impact I’m having relative to my potential. I expect that with more independent thinking, ambition, and courage, our community will do much better at realizing our potential moving forward.
Thanks for your comment, and I understand your frustration. I'm still figuring out how to communicate the specifics of why I feel strongly about this: incorrectly applying the neglectedness heuristic, as a shortcut to avoid investigating whether investment in an area is warranted, has led to tons of lost potential impact. And yes, US politics is, in my opinion, a central example. But I also think there are tons of others I'm not aware of, which brings me to the broader (meta) point I wanted to emphasize in the above take.
I wanted to focus on the case for more independent thinking, and discussion of how little cross-cause prioritization work there seems to be in EA world, rather than trying to convince the community of my current beliefs. I did initially include object-level takes on prioritization (which I may make another quick take about soon) in the comment, but decided to remove them for this reason: to keep the focus on the meta issue.
My guess is that many community members implicitly assume that cross-cause prioritization work is done frequently and rigorously enough to take into account important changes in the world, and that the takeaways get communicated such that EA resources get effectively allocated. I don't think this is the case. If it is, the takeaways don't seem to be communicated widely. I don't know of a longtermist GiveWell alternative for donations. I don't see much rigorous cross-cause prioritization analysis from Open Phil, 80K, or the EA Forum to inform how to most impactfully spend time. Also, the priorities of the community have stayed surprisingly consistent over the years, despite many large changes in AI, politics, and more.
Given how important and difficult it is to do well, I think EAs should feel empowered to regularly contribute to cross-cause prioritization discourse, so we can all understand the world better and make better decisions.
I really appreciated this post, and think there is a ton of room for more impact with more frequent and rigorous cross-cause prioritization work. Your post prompted me to finally write up a related quick take I've been meaning to share for a while (which I'll reproduce below), so thank you!
***
I've been feeling increasingly strongly over the last couple of years that EA organizations and individuals (myself very much included) could be allocating resources and doing prioritization much more effectively. That said, I think we're doing extremely well in relative terms, and greatly appreciate the community's willingness to engage in such difficult prioritization.
Reasons why I think we're not realizing our potential:
I'm collaborating on a research project exploring how to most effectively address concentration of power risks (which I think the community has been neglecting) to improve the LTF/mitigate x-risk, considering implications of AGI and potentially short timelines, and the current political landscape (mostly focused on the US, and to a lesser extent China). We're planning to collate, ideate, and prioritize among concrete interventions to work on and donate to, and compare their effectiveness against other longtermist/x-risk mitigation interventions. I'd be excited to collaborate with others interested in getting more clarity on how to best spend time, money, and other resources on longtermist grounds. Reach out (e.g. by EA Forum DM) if you're interested. :)
I would also love to see more individuals and orgs conduct, fund, and share more cross-cause prioritization analyses (especially in areas under-explored by the community) with discretion about when to share publicly vs. privately.
Not that I know of! I can ask if they're open to something in this vein.