Cofounded EA Israel, background in math & CS, worked in prioritization research, and moderated on the forum.
I'm currently earning to give at a tech company, donating everything I don't need to live on. I'm prioritizing animal welfare and giving through the Animal Welfare Fund. I'm also a board member at EA Israel and at ALTER.
I have struggled a lot with burnout and depression, and I'm still working to shape my life positively.
Downvoted in large part because of what looks like the unfiltered use of LLMs. I really appreciate satirical content, and honestly think it is a good way to criticize or discuss unconventional ideas. But the basic idea in this post is simple and punchy, and would have been much better presented as a much more concise essay.
Re the first point, I agree that the context should be related to a person with an EA philosophy.
Re the second point, I think that discussions about the EA portfolio are often interpreted as zero-sum or tribal, and may cause more division in the movement.
I agree that most of the effects of such a debate are likely about shifting around our portfolio of efforts. However, there are other possible effects (recruiting/onboarding/promoting/aiding existing efforts, or increasing the total amount of resources by getting more readers involved). Also, a shift in the portfolio can happen as a result of object-level discussion, and it is not clear to me which way is better.
I guess my main point is that I'd like people in the community to think less about what the community should think. Err.. oops..
I think that this question will be better if it is framed not in terms of the EA community.
For example, I like Dylan's reformulation attempt because it is about object-level differences. Another option would be to ask about the next $100K invested in AI safety.
Some thoughts:
This is a really cool idea, and the level of execution on the testing and reasoning is spot on. In particular, I think it was a great choice to start experimenting with "plaintext" shared state.
This kind of research can also give some clarity on multi-agent AGI risk scenarios (e.g. Distributional AGI: https://www.alphaxiv.org/abs/2512.16856), in the sense of coordination between supposedly stateless agents.
Quickly, because I want to get back to reading: the first link to Anna's post is broken. It should probably be https://www.lesswrong.com/posts/xtuk9wkuSP6H7CcE2/ayn-rand-s-model-of-living-money-and-an-upside-of-burnout