Cofounded EA Israel, background in math & CS, worked in prioritization research, and moderated on the forum.
I'm currently earning to give at a tech company, donating everything I don't need to live on. I prioritize animal welfare and give through the Animal Welfare Fund. I'm also a board member at EA Israel and at ALTER.
I have struggled a lot with burnout and depression, and I'm still working to shape my life positively.
Downvoted. I felt that the post was making a bunch of assertions in a way aimed at persuading rather than explaining. That said, I'd genuinely be interested in reading more from you on this topic.
I think there is a lot to learn about the nature of consciousness and suffering from Buddhist philosophy and practice, and I think it's worthwhile to investigate how to apply it to AI risk.
In particular, there are some possibly interesting points here that I'd love to see expanded and explained in a way that would make me comfortable engaging with the ideas.
I'm really looking forward to the debate on this topic!
Some thoughts:
How do you see the long-term goals and outcomes? I'm particularly interested in how this kind of work interacts with abolitionism / animal rights. In particular, the following seems important to me:
(It is hard for me to come up with a version of this argument that I believe. If anything, I believe the opposite: that increased regulation and action for improved welfare will steadily raise the minimum level of acceptable welfare.)