Hey! I'm Edo, married + 2 cats. I live in Tel Aviv, Israel, and I feel weird writing about myself, so I go meta.
I'm a mathematician; I love solving problems and helping people. My LinkedIn profile has some more stuff.
I'm a forum moderator, which mostly means that I care about this forum and about you! So let me know if there's anything I can do to help.
I'm currently working full-time at EA Israel, doing independent research and project management. At the moment I'm mostly working on evaluating the impact of for-profit tech companies, but I have many projects and this changes rapidly.
I think that this question would be better framed not in terms of the EA community, because community-level framings tend to pull the discussion toward meta-level disagreements rather than object-level ones.
For example, I like Dylan's reformulation attempt because it is about object-level differences. Another option would be to ask about the next $100K invested in AI safety.
Some thoughts:
This is a really cool idea, and the level of execution on the testing and reasoning is spot on 👌 In particular, I think it was a great choice to start experimenting with "plaintext" shared state.
This kind of research can also give some clarity on multi-agent AGI risk scenarios (e.g., Distributional AGI: https://www.alphaxiv.org/abs/2512.16856), in the sense of coordination between supposedly stateless agents.
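To make the coordination idea concrete, here is a minimal sketch (my own illustration, not from the post) of two stateless agents coordinating through a shared plaintext file: each invocation reads the whole file, decides, and appends a note, so no agent carries memory between calls.

```python
from pathlib import Path

STATE = Path("shared_state.txt")  # the shared plaintext "memory"

def run_agent(name: str, task: str) -> None:
    """A stateless agent: all of its context comes from the shared file."""
    history = STATE.read_text() if STATE.exists() else ""
    if task in history:
        note = f"{name}: task '{task}' already claimed, skipping\n"
    else:
        note = f"{name}: claiming task '{task}'\n"
    # Append-only writes keep a full audit trail of the coordination.
    with STATE.open("a") as f:
        f.write(note)

# Two independent invocations coordinate without any in-process state.
run_agent("agent-A", "summarize thread")
run_agent("agent-B", "summarize thread")  # sees A's claim and skips
print(STATE.read_text())
```

A real setup would need file locking or atomic appends, but the point is that the plaintext file is the only channel between the agents.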
One use case for the forum is as a curated database of relevant writings, allowing for discussion and discovery, and perhaps being useful to AI models. It might therefore be good to "spam" the forum with much more cross-posted content from the blogs of relevant people and organizations.
If this is done on old posts, they shouldn't appear on the frontpage, and automatic cross-posting should be possible and simple with current tech.
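As a rough sketch of what "simple with current tech" could look like, assuming the source blogs expose RSS/Atom feeds: `feedparser` is a real parsing library, but `create_post` below is a hypothetical stand-in for whatever posting API the forum exposes.

```python
import feedparser  # a real RSS/Atom parsing library (pip install feedparser)

FEEDS = [
    "https://example.org/blog/rss",  # placeholder feed URL
]

def cross_post_new_entries(create_post, seen_ids):
    """Mirror new feed entries to the forum.

    `create_post` is a hypothetical stand-in for the forum's posting
    API; `seen_ids` is a set used to avoid duplicate cross-posts.
    """
    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            entry_id = entry.get("id", entry.get("link"))
            if entry_id in seen_ids:
                continue
            create_post(
                title=entry.get("title", "Untitled"),
                body=entry.get("summary", ""),
                link=entry.get("link"),
                frontpage=False,  # keep old cross-posts off the frontpage
            )
            seen_ids.add(entry_id)
```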
I'm curious about how representative the EA Forum is of the EA community, particularly with regard to worldview. One thing you could try is to take the EA community surveys done by RP and estimate how a random representative of "the forum" would answer, where the representative might be a random user, or a random comment/post weighted by karma or amount of text.
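As a sketch of the "random representative" idea (my own illustration; the data fields and answers are made up), one could sample posts with probability proportional to karma and tally how their authors would answer a survey question:

```python
import random

# Hypothetical data: each post has an author, karma, and that author's
# (imagined) answer to one survey question.
posts = [
    {"author": "u1", "karma": 120, "answer": "longtermist"},
    {"author": "u2", "karma": 15,  "answer": "neartermist"},
    {"author": "u3", "karma": 60,  "answer": "longtermist"},
]

def sample_forum_answers(posts, n=1000):
    """Estimate the karma-weighted answer distribution by drawing
    posts with probability proportional to their karma."""
    weights = [max(p["karma"], 0) for p in posts]  # clip negative karma
    draws = random.choices(posts, weights=weights, k=n)
    counts = {}
    for p in draws:
        counts[p["answer"]] = counts.get(p["answer"], 0) + 1
    return {answer: c / n for answer, c in counts.items()}

print(sample_forum_answers(posts))
```

The same loop works for any weighting scheme (uniform over users, text length, etc.); comparing these estimates to the RP survey results would give a rough measure of how skewed the forum is.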
One awesome tool from the Welfare Footprint Institute last year is a ChatGPT agent built for quantifying animal suffering using their framework.
Re the first point, I agree that the context should be related to a person with an EA philosophy.
Re the second point, I think that discussions about the EA portfolio are often interpreted as zero-sum or tribal, and may cause more division in the movement.
I agree that most of the effects of such a debate are likely about shifting around our portfolio of efforts. However, there are other possible effects (recruiting/onboarding/promoting/aiding existing efforts, or increasing the amount of total resources by getting more readers involved). Also, a shift in the portfolio can happen as a result of object-level discussion, and it is not clear to me which way is better.
I guess my main point is that I'd like people in the community to think less about what the community should think. Err... oops.