EdoArad

Doing stuff @ Effective Altruism Israel
5172 karma · Working (6-15 years) · Tel Aviv-Yafo, Israel

Bio


Hey! I'm Edo, married + 2 cats, I live in Tel-Aviv, Israel, and I feel weird writing about myself so I go meta.

I'm a mathematician, I love solving problems and helping people. My LinkedIn profile has some more stuff.

I'm a forum moderator, which mostly means that I care about this forum and about you! So let me know if there's anything I can do to help.

I'm currently working full-time at EA Israel, doing independent research and project management. Right now I'm mostly working on evaluating the impact of for-profit tech companies, but I have many projects and this changes rapidly.

Comments (862)

Topic contributions (32)

Re the first point, I agree that the context should be related to a person with an EA philosophy.

Re the second point, I think that discussions about the EA portfolio are often interpreted as zero-sum or tribal, and may cause more division in the movement.

I agree that most of the effects of such a debate are likely about shifting around our portfolio of efforts. However, there are other possible effects (recruiting/onboarding/promoting/aiding existing efforts, or increasing the amount of total resources by getting more readers involved). Also, a shift in the portfolio can happen as a result of object-level discussion, and it is not clear to me which way is better.

I guess my main point is that I'd like people in the community to think less about what the community should think. Err... oops.

I think that this question would be better framed not in terms of the EA community, for two reasons:

  1. The reasoning about the object-level question, involving timelines and different intervention strategies, is very interesting in itself, and there's no need to add the extra layer of understanding what the community is doing and how it could and should practically adjust.
  2. It signal-boosts a norm of focusing less on intra-movement prioritization and more on personal or marginal prioritization and object-level questions.

For example, I like Dylan's reformulation attempt due to it being about object-level differences. Another could be to ask about the next $100K invested in AI safety.

Some thoughts:

  1. Abolition vs. welfarism. The goals of the movement may diverge both in the long-term vision (complete abolition vs. happy farms) and in the medium term (fewer animals grown for food vs. better living standards for the animals that are grown for food).
  2. Differences between species and tradeoffs between them (e.g. an increased cost of pig meat can increase consumption of chicken meat).
  3. As others have noted, most interventions have important secondary effects on the movement itself. How prioritization should account for this is, I think, a very important meta-question.

This is a really cool idea, and the level of execution on the testing and reasoning is spot on 👌 In particular, I think it was a great choice to start experimenting with "plaintext" shared state.

This kind of research can also give some clarity on multi-agent AGI risk scenarios (e.g. Distributional AGI: https://www.alphaxiv.org/abs/2512.16856 ), in the sense of coordination between supposedly stateless agents.

One use case for the forum is as a curated database of relevant writings, allowing for discussion and discovery, and perhaps useful for AI models. Perhaps it would be good to spam the forum with much more cross-posted content from the blogs of relevant people and organizations.

If this is done for old posts, they shouldn't appear on the frontpage, and automatic cross-posting should be possible and simple with current tech.
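To illustrate the "simple with current tech" claim: a minimal sketch of pulling post titles and links out of a blog's RSS feed using only the Python standard library. The feed content here is a made-up placeholder, not any real blog's feed.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal RSS 2.0 payload standing in for a real blog feed;
# in practice you'd fetch this over HTTP from the blog's feed URL.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>Post A</title>
      <link>https://example.org/post-a</link>
    </item>
    <item>
      <title>Post B</title>
      <link>https://example.org/post-b</link>
    </item>
  </channel>
</rss>"""

def extract_posts(rss_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(extract_posts(SAMPLE_RSS))
# [('Post A', 'https://example.org/post-a'), ('Post B', 'https://example.org/post-b')]
```

From there, cross-posting each (title, link) pair is just a call to the forum's posting API on some schedule.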

I'm curious about how representative the EA forum is of the EA community, particularly in regard to worldview. One thing you could try is to take the EA community surveys done by RP and estimate how a random representative of "the forum" would answer, where the random representative might be a random user, or a random comment/post weighted by karma or amount of text.
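The karma-weighted sampling idea above can be sketched in a few lines; the (user, karma) data here is invented for illustration, and real values would come from forum exports.

```python
import random

# Hypothetical (user, karma) pairs; real data would come from a forum export.
comments = [
    ("user_a", 120),
    ("user_b", 30),
    ("user_c", 5),
]

def sample_weighted(items, k=1000, seed=0):
    """Draw k "random representatives" with probability proportional to karma,
    and return how often each user was drawn."""
    rng = random.Random(seed)
    users = [u for u, _ in items]
    weights = [w for _, w in items]
    draws = rng.choices(users, weights=weights, k=k)
    return {u: draws.count(u) for u in users}

counts = sample_weighted(comments)
print(counts)  # high-karma users dominate the sample
```

Averaging survey answers over draws like these would approximate "what the forum thinks" under the karma-weighted definition; swapping the weights for text length gives the other variant.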

Answer by EdoArad

One awesome tool from the Welfare Footprint Institute last year is a ChatGPT agent made for quantifying animal suffering using their framework.

Alternatively, people find engaging with bugs "yucky" so they prefer having an excuse not to step on a spider :\
