Working in healthcare technology.
MSc in applied mathematics/theoretical ML.
Interested in increasing diversity, transparency and democracy in the EA movement. Would like to know how algorithm developers can help "neartermist" causes.
Personally, I don't believe in the concept of a "trusted person". I think EA has had its fun trying to be a high-trust environment where some large things are kept private, and it backfired horribly.
I'll take agree/disagree votes to indicate how compelling this would be to readers.
That was the aim of my comment as well, so I do hope more people actually vote on it.
I came to this discussion by following a link from the Animal Welfare Fund's report, in which it gave out a large grant that isn't publicly disclosed.
Looks like the number is just for 2024; it doesn't really say what the previous numbers were (e.g., before the FTX scandal, when most attendees could be reimbursed for flights and accommodation).
Full disclosure: I was rejected from an EAG in 2022, I think (after attending one the year before).
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.
...some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly...
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree to have their leadership scrutinized.
The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.
Upvoted because I'm glad you answered the question (and didn't use EA grant money for this).
Disagreevoted because, as an IMO medalist, I don't think science olympiad medalists are really such a useful audience, nor do I see any value in disseminating said fanfiction to potential alignment researchers.