English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.
I am omnivorous in my interests, but from a work perspective I am most interested in the confluence of new technologies and education. As for other things that could profit from assistance, I am trying to teach myself undergraduate-level maths and to seriously explore and engage with the intellectual and moral foundations of EA.
Reach out to me if you have any questions about Teaching English as a Foreign Language, translation and, generally, anything Humanities-oriented. Also, anything you'd like to know about Spain in general and its northwestern corner, Galicia, in particular.
I just did a search and, rather embarrassingly, couldn't find an actual long discussion in the forum (memory didn't serve as well as I had thought). I think I conflated the two comments by Ian Turner and Jason on this topic (in the forum post The ugly sides of two approaches to charity by Julia Wise, from January 13th 2025) with EA-focused criticisms of MacKenzie Scott's donations from this reddit thread, starting from PEEFsmash's post:
This is bad and neoliberals should be economically literate enough to know why.
Cost effectiveness of interventions + room for additional funding. Scott is completely disregarding both concepts and giving money to whatever sounds good. Mostly trendy social topics. There will eventually be a book written on Scott's philanthropy and it will probably have accomplished nothing at all. Would be better off as capital for Bezos to have allocated privately with the intention of profit (Amazon and other Bezos projects will do more total good for humanity than all of Scott's cockamamey donations do) and the money would certainly do more good via the Gates/Effective Altruism style of hyper targeting the most cost effective causes, giving to each only as much as they can each deploy, and funding research into high financial risk but high expected return research, both for profit and not-for-profit.
A very enlightening book on these principles: https://smile.amazon.com/Doing-Good-Better-Effective-Altruism/dp/1592409660/ref=mp_s_a_1_1?crid=3VU1ZFHVZ0ICE&keywords=doing+good+better&qid=1638468006&sprefix=doing+goo%2Caps%2C93&sr=8-1
I mean... if I were a conservative billionaire, I would be extremely wary about misuse and subversion of the principles that started some foundations (Mellon is the most egregious, but also Rockefeller, Ford, MacArthur...), and a few months ago we had in this very forum a discussion, if memory serves, of the terrible philanthropic choices of MacKenzie Scott. While I obviously think it is desirable for billionaires to spend money on effective charitable giving, I also feel there's a reasonable case to be made that money which would conventionally be routed into philanthropy could do more good if directed toward innovation: sometimes through philanthropy to individuals and early projects, sometimes through investment in companies capable of creating major breakthroughs.
Really liked this post, and as an oldie myself (by which I mean in my 40s, which feels quite old compared to the average EA or EA-adjacent person), I resonated a lot with it. In my case, though, I am not an 'old hand EA': I arrived at it rather circuitously and relatively recently (about three years ago).
Some have commented, here or elsewhere, that because EA puts so much emphasis on effectiveness, it generally doesn't care much about community building, general recruitment/retention, or group satisfaction, and when it half-heartedly tries to engage in these, it does so with a utilitarian logic that doesn't seem congenial to the task. One could make a good case, though, that this isn't a bug but a feature: EA as a resource-optimizer with little time to waste, given the importance of the issues it tries to solve or ameliorate, on accommodating less active, talented and effective people and their needs. One senses an elitist streak inevitably tied to its moral seriousness and focus on results.
On the other hand, I feel communities tend to thrive when they manage to become hospitable, pleasant places where people are happy to be, to varying degrees. This is what most successful movements (and religions) manage: come for the values, stay for the group.
Passion and intellectual engagement also help a lot, but these perhaps vary in a way that isn't tractable. Like the OP, I find many of the forum posts dull and uninteresting, but then again, the type of person I am, and my priorities, values and interests, mean I am probably ill-suited to become anything more than mildly EA-adjacent, so I don't think I'd be a good benchmark in this regard. I think Will's recent post on EA in the age of AGI does hit the nail on the head in many respects, with interesting ideas for revitalizing and updating EA, its actions and its goals. EA might never match religion's or some groups' capacity for lifelong belonging, but recognizing that limitation, and trying to soften its edges, could make it more resilient.
I really loved this post, probably both because I agree with the core of the thesis as I've understood it (even if I am an atheist) and because I like the style (not a very EA one, but then again my own background is mostly in the Humanities). I think its recommendations are spot-on, as is its critical appraisal of what actually moves most people who are not in the subset of young, highly numerical/logical and ambitious nerds who I'd guess are the core audience of EA. Then again, there's an elitist streak within EA that might say the value of the movement lies precisely in attracting and focusing on that kind of person.
I found this insightful. I find both communities interesting and overlapping, and I can also perceive the conflicts at the seams, though they seem pretty minor from an outsider's point of view. Personally, when all is said and done, I feel I share more beliefs and priors with Rationalism, but I see the two as mostly converging.
It was my lame attempt at making a verb out of the St. Petersburg Paradox, where a naive Expected Value calculation leads you astray: I play a coin-tossing game in which, if I get heads, the pot doubles, and if I get tails, I lose everything. The EV is infinite, but in real life you'll end up ruined pretty quickly. SBF had a talk about this with Tyler Cowen and clearly enjoyed biting the bullet:
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
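The dynamic in that exchange, positive expected value per round but near-certain ruin under repeated play, can be sketched with a quick simulation. This is just an illustration with the 51/49 numbers from the interview; the function name and parameters are my own:

```python
import random

def play_double_or_nothing(rounds, p_double=0.51, start=1.0):
    """Play repeated double-or-nothing: each round, with probability
    p_double the stake doubles; otherwise everything is lost."""
    value = start
    for _ in range(rounds):
        if random.random() < p_double:
            value *= 2       # 51%: the stake doubles
        else:
            return 0.0       # 49%: total wipeout
    return value

# The EV after n rounds grows without bound: (0.51 * 2)**n = 1.02**n,
# yet the probability of surviving all n rounds shrinks to 0.51**n.
n = 50
print(f"P(survive {n} rounds) = {0.51 ** n:.2e}")  # roughly 2.4e-15
print(f"EV after {n} rounds   = {1.02 ** n:.2f}")  # roughly 2.69
```

The expectation is dragged upward by an astronomically unlikely jackpot of 2**50 Earths, which is exactly why "keep playing forever" maximizes EV while guaranteeing ruin in practice.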
I rather assume SBF was a radical, no-holds-barred, naive utilitarian who simply thought he was smart enough not to get caught in what was (from his point of view) a minor infringement of the arbitrary rules and norms of the masses, and that the risk was just worth it.
I don't really know, but my starting model would be... unless AGI is applying utilitarian models, it would likely rate human welfare above animal welfare by enough orders of magnitude to make the latter insignificant. Its developments could allow for an end to farmed meat and the like, but that would also make the need for animals as such... mostly redundant? You might have reservations for animals... I don't know.