Bio


I co-founded Effectief Geven (a Belgian effective-giving org), volunteer as a researcher at SatisfIA (an AI-safety org), and volunteer as a writer at GAIA (an animal-welfare org).

If you're interested in philosophy and mechanism design, consider checking out my blog.

I'm a student of moral science at the University of Ghent. I also started and ran EA Ghent from 2020 to 2024, at which point I quit in protest over the Manifest scandal (and the reactionary trend it highlighted). I no longer consider myself an EA (but I'm still part of GWWC and EAA, and if the rationalists split off, I'll definitely join again).

Possible conflict of interest: I have never received money from EA, but I could plausibly be biased in favor of the organizations I volunteer for.

How others can help me

A paid job

How I can help others

Philosophical research, sociological research, graphic design, mechanism design, translation, literature reviews, and forecasting (top 20 on Metaculus).

Send me a request and I'll probably do it for free.

Sequences

Invertebrate Welfare
The Ethics Of Giving
Moral Economics
Consequentialist Cluelessness
The Meta Trap
AI Forecasting Infrastructure
High Time For Drug Policy Reform

Comments

Topic contributions

I made two visual guides that can be used to improve online discussions. They can be dropped into any conversation to (hopefully) make the discussion more productive.

The first is an update of Graham's hierarchy of disagreement:

[Image: updated version of Graham's hierarchy of disagreement]

I improved the layout of the old image and added a top layer for steelmanning. You can find my reasoning here and a link to the PDF file of the image here.

The second is a hierarchy of evidence:

I added a bottom layer for personal opinion. You can find the full image and the PDF file here.

Lastly, I wanted to share the Toulmin method of argumentation, which is an excellent guide to a general, pragmatic approach to arguments.

"and they seem to be down on socialism, except maybe some non-mainstream market variants."

I did try to find such a survey for sociology, political science, and economics, not only today but also when I was writing my post on market socialism (I too wondered whether economists are more in favor of market socialism), but I couldn't really find one. My guess is that the first two would be more pro-socialism and the last more anti, though it probably also differs from country to country depending on each one's academic history (e.g., whether it had a red scare in academia or not).

"I feel like this is the kind of anti science/empiricism arrogance that philosophers are often accused of"

This is probably partly because of the different things they research. Economists tend to look at things that are easier to quantify, like GDP and material goods produced, which capitalism is really good at, while philosophers tend to look at things capitalism seems less good at, like alienation, which is harder to quantify (though proxies like depression, suicide, and loneliness do seem to be increasing).

Not to mention, they might agree on the data but disagree on what to value. Rust & Schwitzgebel (2013) surveyed philosophy professors specializing in ethics, philosophy professors not specializing in ethics, and non-philosophy professors: 60% of ethicists felt eating meat was wrong, just under 45% of non-ethicist philosophers agreed, and only 19.6% of non-philosophers thought so. I personally think one of the strongest arguments against capitalism is the existence of factory farms. With numbers like these, it seems plausible that while the average economist might count the meat industry as a positive, the average philosopher might count it as a negative (thinking something akin to this post).

Let me try to steelman this:

We want people to learn new things, so we hold conferences where people can present their research. But whom do we invite? There are so many people, many of whom have never done any research.
Luckily for us, we have a body of people who spend their lives researching and checking each other's work: academia. Still, there are many academics, and there are only so many time slots you can assign before the schedule is full; ideally, the selection would be representative.
So now the question becomes: why was the choice made to spend so many of those limited time slots on "scientific racists", whose position is virtually universally rejected by professional researchers, while topics like socialism, which has a ton of support in academia (e.g., the latest PhilPapers survey found that when asked about their politics, a majority of philosophers selected "socialism" and only a minority selected "capitalism" or "other"), tend to get little to no time at these conferences?

For one thing, I'm not sure if I want to concede the point that it is the "maximally truth-seeking" thing to risk that a community evaporatively cools itself along the lines we're discussing.

Another way to frame it is through the concept of collective intelligence. What is good for developing individual intelligence may not be good for developing collective intelligence.

Think, for example, of schools that pit students against each other and place a heavy emphasis on high-stakes testing to measure individual student performance. This certainly motivates people to develop their own intellectual skills; just look at how much time Chinese children, for example, spend on school. But is this better for collective intelligence?

High-stakes testing often leads to a curriculum narrowly focused on cognitive skills that are easily measured by tests. This can limit the development of broader, harder-to-measure social skills that are vital for collective intelligence, such as communication, group brainstorming, de-escalation, keeping your ego in check, and empathy.

Such a testing-focused environment can also discourage collaborative learning, because the focus is on individual performance. This reduction in group learning opportunities and collaboration limits overall knowledge growth.

It can exacerbate educational inequalities by disproportionately disadvantaging students from lower socio-economic backgrounds, who may have less access to test preparation resources or supportive learning environments. This can lead to a segmented education system where collective intelligence is stifled because not all members have equal opportunities to contribute and develop.

And what about all the work that needs to be done that isn't associated with high intelligence? Students whose strengths lie in areas a given culture doesn't consider high-intelligence work (such as the arts, practical skills, or caretaking) may feel undervalued and disengage from contributing their unique perspectives. Worse, if they keep pursuing individual intelligence anyway, you might end up with a workforce with a bad division of labor, despite having people who theoretically could have filled those niches. Like what's happening in the US:

[Image] This is for the US; imagine this but a thousand times worse for China.

If you want more truth-seeking, you first have to make sure that your society functions (e.g., if everyone is a college professor, who's making the food?). To have a collective be maximally truth-seeking in the long run, you can't focus solely on truth-seeking.

"eventually SJP-EA morphs into bog-standard Ford Foundation philanthropy"

This seems unlikely to me for several reasons, foremost among them that it would require them to lose interest in animal welfare. Do you think that progressives are not truly invested in it, and that it's primarily championed by their skeptics? Because the data seem to indicate the opposite.

I appreciate what Rutger Bregman is trying to do, and his work has certainly had a big positive impact on the world, almost certainly larger than mine. But honestly, I think he could be more rigorous. I haven't looked into his 'School for Moral Ambition' project, but I have read (the first half of) his book "Humankind", and despite vehemently agreeing with its conclusion, I would never recommend it to anyone, especially not anyone who has done research before.

There seems to be some sort of trade-off between wide reach and rigor. I've noticed a similar thing with other EA public intellectuals, for example Sam Harris and his book "The Moral Landscape" (I haven't read any of his other books, mostly because this one was so riddled with sloppy errors), and Steven Pinker's "Enlightenment Now" (I haven't read any of his other books either, again because of the errors in this one). (Also, I've seen some clips of them online, and while that's not the best way to get information about someone, the clips didn't raise my opinion of them, to say the least.)

Pretty annoying overall. At least Bregman is not prominently displayed on the EA People page like they are (even though what I read of his book was comparatively better). I would remove them from it, but the last time I removed SBF and Musk, the edit got downvoted and I had to ask a friend to upvote it (and that was after SBF was detained, so I don't think a Harris or Pinker edit would fare much better). Pretty sad, because I think EA has much better people to display than a lot of the individuals on that page, especially considering that some of them (like Harris and Pinker) don't even currently identify as EA.

A bit strong, but about right. The strategy the rationalists describe seems to stem from a desire to ensure their own intellectual development, which is, after all, the rationalist project. By disregarding social norms, you can start conversing with lots of people about lots of things you otherwise couldn't. Tempting; however, my own (intellectual) freedom is not my primary concern. My primary concern is the overall happiness (or feelings, if you will) of others, and certain social norms are there to protect that.

Here's one data point: I was consistently in the top 25 on Metaculus for a couple of years, and I would never attend a conference where a "scientific racist" gave a talk.

I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing.

It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.

I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on what choices repel which kinds of people, and whether that's worth it.

EDIT: This is not a solemn vow forswearing EA forever. If things change, I would be more than happy to join again.

EDIT 2: For those wondering what this quick take is reacting to, here's a good summary by David Thorstad.

Hi Melvin, wonderful work!

Similar to you, I also want to bring about systemic change for animals (see, e.g., animal welfare is now enshrined in the Belgian constitution). One problem people like us face is that the EA framework doesn't really gel with this kind of work. My group couldn't get any funding from EA, even though we have a decades-long track record with things like:

  • Legal prohibition of the sale of dogs and cats in public marketplaces.

  • The closure of several markets where animals suffered routine and abject abuse (following hidden-camera investigations).

  • The prohibition of hunting stray cats in Wallonia and Flanders.

  • The prohibition of keeping wild animals in circuses in Belgium.

  • The decision of all Belgian supermarkets to stop selling eggs from battery hens. Now 90% of all fresh eggs sold in our country come from animal-friendly farms (barn, free-range, or organic).

  • The European ban on trade in seal products.

  • The Flemish and Walloon ban on slaughter without stunning.

  • The ban on fur farming and force-feeding in Flanders.

But the impact of changing the constitution is impossible to quantify. With things like medical interventions, we can run an RCT (which the EA framework loves), but the same can't be done with constitutional changes, since we don't have a control country. The problem with RCTs is that they're expensive and measure narrow, direct, continuous effects, while being impractical for broad, indirect, or discontinuous effects. This means RCT-based interventions privilege the status quo. Systemic-change advocates face an uphill battle in getting EA funding.

But it's more than that; the culture of EA is very Anglosphere. It's human nature to prefer your in-group, so it's unsurprising that the big EA funders, mostly Anglosphere entrepreneurs, prefer to give to other Anglosphere entrepreneurs (plus other demographic patterns you can probably guess). The Silicon Valley approach of starting a firm gets you more EA money and attention than lobbying a government for slow systemic change, and it helps a lot if you do it in the Anglosphere. If you look at all the projects and people EA gives funding and attention to, you'll see that English-speaking/Anglosphere projects and people dominate to an absurd degree, much more than you would expect if EA gave to maximum-impact projects and people indiscriminately. (And I'm not even going to talk about race and gender in EA, which really speaks for itself.)
Take a look, for example, at:

  • the individuals on the EA People page
  • the people who appear on EA podcasts
  • the AI people/project funding landscape
  • the AI projects that get attention
  • the philosophers who get attention
  • longtermist people/projects in general
  • all the people EA made famous
  • the people who work at EA organizations
  • the EA survey showing that EAs disproportionately move to the UK/US
  • individual EA university chapters in the US/UK being so well funded that they can throw regular pizza parties, while our entire country can't get a single community organizer despite being the center of EU legislation
  • the EA Forum having a tag for the US and the UK but almost no other countries
  • the EA Forum having tags for UK policy and US policy but not for other countries' policies...

You get the point.

So to get funding, I highly recommend that you first find someone who knows a lot of insiders in the EA Anglosphere and who can speak and present themselves as one of them (and, realistically, being white and male will probably also help).
Secondly, throw some numbers around. In academia it's bad form to claim to be able to quantify unquantifiable things, but EA funders want numbers, even if they're made up.
Lastly, don't beat yourself up if you don't get funding, and don't assume you're not effective just because the "effective altruist" funders don't consider you to be. Again, we didn't get any funding, and we did change the constitution, while just last week it came to light that EA funders gave, for example, $100,000 to a video game that never got developed. Just because they call themselves effective doesn't mean they are effective. Like other humans, they're very biased in favor of their in-group.
