This is a special post for quick takes by Jeroen Willems. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Don't forget to vote today for videos promoting effective charities like Against Malaria Foundation, The Humane League, GiveDirectly, Good Food Institute, ProVeg, GiveWell and Fish Welfare Initiative!

How does one vote? (Sorry if this is super obvious and I'm just missing it!)

+1. I went to the Effective Altruism Barcelona GiveDirectly video, and the voting link just took me to the GiveWell homepage.

In case you're interested in supporting my EA-aligned YouTube channel A Happier World:

I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn't reached, you won't get your money back; instead, it will return to your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the goal.

Manifund fundraising page
EA Forum post announcement

At this point, I'd be willing to buy out credit from anyone who obtains credit on Manifund and applies it to this project, should the project fail to fund. Hopefully Manifund will find a more elegant solution for this kind of issue (there was a discussion on Discord last week), but this should work as a stopgap.

(Offer limited to $240, the current gap between existing offers and the $2,500 minimum.)

Today we celebrate Petrov Day: the day, 40 years ago now, that Stanislav Petrov potentially saved the world from nuclear war.

I made a quick YouTube Short / TikTok about it:

I'd love to do more weekly coworkings with people! If you're interested in coworking with me, you can book a session here:

We can try it out and then decide if we want to do it weekly or not.

More about me: I run the YouTube channel A Happier World, so I'll most likely be working on that during our sessions.

An unpolished attempt at moral philosophy

Summary: I propose a view combining classic utilitarianism with a rule that says not to end streams of consciousness. 

Under classic utilitarianism, the only thing that matters is hedonic experience.
People with a person-affecting view object to this, but that view comes with issues of its own.

To resolve the tension between these two philosophies, I propose adding a rule to classic utilitarianism that disallows directly ending streams of consciousness (SOC).

This is a way to bridge the gap between the person-affecting view and the 'personal identity doesn't exist' view, and it tries to solve some population ethics issues.

I like the simplicity of classic utilitarianism, but I have a strong intuition that a stream of consciousness is intrinsically valuable, meaning it shouldn't be stopped or destroyed. Creating a new stream of consciousness isn't intrinsically valuable (beyond the utility it creates).

A SOC isn't infinitely valuable, though. Here are some exceptions:
1. When not ending a SOC would result in more SOCs ending (see the trolley problem): basically, you want to break the rule as little as possible.
2. When the SOC experiences negative utility and there are no signs it will become positive (see euthanasia).
3. When ending the SOC will create at least 10x its utility (or some other critical level).

I believe this is compatible with the non-identity problem (it's still unclear who counts as 'you' if you're duplicated, or if you're 20 years older).
But I've never felt comfortable with the teleportation argument, and this intuition explains why: a SOC is being ended.

So generally this means: making the current population happier (or making sure fewer people die) > increasing the number of people.

Future people don't have SOCs as they don't exist yet, but it's still important to make their lives go well.

Say we live in a simulation. If our simulation gets turned off and replaced by a different one of equal value (pain/pleasure-wise), there still seems to be something of incredible value lost.

Still, if the simulation gets replaced by a sufficiently more valuable one, that could still be good, hence exception number 3. The exception also ensures you can kill someone to prevent future people from never coming into existence (for example: someone about to spread a virus that makes everyone incapable of reproducing).

I don't think adding this rule changes the EV calculations for increasing the pain/pleasure of present and future beings when no streams of consciousness are ended (though I could be wrong).

This rule doesn't solve the repugnant conclusion, but I don't think it's repugnant in the first place. My bar for a life worth living seems higher than other people's.

How I came to this: I really liked this forum post arguing "making the current population happier > increasing the number of people". But if I agree, it means there's something of value besides pure pleasure/pain. This is my attempt at finding what that is.

One possible major objection: if you give birth, you're essentially causing a new SOC to eventually be ended (as long as aging isn't solved). Perhaps this is solved by saying you can't directly end a stream of consciousness but can ignore second/third-order effects (though I'm not sure how to make sense of that).

I'd love to hear your thoughts on these ideas. I don't think they're good enough or polished enough to deserve a full forum post, and I wouldn't be surprised if the first comment under this shortform completely shattered the idea.

Why I call it a "stream of consciousness": streams change over time, and so do conscious beings. They can also split, multiply, or grow bigger.

One thing I worry about, though: does your consciousness end when you sleep? Does it end under anesthesia? These thoughts frighten me.

EA-aligned video content creators

I made a spreadsheet of all the EA-aligned video content creators I'm aware of. This doesn't necessarily mean they make EA content, just that they share EA values. If I've missed anyone, let me know!

This was a really cool thing to do!

In case you feel like adding another feature, it might be nice to include an example or two of each channel's EA-related content in another column. It's easy to tell how Rational Animations is EA-focused, but I wasn't sure which content I should look at for e.g. the person whose TikTok account was largely focused on juggling.

Like I said, they don't necessarily make EA content. I think I'll add a column specifying whether they do or not.

Responding as per Samuel Shadrach's suggestion:

Neil Halloran seems like a good addition. 

He doesn't seem to be an EA, yet he writes rigorously on some EA-aligned topics.

See here:

I added the transcription of my newest video on sentientism and moral circle expansion to the EA Forum post :)
