
Quick takes

Linch · 17h
There are a number of implicit concepts in my head that seem so obvious that I don't even bother verbalizing them. At least, until it's brought to my attention that other people don't share these concepts. It didn't feel like a big revelation at the time I learned each concept, just a formalization of something extremely obvious. And yet other people don't have those intuitions, so perhaps they are pretty non-obvious in reality.

Here's a short, non-exhaustive list:

* Intermediate Value Theorem
* Net Present Value
* Differentiable functions are locally linear
* Theory of mind
* Grice's maxims

If you have not heard of any of these ideas before, I highly recommend looking them up! Most *likely*, they will seem obvious to you. You might already know these concepts by a different name, or they're already integrated into your worldview without a definitive name. However, many people appear to lack some of these concepts, and it's possible you're one of them.

As a test: for every idea in the list above, can you think of a nontrivial real example of a dispute where one or both parties in an intellectual disagreement likely failed to model this concept? If not, you might be missing something about that idea!
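To make one of the listed concepts concrete: Net Present Value captures the intuition that money arriving later is worth less today, because today's money could be invested in the meantime. A minimal sketch (the `npv` function, the 5% discount rate, and the cash-flow numbers are illustrative choices, not from the original post):

```python
# Sketch of Net Present Value: a cash flow received t years from now is
# worth cf / (1 + r)**t today, where r is the annual discount rate.

def npv(rate, cash_flows):
    """Discount cash_flows at `rate`; cash_flows[0] is paid/received today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Pay 100 today, receive 60 at the end of each of the next two years.
# At a 5% discount rate, NPV > 0, so the deal beats the 5% alternative.
print(round(npv(0.05, [-100, 60, 60]), 2))  # → 11.56
```

A typical dispute where this concept is missing: comparing a cost paid now against benefits spread over future years by simply summing them, which overstates the future side of the ledger.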
Man, I was just re-reading the 'why I donate' posts while compiling the Digest. Some really beautiful sentiments in there.

I grew up in a deeply ironic and uncaring culture (i.e. I went to an all-boys school). I didn't like it! It means a lot to me to be in a community now where people can write such heartfelt and authentic posts.

Some of my favourites:

* Lorenzo's post, for a great articulation of money considered as a vote: "€1.70 votes to be allocated into a slice of pizza for Lorenzo" or "$5.85 votes for an insecticide-treated bednet for a family in DRC"
* Aaron's post: this one made me emotional, which surprised me given the Pokemon-rap context. I think it was the picture Aaron made of the world conspiring to create something he loved.
* Bentham's Bulldog: the title alone is wonderful, "A Life That Cannot Be A Failure"
The Ezra Klein Show (one of my favourite podcasts) just released an episode with GiveWell CEO Elie Hassenfeld!
Announcing: 2026 MIRI Technical Governance Team Research Fellowship.

MIRI's Technical Governance Team plans to run a small research fellowship program in early 2026. The program will run for 8 weeks and include a $1200/week stipend. Fellows are expected to work on their projects 40 hours per week. The program is remote-by-default, with an in-person kickoff week in Berkeley, CA (flights and housing provided). Participants who already live in or near Berkeley are free to use our office for the duration of the program.

Fellows will spend the first week picking out scoped projects from a list provided by our team or designing independent research projects (related to our overall agenda), and then spend seven weeks working on that project under the guidance of our Technical Governance Team. One of the main goals of the program is to identify full-time hires for the team.

If you are interested in participating, please fill out this application as soon as possible (it should take 45-60 minutes). We plan to set dates for participation based on applicant availability, but we expect the fellowship to begin after February 2, 2026 and end before August 31, 2026 (i.e., some 8-week period in spring/summer 2026).

Strong applicants care deeply about existential risk, have existing experience in research or policy work, and are able to work autonomously for long stretches on topics that merge considerations from the technical and political worlds. Unfortunately, we are not able to sponsor visas for this program.

See here for examples of potential projects.
Didn't realize my only post of the year was from April 1st. Longforms are just so scary to write other than on April Fool's Day!