Quick takes


There are a number of implicit concepts I have in my head that seem so obvious that I don't even bother verbalizing them. At least, not until it's brought to my attention that other people don't share these concepts.

It didn't feel like a big revelation at the time I learned the concept, just a formalization of something that's extremely obvious. And yet other people don't have those intuitions, so perhaps this is pretty non-obvious in reality.

Here’s a short, non-exhaustive list:

  • Intermediate Value Theorem
  • Net Present Value (a quick sketch follows this list)
  • Differentiable functions are locally linear
  • Th
... (read more)
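To give a flavour of how simple these are once written down, here is a minimal Net Present Value sketch in Python. The cash flows and discount rates are made up for illustration; this is my own example, not something from the original quick take.

```python
# Net Present Value: a payment c arriving t years from now is worth
# c / (1 + r)**t today at discount rate r; NPV sums the discounted flows.

def npv(rate: float, cashflows: list[float]) -> float:
    """NPV of cashflows, where cashflows[t] arrives t years from now."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

# Hypothetical deal: pay 100 today to receive 40 per year for three years.
print(round(npv(0.05, [-100, 40, 40, 40]), 2))  # ~8.93: worth it at a 5% rate
print(round(npv(0.15, [-100, 40, 40, 40]), 2))  # ~-8.67: not worth it at 15%
```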

Has anyone tried appending "Hire me, and I'll donate 10% of my paycheck to charity" or something similar to their resume or LinkedIn?

I suspect it would just hurt applications to non-EA employers, due to do-gooder derogation and other reasons. But maybe that's just cynicism on my part?

FYI: looking up EAG Bay Area on Google shows the EAG Bay Area 2025 application page first, which lists applications as closed. I suspect people may see this and think applications for 2026 are closed.

You probably want to reach out to the EA Global team about these kinds of things. You can find their contact info on the EAG page here:

Didn't realize my only post of the year was from April 1st. Longforms are just so scary to write other than on April Fool's Day!

Hey y'all,

My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. This isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and it's not a particularly kind framing of EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept, but it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "has an enormously well fund... (read more)

Showing 3 of 13 replies

I actually share a lot of your read here. I think it is a very strong explanation of Singer's argument (the shoes-for-suit swap is a nice touch), and the observation about the motivation for AI safety warrants engagement rather than dismissal.

My one quibble with the video's content is the "extreme utilitarians" framing; as I'm one of maybe five EA virtue ethicists, I bristle a bit at the implication that EA requires utilitarianism, and in this context it reads as dismissive. It's a pretty minor issue though.

I think that the video is still wo... (read more)

Charlie G 🔹
Thanks for the response, and to be honest it's something that I'd agree with too. I've edited my initial comment to better reflect what's actually true. I wouldn't call the EA Global that I've been to an "AI Safety Conference," but if Bay Area is truly different it wouldn't surprise me. "Well-funded" is also subjective, and I think it's likely that I was letting my reflexive defensiveness get in the way of engaging directly. That said, I think the broader point about it exposing a weakness in EA comms and the comments reflecting broad low-trust attitudes towards ideas like EA stand, and I hope people continue to engage with them.
Erich_Grunewald 🔸
For what it's worth, I would guess that the "funness" of AI safety research, and maybe especially technical AI safety research, is probably a factor in how many people are interested in working on it, but I would be surprised if it's a factor in how much money is allocated towards the field.

Quick Pitch for Using Toggl

  • Reduces task switching:
    • Actively changing the task in Toggl makes you more aware of switching.
    • Helps maintain focus on one task longer.
    • For small or miscellaneous tasks, I use grouped categories (e.g. "Smalls", "Slack/email") and batch them.
       
  • Tracks time against priorities:
    • Allows reflection on whether your actual time spent aligns with your intended priorities.
    • Easy to spot when too much time is going to low-priority tasks.
       
  • Improves time estimation:
    • Over time, you get calibrated on how long tasks really take.
    • Some tasks consis
... (read more)

100% agree. For those who can't build the habit with Toggl, try DoneThat. Same benefits, except for the first point!

https://donethat.ai

https://forum.effectivealtruism.org/posts/wt8gKaH9usKy3LQmK/you-should-probably-track-your-time-and-it-just-got-easier 

Man, I was just re-reading the 'why I donate' posts while compiling the Digest. Some really beautiful sentiments in there. 

I grew up in a deeply ironic and uncaring culture (i.e. I went to an all boys school). I didn't like it! It means a lot to me to be in a community now where people can write such heartfelt and authentic posts. 

Some of my favourites:

  • Lorenzo's post, for a great articulation of money considered as a vote: "€1.70 votes to be allocated into a slice of pizza for Lorenzo" or "$5.85 votes for an insecticide-treated bednet for a famil
... (read more)
Lorenzo Buonanno🔸
Thank you! Quick note that "money not being yours" is not what I personally believe or wanted to convey in the post. I (sometimes) think my money is my votes, and I want to use them to vote for the things I think are most valuable. I think it was Amrit's great post and others that mentioned things like "I don’t think there’s an especially important sense in which “my” money is mine"

Fair point Lorenzo, that was sloppy phrasing. I'll amend the original. I wasn't sure what to pull out - I honestly just loved the post overall. 

Announcing: 2026 MIRI Technical Governance Team Research Fellowship.

MIRI’s Technical Governance Team plans to run a small research fellowship program in early 2026. The program will run for 8 weeks, and include a $1200/week stipend. Fellows are expected to work on their projects 40 hours per week. The program is remote-by-default, with an in-person kickoff week in Berkeley, CA (flights and housing provided). Participants who already live in or near Berkeley are free to use our office for the duration of the program.

Fellows will spend the first week picking... (read more)

Bella

EAs are trying to win the "attention arms race" by not playing. I think this could be a mistake.

  • The founding ideas and culture of EA were created and “grew up” in the early 2010s, when online content consumption looked very different.
    • We’ve overall underreacted to shifts in the landscape of where people get ideas and how they engage with them.
    • As a result, we’ve fallen behind, and should consider making a push to bring our messaging and content delivery mechanisms in line with 2020s consumption.
  • Also, EA culture is dispositionally calm, rational, and dry.
    • Th
... (read more)
Showing 3 of 14 replies

I see how bad social media platforms are (for mental health or society at large) as orthogonal to the question of whether they can be used to cost-effectively attract talent to EA.

gergo
Short-form content doesn't have to sacrifice fidelity! It's going to contain less information, but the way to use it is to attract people to go on to engage with long-form content.
gergo
There are ad formats where people don't have to leave the platform: they just quickly share their contact information in a built-in form and then continue the mindless scrolling :-) Once they are in a better place mentally, they can read our follow-up email! There is also a whole "science" of landing page optimisation: if people click on your ad and you take them elsewhere, you make it as low-friction as possible to sign up to your thing afterwards.

The Ezra Klein Show (one of my favourite podcasts) just released an episode with GiveWell CEO Elie Hassenfeld!

Idea for someone with a bit of free time: 

While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you pr... (read more)

Started something sorta similar about a month ago: https://saul-munn.notion.site/A-collection-of-content-resources-on-digital-minds-AI-welfare-29f667c7aef380949e4efec04b3637e9?pvs=74

Eryn Van Wijk
What a wonderful idea! Mayank referred me over to this post, and I think EA at UIUC might have to hop on this project. I'll see about starting something in the next month or so and sharing a link to where I'm compiling things in case anyone else is interested in collaborating on this. Or, it's possible an initiative like it already exists that I'll stumble upon while investigating (though such a thing may well be outdated).

I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.

Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.... (read more)

Showing 3 of 25 replies

Following up a bit on this, @parconley. The second post in Zvi's covid-19 series is from 6pm Eastern on March 13, 2020. Let's remember where this is in the timeline. From my quick take above:

On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a nationa

... (read more)
Yarrow Bouchard 🔸
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I'll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances. Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
Jason
I like this comment. This topic is always at risk of devolving into a generalized debate between rationalists and their opponents, creating a lot of heat but not much light. So it's helpful to keep a fairly tight focus on potentially action-relevant questions (of which the comment identifies one).

Reading Will's post about the future of EA (here), I think there is also an option to "hang around and see what happens". It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.

A better Earth would build a second Suez Canal, to ensure that we don't suffer trillions in damage if the first one gets blocked. Likewise, having two "think carefully about things" movements seems fine.

It hasn't always felt like this "two is better than one" feeling... (read more)

Showing 3 of 9 replies
Nathan Young
Sure, and do you want to stand on any of those accusations? I am not going to argue the point with 2 blogposts. What is the point you think is the strongest? As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don't fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it's probably too early to judge. 

Well, the evidence is there if you're ever curious. You asked for it, and I gave it.

David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as essays by a few other people well-known in the effective altruism community and the LessWrong community. He has a number of publish... (read more)

Nathan Young
I often don't respond to people who write far more than I do.  I may not respond to this. 

Hi, does anyone from the US want to donation-swap with me to a German tax-deductible organization? I want to donate $2410 to the Berkeley Genomics Project via Manifund.

Is there currently an effective altruism merch/apparel store? If not, do people think there is demand? I'd be happy to run it or help someone set it up. (A quick search shows previous attempts that are now closed; if anyone knows why, that would be good to know too.)

Joseph
I'm curious how easy or hard it is to set up some drop shipping. A few items (t-shirts, hoodies, mugs, caps) with a few choices of designs might be feasible, much like the Shrimp Welfare Project Shop, or the DFTBA shop.

It's quite easy; I actually already did it with Printful + Shopify. I stalled out because (1) I realized it's much more confusing to deal with all the copyright stuff and stepping on toes (I don't want to be competing with EA itself or EA orgs, and didn't feel like coordinating with a bunch of people), and (2) you kind of get raked using an easy, fully automated stack. Not a big deal, but with shipping, hoodies end up being like 35-40 and t-shirts almost 20. I felt like, given the size of EA, we should probably just buy a heat press or embroidery machine since w... (read more)

James Herbert
Not one I know of. It's on my longterm to-do list for EA Netherlands.

A quick OpenAI o1-preview BOTEC for additional emissions from a sort of Leopold scenario ~2030, assuming the extra energy is mostly provided by natural gas, since I was kinda curious. Not much time was spent on this and I took the results at face value. I (of course?) buy that emissions don't matter in the short term, in a world where R&D is increasingly automated and scaled. (A rough sanity-check sketch follows the quoted prompt below.)

Phib: Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas. Also, that AI inference is used at an increasing rate, sa... (read more)
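For anyone who wants to gut-check a scenario like this by hand, here is a rough back-of-the-envelope sketch in Python. All of the input figures below are my own round-number assumptions (roughly 4,200 TWh/year of US electricity generation, about 0.4 kg CO2 per kWh for gas-fired generation, and about 5 Gt/year of current US CO2 emissions); they are not Phib's numbers or o1-preview's output.

```python
# Rough BOTEC: emissions from adding ~20% to US electricity generation,
# supplied mostly by natural gas. All inputs are round-number assumptions.

US_GENERATION_TWH = 4200      # approx. annual US electricity generation
ADDED_FRACTION = 0.20         # extra demand added for AI in the scenario
GAS_KG_CO2_PER_KWH = 0.4      # rough CO2 intensity of gas-fired generation
US_ANNUAL_CO2_GT = 5.0        # rough current US CO2 emissions

extra_twh = US_GENERATION_TWH * ADDED_FRACTION               # ~840 TWh/year
extra_gt_co2 = extra_twh * 1e9 * GAS_KG_CO2_PER_KWH / 1e12   # kWh * kg/kWh -> Gt

print(f"Extra generation: {extra_twh:.0f} TWh/year")
print(f"Extra emissions: ~{extra_gt_co2:.2f} Gt CO2/year, "
      f"roughly {100 * extra_gt_co2 / US_ANNUAL_CO2_GT:.0f}% of current US emissions")
```

On these assumptions the scenario adds a few hundred megatonnes of CO2 per year, i.e. a single-digit-percentage increase on current US emissions.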

I live in Australia, and am interested in donating to the fundraising efforts of MIRI and Lightcone Infrastructure, to the tune of $2,000 USD for MIRI and $1,000 USD for Lightcone. Neither of these is tax-advantaged for me. Lightcone is tax-advantaged in the US, and MIRI is tax-advantaged in a few countries according to their website.

Anyone want to make a trade, where I donate the money to a tax-advantaged charity in Australia that you would otherwise donate to, and you make these donations? As I understand it, anything in Effective Altruism Austral... (read more)

This is now covered for Lightcone, but MIRI is still open.

Mitchell Laughlin🔸
Can confirm, and happy to vouch. Tax-effective Australian charities and funds:
  • Against Malaria Foundation
  • Deworm the World Initiative (led by Evidence Action)
  • Effective Altruism Australia
  • GiveDirectly
  • Giving What We Can
  • Helen Keller International
  • Malaria Consortium
  • New Incentives
  • One Acre Fund
  • StrongMinds
  • Unlimit Health (formerly SCI)
  • All Grants Fund by GiveWell
  • Top Charities Fund by GiveWell
  • Environment Fund by Giving Green

Londoners!
@Gemma 🔸 is hosting a co-writing session this Sunday for people who would like to write "Why I Donate" posts. The plan is to work in pomodoros and publish something during the session.

I can’t join this Sunday (finals season whoo!), but this is a really good idea. I’d love to see more initiatives like this to encourage writing on the Forum—especially during themed weeks.

Also, I’m always down to do (probably remote) co-working sessions with people who want to write Forum posts.

A semi-regular reminder that if anybody wants to join EA (or EA-adjacent) online book clubs, I'm your guy.

Copying from a previous post:

I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don't like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if they search for "book club" or "reading group." Details, including links for joining each

... (read more)

I came to one, it was great! Thanks Joseph for your tireless organizing. 

In Development, a global development-focused magazine founded by Lauren Gilbert, has just opened its first call for pitches. They are looking for 2-4k-word stories about things happening in the developing world, and they're especially excited about pitches from people living in low- and middle-income countries. They pay 2k USD per article; submissions close Jan 12. More info here
