All posts


Week of Sunday, 20 July 2025

Frontpage Posts

Quick takes

Giving What We Can is about to hit 10,000 pledgers. (9935 at the time of writing)

If you're on the fence and wanna be in the 4 digit club, consider very carefully whether you should make the pledge! An important reminder that Earning to Give is a valid way to engage with EA.

Linch

Fun anecdote from Richard Hamming about checking the calculations used before the Trinity test:

Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."[7]

From https://en.wikipedia.org/wiki/Richard_Hamming 

Linch
PEPFAR may still be getting killed off after all :( https://www.nytimes.com/2025/07/23/health/pepfar-shutdown.html

I wrote up something for my personal blog about my relationship with effective altruism. It's intended for a non-EA audience - at this point my blog subscribers are mostly friends and family - so I didn't think it was worth cross-posting, as I spend a lot of time explaining what effective altruism actually is, but some people might still be interested. My blog is mostly about books and whatnot, not effective altruism, but if I do write more detailed stuff on effective altruism I will try to post it to the forum as well.


AI risk in depth, in the mainstream!

Perhaps the most popular British podcast, The Rest Is Politics, has just spent 23 minutes on one of the most compelling and straightforward explanations of AI risk I've heard anywhere, let alone in the mainstream media. The first 5 minutes of the discussion are especially good as an explainer, and then there's a more wide-ranging discussion after that.

Recommended for sharing with non-EA friends, especially in England, as this is a respected mainstream podcast that not many people will find weird - minutes 16 to 38. He also discusses (near the end) his personal journey of how he became scared of AI, which is super cool.

I don't love his solution of England and the EU building their own "honest" models, but hey, most of it is great.

Also a shoutout to any of you in the background who might have played a part in helping Rory Stewart think about this more deeply.
 


Week of Sunday, 13 July 2025

Frontpage Posts


Quick takes

I am sure someone has mentioned this before, but…

For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn’t significantly original. Whenever I have found an idea existing anywhere, even if it was a footnote on an underrated 5-karma-post, I would be hesitant to write about it, since I thought that I wouldn’t add value to the “marketplace of ideas.” In this abstract concept, the “idea is already out there” - so the job is done, the impact is set in place. I have talked to several people who feel similarly; people with brilliant thoughts and ideas, who proclaim to have “nothing original to write about” and therefore refrain from writing.

I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete, but messy, contributor to impact - one that requires the right people to read the right content in the right way at the right time, maybe even often enough, and sometimes from the right person on the right platform.

All of that to say, the impact of your idea-sharing goes much beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it!

Relatedly, there are many more reasons to write other than sharing original ideas and saving the world :)


AI governance could be much more relevant in the EU if the EU were willing to regulate ASML. Tell ASML they can only service compliant semiconductor foundries, where a "compliant semiconductor foundry" is defined as a foundry which only allows its chips to be used by compliant AI companies.

I think this is a really promising path for slower, more responsible AI development globally. The EU is known for its cautious approach to regulation. Many EAs believe that a cautious, risk-averse approach to AI development is appropriate. Yet EU regulations are often viewed as less important, since major AI firms are mostly outside the EU. However, ASML is located in the EU, and serves as a chokepoint for the entire AI industry. Regulating ASML addresses the standard complaint that "AI firms will simply relocate to the most permissive jurisdiction". Advocating this path could be a high-leverage way to make global AI development more responsible without the need for an international treaty.


If you're considering a career in AI policy, now is an especially good time to start applying widely, as there's a lot of hiring going on. I documented in my Substack over a dozen different opportunities that I think are very promising.


Probably(?) big news on PEPFAR (title: White House agrees to exempt PEPFAR from cuts): https://thehill.com/homenews/senate/5402273-white-house-accepts-pepfar-exemption/. (Credit to Marginal Revolution for bringing this to my attention) 


Mini EA Forum Update

We've added two new kinds of notifications that have been requested multiple times before:

  1. Notifications when someone links to your post, comment, or quick take
    1. These are turned on by default — you can edit your notification settings via the Account Settings page.
  2. Keyword alerts
    1. You can manage your keyword alerts here, which you can get to via your Account Settings or by clicking the notification bell and then the three dots icon.
    2. You can quickly add an alert by clicking "Get notified" on the search page. (Note that the alerts only use the keyword, not any search filters.)
    3. You get alerted when the keyword appears in a newly published post, comment, or quick take (so this doesn't include, for example, new topics).
    4. You can also edit the frequency of both the on-site and email versions of these alerts independently via the Account Settings page (at the bottom of the Notifications list).
    5. See more details in the PR

I hope you find these useful! 😊 Feel free to reply if you have any feedback or questions.
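For those curious what the keyword matching described above amounts to in practice, here is a rough, purely illustrative sketch (the function and field names are hypothetical, not the Forum's actual implementation):

```python
# Purely illustrative sketch of the keyword-alert behavior described above.
# Function and field names are hypothetical, not the Forum's actual code.

def keyword_matches(keyword: str, item: dict) -> bool:
    """Return True if a newly published post, comment, or quick take
    contains the keyword (a simple case-insensitive match is assumed)."""
    text = f"{item.get('title', '')} {item.get('body', '')}"
    return keyword.lower() in text.lower()

# Example: an alert for "PEPFAR" would fire on this newly published post
new_post = {"title": "White House agrees to exempt PEPFAR from cuts", "body": "..."}
print(keyword_matches("PEPFAR", new_post))  # True
```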


Week of Sunday, 6 July 2025

Frontpage Posts

Quick takes

An excerpt about the creation of PEPFAR, from "Days of Fire" by Peter Baker. I found this moving.

Another major initiative was shaping up around the same time. Since taking office, Bush had developed an interest in fighting AIDS in Africa. He had agreed to contribute to an international fund battling the disease and later started a program aimed at providing drugs to HIV-infected pregnant women to reduce the chances of transmitting the virus to their babies. But it had only whetted his appetite to do more. “When we did it, it revealed how unbelievably pathetic the U.S. effort was,” Michael Gerson said.

So Bush asked Bolten to come up with something more sweeping. Gerson was already thought of as “the custodian of compassionate conservatism within the White House,” as Bolten called him, and he took special interest in AIDS, which had killed his college roommate. Bolten assembled key White House policy aides Gary Edson, Jay Lefkowitz, and Kristen Silverberg in his office. In seeking something transformative, the only outsider they called in was Anthony Fauci, the renowned AIDS researcher and director of the National Institute of Allergy and Infectious Diseases.

“What if money were no object?” Bolten asked. “What would you do?”

Bolten and the others expected him to talk about research for a vaccine because that was what he worked on.

“I’d love to have a few billion more dollars for vaccine research,” Fauci said, “but we’re putting a lot of money into it, and I could not give you any assurance that another single dollar spent on vaccine research is going to get us to a vaccine any faster than we are now.”

Instead, he added, “The thing you can do now is treatment.”

The development of low-cost drugs meant for the first time the world could get a grip on the disease and stop it from being a death sentence for millions of people. “They need the money now,” Fauci said. “They don’t need a vaccine ten years from now.”

The aides crafted a plan in secret, keeping it even from Colin Powell and Tommy Thompson, the secretary of health and human services. They were ready for a final presentation to Bush on December 4. Just before heading into the meeting, Bush stopped by the Roosevelt Room to visit with Jewish leaders in town for the annual White House Hanukkah party later that day. The visitors were supportive of Bush’s confrontation with Iraq and showered him with praise. One of them, George Klein, founder of the Republican Jewish Coalition, recalled that his father had been among the Jewish leaders who tried to get Franklin Roosevelt to do more to stop the Holocaust. “I speak for everyone in this room when I say that if you had been president in the forties, there could have been millions of Jews saved,” the younger Klein said.

Bush choked up at the thought—“You could see his eyes well up,” Klein remembered—and went straight from that meeting to the AIDS meeting, the words ringing in his ears. Lefkowitz, who walked with the president from the Roosevelt Room to the Oval Office, was convinced that sense of moral imperative emboldened Bush as he listened to the arguments about what had shaped up as a $15 billion, five-year program. Daniels and other budget-minded aides “were kind of gasping” about spending so much money, especially with all the costs of the struggle against terrorism and the looming invasion of Iraq. But Bush steered the conversation to aides he knew favored the program, and they argued forcefully for it.

“Gerson, what do you think?” Bush asked.

“If we can do this and we don’t, it will be a source of shame,” Gerson said.

Bush thought so too. So while he mostly wrestled with the coming war, he quietly set in motion one of the most expansive lifesaving programs ever attempted. Somewhere deep inside, the notion of helping the hopeless appealed to a former drinker’s sense of redemption, the belief that nobody was beyond saving.

“Look, this is one of those moments when we can actually change the lives of millions of people, a whole continent,” he told Lefkowitz after the meeting broke up. “How can we not take this step?”


There seems to be a pattern where I get excited about some potential projects and ideas during an EA Global, fill in the EA Global survey saying that the conference was extremely useful for me, but then those projects never materialise for various reasons. If others relate, I worry that EA conferences are not as useful as feedback surveys suggest.

saulius

The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure.

I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles.

1. The harms associated with the origins of our funding

The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although it's unclear how much, as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from:

  • harms to adolescent mental health,
  • cooperation with authoritarian regimes,
  • and the erosion of democracy, even in the US and Europe.

These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company.

To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth — and shaped the broader tech landscape — could still matter.

Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spot you might cause damage.

2. Ongoing risk from the same culture

Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk.

Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be completely surprising if that culture is, to some extent, being replicated across other labs and institutions involved in frontier AI.

3. Wynn-Williams is now focused on AI governance (e.g. risk of nuclear war)

In the final chapters, Wynn-Williams pivots toward global catastrophic risks: AI, great power conflict, and nuclear war.

Her framing is sober, high-context, and uncannily aligned with longtermist priorities. She seems to combine rare access (including relationships with heads of state), strategic clarity, and a grounded moral compass — the kind of person who can get in the room and speak truth to power. People recruiting for senior AI policy roles might want to reach out to her if they have not already.


I’m still not sure what the exact takeaway is. I just have a strong hunch this book matters more than I can currently articulate — and that Wynn-Williams herself may be an unusually valuable ally, mentor, or collaborator for those working on x-risk policy or institutional outreach.

If you’ve read it — or end up reading it — I’d be curious what it sparks for you. It works fantastically as an audiobook, a real page turner with lots of wit and vivid descriptions.

The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure. I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles. 1. The harms associated with the origins of our funding The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although unclear how much as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from: * harms to adolescent mental health, * cooperation with authoritarian regimes, * and the erosion of democracy, even in the US and Europe. These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company. To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth — and shaped the broader tech landscape could still matter. Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spot you might cause damage. 2. Ongoing risk from the same culture Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk. Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be comple

GiveWell did their first "lookbacks" (reviews of past grants) to see whether the grants met initial expectations and what they could learn from them:

Lookbacks compare what we thought would happen before making a grant to what we think happened after at least some of the grant’s activities have been completed and we’ve conducted follow-up research. While we can’t know everything about a grant’s true impact, we can learn a lot by talking to grantees and external stakeholders, reviewing program data, and updating our research. We then create a new cost-effectiveness analysis with this updated information and compare it to our original estimates.

(While I'm very glad they did so with their usual high quality and rigor, I'm also confused about why they hadn't started doing this earlier, given that "okay, but did we really help as much as we think we would've? Let's check" feels like such a basic M&E / ops-y question. I'm obviously missing something trivial here, but I also find it hard to buy "limited org capacity"-type explanations for GW in particular, given total funding moved, how long they've worked, their leading role in the grantmaking ecosystem, etc.)

Their lookbacks led to substantial changes vs original estimates, in New Incentives' case driven by large drops in cost per child enrolled ("we think this is due to economies of scale, efficiency efforts by New Incentives, and the devaluation of the Nigerian naira, but we haven’t prioritized a deep assessment of drivers of cost changes") and in HKI's case driven by vitamin A deficiency rates in Nigeria being lower and counterfactual coverage rates higher than originally estimated:

[Bar chart: change in expected deaths averted. New Incentives: estimate increased from 17,000 to 27,000. Helen Keller Intl: estimate decreased from 2,000 to 450.]
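As a quick back-of-the-envelope check, here is a minimal sketch comparing the before/after figures quoted above (illustrative only; GiveWell's actual lookback analyses are far more detailed):

```python
# Compare original vs. updated estimates of expected deaths averted,
# using only the figures quoted above (illustrative only).
original = {"New Incentives": 17_000, "Helen Keller Intl": 2_000}
updated = {"New Incentives": 27_000, "Helen Keller Intl": 450}

for program, before in original.items():
    after = updated[program]
    change = (after - before) / before * 100
    print(f"{program}: {before:,} -> {after:,} ({change:+.0f}%)")

# New Incentives: 17,000 -> 27,000 (+59%)
# Helen Keller Intl: 2,000 -> 450 (-78%)
```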
Mo Putera

I notice a pattern in my conversations where someone is making a career decision: the most helpful parts are often prompted by "what are your strengths and weaknesses?" and "what kinds of work have you historically enjoyed or not enjoyed?"

I can think of a couple cases (one where I was the recipient of career decision advice, another where I was the advice-giver) where we were kinda spinning our wheels, going over the same considerations, and then we brought up those topics >20 minutes into the conversation and immediately made more progress than the rest of the call to that point.

Maybe this is because in EA circles people have already put a ton of thought into considerations like "which of these jobs would be more impactful conditional on me doing an 8/10 job or better in them" and "which of these is generally better for career capital (including skill development, networks, and prestige)," so it's the conversational direction with the most low-hanging fruit. Another frame is that this is another case of people underrating personal fit relative to the more abstract/generally applicable characteristics of a job.


Week of Sunday, 29 June 2025

Frontpage Posts

Quick takes

Recently, various groups successfully lobbied to remove the moratorium on state AI bills. This involved a surprising amount of success while competing against substantial investment from big tech (e.g. Google, Meta, Amazon). I think people interested in mitigating catastrophic risks from advanced AI should consider working at these organizations, at least to the extent their skills/interests are applicable. This is both because they could often directly work on substantially helpful things (depending on the role and organization) and because this would yield valuable work experience and connections.

I worry somewhat that this type of work is neglected due to being less emphasized and seeming lower status. Consider this an attempt to make this type of work higher status.

Pulling organizations mostly from here and here we get a list of orgs you could consider trying to work (specifically on AI policy) at:

  • Encode AI
  • Americans for Responsible Innovation (ARI)
  • Fairplay (Fairplay is a kids safety organization which does a variety of advocacy which isn't related to AI. Roles/focuses on AI would be most relevant. In my opinion, working on AI related topics at Fairplay is most applicable for gaining experience and connections.)
  • Common Sense (Also a kids safety organization)
  • The AI Policy Network (AIPN)
  • Secure AI project

To be clear, these organizations vary in the extent to which they are focused on catastrophic risk from AI (from not at all to entirely).

Good news! The 10-year AI moratorium on state legislation has been removed from the budget bill.

The Senate voted 99-1 to strike the provision. Senator Blackburn, who originally supported the moratorium, proposed the amendment to remove it after concluding her compromise exemptions wouldn't work.

https://www.yahoo.com/news/us-senate-strikes-ai-regulation-085758901.html?guccounter=1 

Yadav

A new study in The Lancet estimates that high USAID spending saved over 91 million lives in the past 21 years, and that the cuts will kill 14 million by 2030. They estimate high USAID spending reduced all-cause mortality by 15%, and by 32% in under-5s.


POLL: Is it OK to eat honey[1]?

I've appreciated the Honey wars. We've seen the kind of earnest inquiry that makes EA pretty great. 

I'm interested to see where the community stands here. I have so much uncertainty that I'm close to the neutral point, but I've been updated towards it maybe not being OK - I previously slurped the honey without a thought. What do you think[2]?
 

  1. ^

    This is a non-specific question. "OK" could mean a number of things (you choose). It could mean you think eating honey is "net positive" (my pleasure/health > small chance of bee suffering), or it could mean "does no harm at all", or even "morally acceptable" - which might mean you think it does harm but you can offset it, or that the harm isn't bad enough for you to stop, or anything along those lines.

  2. ^

    @Toby Tremlett🔹 said it was inappropriate for a poll not to have 2 footnotes so here it is...

NickLaing

Recently I got curious about the situation of animal farming in China. So I asked the popular AI tools (ChatGPT, Gemini, Perplexity) to do some research on this topic. I have put the result into a NotebookLM note here: https://notebooklm.google.com/notebook/071bb8ac-1745-4965-904a-d0afb9437682

If you have resources that you think I should include, please let me know.

