POLL: Is it OK to eat honey[1]?
I've appreciated the Honey wars. We've seen the kind of earnest inquiry that makes EA pretty great.
I'm interested to see where the community stands here. I have so much uncertainty that I'm close to the neutral point, but I've updated towards it maybe not being OK - I previously slurped the honey without a thought. What do you think[2]?
This is a non-specific question. "OK" could mean a number of things (you choose). It could mean you think eating honey is "net positive" (My pleasure/health > sma
Recently, various groups successfully lobbied to remove the moratorium on state AI bills. They achieved a surprising amount of success while competing against substantial investment from big tech (e.g. Google, Meta, Amazon). I think people interested in mitigating catastrophic risks from advanced AI should consider working at these organizations, at least to the extent their skills/interests are applicable. This is both because they could often directly work on substantially helpful things (depending on the role and organization) and because this would yield ...
A new study in The Lancet estimates that high USAID spending saved over 91 million lives in the past 21 years, and that the cuts will kill 14 million by 2030. They estimate high USAID spending reduced all-cause mortality by 15%, and by 32% in under 5s.
My initial hot-take reaction is that it seems borderline implausible that USAID spending could have reduced under-5 mortality by 1/3. With so many other factors, like development/growth, government programs, medical innovation not funded by USAID (artesunate came on the scene after 2001!), and 10x-100x more effective aid like Gates/AMF, how could this be?
The biggest under-5 effects caused by USAID might be from malaria/ORS programs, but they usually didn't fund the staff giving the medication, so how much credit are they taking for those? They've clai...
Recently I got curious about the situation of animal farming in China. So I asked the popular AI tools (ChatGPT, Gemini, Perplexity) to do some research on this topic. I have put the result into a NotebookLM note here: https://notebooklm.google.com/notebook/071bb8ac-1745-4965-904a-d0afb9437682
If you have resources that you think I should include, please let me know.
The argument about anti-realism just reinforces my view that effective altruism needs to break apart into sub-movements that clearly state their goals/ontologies. (I'm pro-EA), but it increasingly doesn't make sense to me to call this "effective altruism" and then be vaguely morally agnostic while mostly just being an applied utilitarian group. Even among the utilitarians there is tons of minutiae that actually significantly alters the value estimates of different things.
I really do think we could solve most of this stuff by just making EA an umbrel...
Good news! The 10-year AI moratorium on state legislation has been removed from the budget bill.
The Senate voted 99-1 to strike the provision. Senator Blackburn, who originally supported the moratorium, proposed the amendment to remove it after concluding her compromise exemptions wouldn't work.
https://www.yahoo.com/news/us-senate-strikes-ai-regulation-085758901.html?guccounter=1
Linking this from @Andy Masley's blog:
Consider applying to the Berggruen Prize Essay Competition on the philosophy of consciousness, and donating a portion of your winnings to effective charities
TLDR:
The theme is 'consciousness' and the criteria are very vague. Peter Singer won before.
More details on the Berggruen website here.
Matching campaigns get a bad rep in EA circles,* but it's totally reasonable for a donor to be concerned that if they put lots of money into an area, other people won't donate. Matching campaigns preserve the incentive for others to donate, crowding in funding.
* I agree that campaigns claiming you’ll have twice the impact as your donation will be matched are misleading.
Have you read Holden's classic on this topic? It sounds like you are describing what he calls "Influence matching".
Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regards to space colonisation and s-risks:
Hi Birk. Thank you for your very in-depth response, I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help).
The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we pr...
On Stepping away from the Forum and "EA"
I'm going to stop posting on the Forum for the foreseeable future[1]. I've learned a lot from reading the Forum as well as participating in it. I hope that other users have learned something from my contributions, even if it's just a sharper understanding of where they're right and I'm wrong! I'm particularly proud of What's in a GWWC Pin? and 5 Historical Case Studies for an EA in Decline.
I'm not deleting the account so if you want to get in touch the best way is probably DM here with an alternative way to stay in c...
And yes, "weird" has negative connotations to most people. Self-flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or use it opportunistically, weakening the movement’s overa...
Ivan Gayton was formerly mission head at Doctors Without Borders. His interview (60 mins, transcript here) with Elizabeth van Nostrand is full of eye-opening anecdotes, no single one is representative of the whole interview so it's worth listening to / reading it all. Here's one, on the sheer level of poverty and how giving workers higher wages (even if just $1/day vs the local market rate of $0.25/day "for nine hours on the business end of a shovel") distorted the local economy to the point of completely messing up society:
...[00:06:07] Ivan: I had a re
The funny thing about working with vitamin deficiencies and malnourishment is that you never think it could happen to you. I am autistic, so my diet is bland and always the same... I have scurvy... and vitamin A hypovitaminosis... I literally write papers on issues like this and how we are supposed to fix them. SO MY QUICK TAKE IS "TAKE CARE OF YOUR HEALTH FIRST".
So, I have two possible projects for AI alignment work that I'm debating between focusing on. Am curious for input into how worthwhile they'd be to pursue or follow up on.
The first is a mechanistic interpretability project. I have previously explored things like truth probes by reproducing the Marks and Tegmark paper and extending it to test whether a cosine similarity based linear classifier works as well. It does, but not any better or worse than the difference of means method from that paper. Unlike difference of means, however, it can be extended to mu...
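For anyone curious what the comparison above looks like in practice, here is a minimal sketch on synthetic data (not the actual Marks and Tegmark activations): a difference-of-means probe and a cosine-similarity variant scoring the same centered activations against the same direction. Since cosine similarity only rescales the dot product by positive norms, the two thresholded classifiers agree, matching the observation that the cosine version works neither better nor worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations": two Gaussian clusters standing in for hidden
# states of true vs. false statements (hypothetical stand-in data).
d = 16
true_mu, false_mu = rng.normal(size=d), rng.normal(size=d)
X = np.vstack([
    true_mu + 0.5 * rng.normal(size=(200, d)),
    false_mu + 0.5 * rng.normal(size=(200, d)),
])
y = np.array([1] * 200 + [0] * 200)

# Difference-of-means probe: direction is the vector between class means;
# classify by the sign of the projection of the centered activation.
theta = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
midpoint = (X[y == 1].mean(axis=0) + X[y == 0].mean(axis=0)) / 2
Xc = X - midpoint
pred_dom = (Xc @ theta > 0).astype(int)

# Cosine-similarity variant: score each centered activation by cosine
# similarity to the same direction, thresholded at zero.
cos = (Xc @ theta) / (np.linalg.norm(Xc, axis=1) * np.linalg.norm(theta) + 1e-9)
pred_cos = (cos > 0).astype(int)

print("difference-of-means accuracy:", (pred_dom == y).mean())
print("cosine-similarity accuracy:  ", (pred_cos == y).mean())
```

On real activations the interesting question is instead how the two probe directions generalize across datasets, which this toy setup deliberately doesn't capture.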
Of course! You make some great points. I’ve been thinking about that tension too, how alignment via persuasion can feel risky, but might be worth exploring if we can constrain it with better emotional scaffolding.
VSPE (the framework I created) is an attempt to formalize those dynamics without relying entirely on AGI goodwill. I agree it’s not obvious yet if that’s possible, but your comments helped clarify where that boundary might be.
I would love to hear how your own experiments go if you test either idea!
Pete Buttigieg just published a short blogpost called We Are Still Underreacting on AI.
He seems to believe that AI will cause major changes in the next 3-5 years and thinks that AI poses "terrifying challenges," which makes me wonder if he is privately sympathetic to the transformative AI hypothesis. If so, he might also take catastrophic risks from AI quite seriously. While not explicitly mentioned, at the end of his piece he diplomatically affirms:
...The coming policy battles won’t be over whether to be “for” or “against” AI. It is developing swif
I really love the 80,000 Hours podcast (Rob Wiblin is one of my favourite pod hosts), but I wish the episodes were shorter. These days I barely manage to get through a third of the often three-hour episodes before a new episode comes out, leaving me with a choice between leaving one topic unfinished or not staying up to date with a different topic. I think 1.5 hours is the podcast length sweet spot; I particularly like the format of Spencer Greenberg's Clearer Thinking. I remember Rob Wiblin speaking about episode length at some point, arguing that longer episod...
You don't need to listen to podcasts as soon as they come out :)
In fact with most media, you can wait a few weeks/months and then decide whether you actually want to read/watch/listen to it, rather than just defaulting to listening to it because it is new and shiny
In fact since you like Rob Wiblin, you can go and listen to old episodes (from another podcast) that he recommends
Permissive epistemology doesn't imply precise credences / completeness / non-cluelessness
(Many thanks to Jesse Clifton and Sylvester Kollin for discussion.)
My arguments against precise Bayesianism and for cluelessness appeal heavily to the premise “we shouldn’t arbitrarily narrow down our beliefs”. This premise is very compelling to me (and I’d be surprised if it’s not compelling to most others upon reflection, at least if we leave “arbitrary” open to interpretation). I hope to get around to writing more about it eventually.
But suppose you d...
Has anyone considered the implications of a Reform UK government?
It would be greatly appreciated if someone with the relevant experience or knowledge could share their thoughts on this topic.
I know this hypothetical issue might not warrant much attention when compared to today's most pressing problems, but with poll after poll suggesting Reform UK will win the next election, it seems as if their potential impact should be analysed. I cannot see any mention of Reform UK on this forum.
Some concerns from their manifesto:
Maybe what humans need more than more advice is advice on how to actually apply advice — that is, better ways to bridge the gap between hearing it and living it?
So not just a list of steps or clever tips, but skills and mindsets for truly absorbing what we read, hear, discuss, and turning that into action. Which I feel might mean shifting from passively waiting for something to "click" to actively digging for what someone is trying to convey and figuring out how it could work for us, just as it worked for them.
Of course, not all advice will fit us, and tha...