Quick takes


POLL: Is it OK to eat honey[1]?

I've appreciated the Honey wars. We've seen the kind of earnest inquiry that makes EA pretty great. 

I'm interested to see where the community stands here. I have so much uncertainty that I'm close to the neutral point, but I've updated towards it maybe not being OK - I previously slurped the honey without a thought. What do you think[2]?
 

Poll: "It's OK to eat honey" (vote on a scale from disagree to agree)
  1. ^

    This is a non-specific question. "OK" could mean a number of things (you choose). It could mean you think eating honey is "net positive" (My pleasure/health > sma

... (read more)
NickLaing
Nice one! Read it as you will sir :D - perhaps I should have been more specific, but there are trade-offs to specificity on polls like this too.
Toby Tremlett🔹
Personally, I think no poll is complete without at least two footnotes. 

Apologies for breaching forum norms. Corrected and please don't ban me.

Recently, various groups successfully lobbied to remove the moratorium on state AI bills. They had a surprising amount of success despite competing against substantial investment from big tech (e.g. Google, Meta, Amazon). I think people interested in mitigating catastrophic risks from advanced AI should consider working at these organizations, at least to the extent their skills/interests are applicable. This is both because they could often directly work on substantially helpful things (depending on the role and organization) and because this would yield ... (read more)

A new study in The Lancet estimates that high USAID spending saved over 91 million lives in the past 21 years, and that the cuts will kill 14 million by 2030. They estimate high USAID spending reduced all-cause mortality by 15%, and by 32% in under 5s.

My initial off-the-cuff hot take is that it seems borderline implausible that USAID spending could have reduced under-5 mortality by a third. With so many other factors, like development/growth, government programs, medical innovation not funded by USAID (artesunate came on the scene after 2001!), and 10x-100x more effective aid like Gates/AMF, how could this be?

The biggest under-5 effects caused by USAID might be from malaria/ORS programs, but they usually didn't fund the staff giving the medication, so how much credit are they taking for those? They've clai... (read more)
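For reference, a bit of arithmetic on just the figures quoted above (the 2030 horizon is approximated; nothing beyond the quoted numbers is assumed):

```python
# Arithmetic on the quoted Lancet figures only (horizon for the cuts is approximate).
lives_saved, period_years = 91e6, 21
deaths_from_cuts, years_to_2030 = 14e6, 5.5   # roughly mid-2025 to end of 2030

print(lives_saved / period_years)        # ~4.3 million lives/year attributed to USAID
print(deaths_from_cuts / years_to_2030)  # ~2.5 million extra deaths/year from the cuts
```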

Recently I got curious about the situation of animal farming in China. So I asked the popular AI tools (ChatGPT, Gemini, Perplexity) to do some research on this topic. I have put the result into a NotebookLM note here: https://notebooklm.google.com/notebook/071bb8ac-1745-4965-904a-d0afb9437682

If you have resources that you think I should include, please let me know.

The argument about anti-realism just reinforces my view that effective altruism needs to break apart into sub-movements that clearly state their goals/ontologies. (I'm pro-EA.) But it increasingly doesn't make sense to me to call this "effective altruism" and then be vaguely morally agnostic while mostly just being an applied utilitarian group. Even among the utilitarians there is tons of minutiae that actually significantly alters the value estimates of different things.

I really do think we could solve most of this stuff by just making EA an umbrel... (read more)

I don't want to limit my interactions to people who agree with me on certain details because I might be wrong about those details and if I'm wrong then I want to be convinced.

Toby Tremlett🔹
Meta-ethical views aren't a great way to define practical communities because they needn't affect your first-order moral views.  Ditto ontology most of the time. 

Good news! The 10-year AI moratorium on state legislation has been removed from the budget bill.

The Senate voted 99-1 to strike the provision. Senator Blackburn, who originally supported the moratorium, proposed the amendment to remove it after concluding her compromise exemptions wouldn't work.

https://www.yahoo.com/news/us-senate-strikes-ai-regulation-085758901.html?guccounter=1 

I'm surprised the vote was so close to unanimous!

Yadav
Just worth pointing out, because it was not obvious to me: the House could add it back. We will have to wait to see if that happens, but it seems unlikely.

Linking this from @Andy Masley's blog:
Consider applying to the Berggruen Prize Essay Competition on the philosophy of consciousness, and donating a portion of your winnings to effective charities

TLDR:

  • The prize is $50,000 
  • The theme is 'consciousness' and the criteria are very vague. Peter Singer won before. 

    More details on the Berggruen website here

Matching campaigns get a bad rep in EA circles,* but it's totally reasonable for a donor to be concerned that if they put lots of money into an area, other people won't donate; matching campaigns preserve the incentive for others to donate, crowding in funding.


* I agree that campaigns claiming you’ll have twice the impact as your donation will be matched are misleading.

Have you read Holden's classic on this topic? It sounds like you are describing what he calls "Influence matching".

Jason
It's understandable for a donor to have that concern. However, I submit that this goes both ways -- it's also reasonable for smaller donors to be concerned that the big donors will adjust their own funding levels to account for smaller donations, reducing the big donor's incentives to donate. It's not obvious to me which of these concerns predominates, although my starting assumption is that the big donors are more capable of adjusting than a large number of smaller donors. Much electronic ink has been spilled over the need for more diversification of funding control. Given that, I'd be hesitant to endorse anything that gives even more influence over funding levels to the entities that already have a lot of it. Unless paired with something else, I worry that embracing matching campaigns would worsen the problem of funding influence being too concentrated.
calebp
Seems plausible that EA Funds should explore offering matches to larger projects that it wants to fund to help increase the project’s funding diversity.

Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regards to space colonisation and s-risks: 

  1. In a galactic civilisation of thousands of independent and technologically advanced colonies, what is the probability that one of those colonies will create trillions of suffering digital sentient beings? (probably near 100% if digital sentience is possible… it only takes one)
  2. Is it possible to create a gover
... (read more)
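A quick way to see the "it only takes one" point in (1) is the complement rule, with made-up numbers for the per-colony probability and colony count:

```python
# "It only takes one": chance that at least one of N independent colonies does X,
# if each does so with (assumed, illustrative) probability p.
p, N = 1e-4, 10_000
p_at_least_one = 1 - (1 - p) ** N
print(p_at_least_one)   # ~0.63; with N = 1_000_000 it is effectively 1
```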
Birk Källberg 🔸
Interesting ideas! I've read your post Interstellar travel will probably doom the long-term future with enthusiasm and have had similar concerns for some years now. Regarding your questions, here are my thoughts:

1. Probability of s-risk

I agree that in a sufficiently large space civilization (that isn't controlled by your Governance Structure), the probability of s-risk is almost 100% (but not just from digital minds). Let's unpack this: our galaxy has roughly 200 billion stars (2*10^11). This means at least 10^10 viable settleable star systems. A Dyson swarm around a sun-like star could conservatively support 10^20 biological humans (today we are 10^10, and this number was extrapolated from how much sunlight is needed to sustain one human with conventional farming).

80k defines an s-risk as "something causing vastly more suffering than has existed on Earth so far". This could easily be "achieved" even without digital minds if just one colony out of the 10^10 decides it wants to create lots of wildlife preserves and its Dyson swarm consists mostly of those. With around 10^10 times more living area than Earth and as many more wild animals, one year would go by around this star and the cumulative suffering experienced by all of them would exceed the total suffering from all of Earth's history (with only ~1 billion (10^9) years of animal life).

This would not necessarily mean that the whole galactic civ was morally net bad. A galaxy with 10,000 hellish star systems, 10 million heavenly systems and 10 billion rather normal but good systems would still be a pretty awesome future from a total utility standpoint. My point is that s-risk being defined in terms of Earth suffering becomes an increasingly low bar to cross the larger your civilization is. At some point you'd have to have insanely good "quality control" in every corner of your civilization. This would be analogous to having to ensure that every single one of the 10^10 humans today on Earth is happy and never get
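Restating the wildlife-preserve arithmetic above as a quick check (only the orders of magnitude already quoted; nothing new):

```python
# Orders of magnitude quoted above; nothing here is new data.
earth_equivalents_per_swarm = 1e10   # living area of one Dyson swarm relative to Earth
earth_animal_history_years = 1e9     # ~1 billion years of animal life on Earth

# One year of wild-animal experience around a single preserve-heavy swarm,
# measured in Earth-years of wild-animal life:
print(earth_equivalents_per_swarm * 1 / earth_animal_history_years)
# -> 10.0: one such colony (out of ~1e10) exceeds Earth's entire history in a year
```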

Hi Birk. Thank you for your very in-depth response, I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help). 

The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we pr... (read more)

JordanStone
Yeah that's true.  I think 1000 is where I would start to get very worried intuitively, but there would be hundreds of millions of habitable planets in the Milky Way, so theoretically a galactic civilisation could have that many if it didn't kill itself before then.  I guess the probability of one of these civilisations initiating an s-risk or galactic x-risk would just increase with the size of the galactic civilisation. So the more that humanity expands throughout the galaxy, the greater the risk.

On Stepping away from the Forum and "EA"

I'm going to stop posting on the Forum for the foreseeable future[1]. I've learned a lot from reading the Forum as well as participating in it. I hope that other users have learned something from my contributions, even if it's just a sharper understanding of where they're right and I'm wrong! I'm particularly proud of What's in a GWWC Pin? and 5 Historical Case Studies for an EA in Decline.

I'm not deleting the account so if you want to get in touch the best way is probably DM here with an alternative way to stay in c... (read more)


Curious what critiques you saw that went unresponded to.

Anthony DiGiovanni
Unfortunately not that "succinct" :) but I argue here that cluelessness-ish arguments defeat the impartial altruistic case for any intervention, longtermist or not. Tl;dr: our estimates of the sign of our net long-term impact are arbitrary. (Building on Mogensen (2021).) (It seems maybe defensible to argue something like: "We can at least non-arbitrarily estimate net near-term effects. Whereas we're clueless about the sign of any particular (non-'gerrymandered') long-term effect (or, there's something qualitatively worse about the reasons for our beliefs about such effects). So we have more reason to do interventions with the best near-term effects." This post gives the strongest case for that I'm aware of. I'm not personally convinced, but think it's worth investigating further.)
Mo Putera
The argument I've seen is the opposite, that considering cluelessness favors longtermism instead of undermining it ("therefore consider donating to LTFF", Greaves tentatively suggests).  I am however more sympathetic to Michael's skepticism that it's often hard for me in practice to tell longtermist interventions apart from PlayPump (other than funding d/acc-flavored fieldbuilding maybe), but maybe JWS's reasoning is different.  Also "cluelessness" seems underspecified in forum discussions cf. this discussion thread so I wouldn't be surprised if you and JWS are talking about different things.

Moderation updates

Francis (Moderator Comment)

Update: this user returned to the Forum again under a new account, which we have banned. Bans affect the user, not the account.

Toby Tremlett🔹
Update: this user returned to the Forum yesterday to re-post the same piece. I've banned that account as well. Bans affect the user, not the account. 
richard_ngo
My point is not that the current EA forum would censor topics that were actually important early EA conversations, because EAs have now been selected for being willing to discuss those topics. My point is that the current forum might censor topics that would be important course-corrections, just as if the rest of society had been moderating early EA conversations, those conversations might have lost important contributions like impartiality between species (controversial: you're saying human lives don't matter very much!), the ineffectiveness of development aid (controversial: you're attacking powerful organizations!), transhumanism (controversial, according to the people who say it's basically eugenics), etc.

Re "conversations can be had in more sensitive ways", I mostly disagree, because of the considerations laid out here: the people who are good at discussing topics sensitively are mostly not the ones who are good at coming up with important novel ideas. For example, it seems plausible that genetic engineering for human intelligence enhancement is an important and highly neglected intervention. But you had to be pretty disagreeable to bring it into the public conversation a few years ago (I think it's now a bit more mainstream).
  1. If you have social capital, identify as an EA.
  2. Stop saying Effective Altruism is "weird", "cringe" and full of problems - so often

And yes, "weird" has negative connotations to most people. Self flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or use it opportunistically, weakening the movement’s overa... (read more)

Ivan Gayton was formerly a head of mission at Doctors Without Borders. His interview (60 mins, transcript here) with Elizabeth van Nostrand is full of eye-opening anecdotes; no single one is representative of the whole, so it's worth listening to / reading it all. Here's one, on the sheer level of poverty and how giving workers higher wages (even if just $1/day vs the local market rate of $0.25/day "for nine hours on the business end of a shovel") distorted the local economy to the point of completely messing up society:

[00:06:07] Ivan: I had a re

... (read more)
Buck

This was great, thanks for the link!

NickLaing
Yeah, this rhymes with everything I've seen. I have a deeply unpopular opinion, built on years of experience, that NGOs generally pay people way too much. Wrote about it (quite poorly) a whopping 8 years ago! https://ugandapanda.com/2017/04/17/ngos-part-1-pay-your-workers-less/

The thing that makes me doubt my opinion is that I'm yet to find a local Ugandan who publicly agrees with me, and most privately disagree with me too. "More money coming in is better" seems to be the common-sense line, despite the inflation (my town Gulu is the most expensive in the country), the distorted education system, and the best people being dragged to less important jobs.

It's not only NGOs: high salary bills also mean good business ideas can fail, when they could have worked and grown to employ hundreds or thousands more if the foreigner just paid market rates rather than 3x... I think it's better to just give people money, GiveDirectly style, rather than pay more. It doesn't distort the economy much.

It's a really tricky one emotionally and intellectually, and I find it very difficult to manage when I'm the one with the power to pay more.

The funny thing about working with vitamin deficiencies and malnourishment is that you never think it could happen to you. I am autistic, so my diet is bland and always the same... I have scurvy... and hypovitaminosis A... I literally write papers on issues like this and how we are supposed to fix them. SO MY QUICK TAKE IS "TAKE CARE OF YOUR HEALTH FIRST". 

So, I have two possible projects for AI alignment work that I'm debating between focusing on. Am curious for input into how worthwhile they'd be to pursue or follow up on.

The first is a mechanistic interpretability project. I have previously explored things like truth probes by reproducing the Marks and Tegmark paper and extending it to test whether a cosine-similarity-based linear classifier works as well. It does, but no better or worse than the difference-of-means method from that paper. Unlike difference of means, however, it can be extended to mu... (read more)
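For readers unfamiliar with the two probe types mentioned, here is a minimal sketch on synthetic activations (the data and variable names are illustrative, not from the Marks and Tegmark replication):

```python
import numpy as np

# X: (n_samples, d_model) activations, y: binary labels (1 = "true statement").
rng = np.random.default_rng(0)
d = 64
truth_dir = rng.normal(size=d)                      # hidden "truth direction" for toy data
X = rng.normal(size=(512, d))
y = (X @ truth_dir + 0.5 * rng.normal(size=512) > 0).astype(int)

# 1) Difference-of-means probe: direction between the class centroids.
mu_true, mu_false = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
dom_dir = mu_true - mu_false
midpoint = (mu_true + mu_false) / 2
pred_dom = ((X - midpoint) @ dom_dir > 0).astype(int)

# 2) Cosine-similarity probe: same direction, but score by angle rather than
#    dot product, so activation norm doesn't matter.
X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)
dir_unit = dom_dir / np.linalg.norm(dom_dir)
pred_cos = (X_unit @ dir_unit > 0).astype(int)

print("difference-of-means accuracy:", (pred_dom == y).mean())
print("cosine-similarity accuracy:  ", (pred_cos == y).mean())
```

On real activations the interesting comparison is held-out accuracy of the two scoring rules, which the quick take reports as roughly equal.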

Astelle Kay
Both ideas are compelling in totally different ways! The second one especially stuck with me. There's something powerful about the idea that being reliably “nice” can actually be a strategic move, not just a moral one. It reminds me a lot of how trust builds in human systems too, like how people who treat the vulnerable well tend to gain strong allies over time. Curious to see where you take it next, especially if you explore more complex environments.
Joseph_Chu
Thanks for the thoughts! I do think the second one has more potential impact if it works out, but I also worry that it's too "out there" and speculative, and dependent on the AGI being persuaded by an argument (which it could just reject), rather than something that more concretely ensures alignment. I also noticed that almost no one is working on the game theory angle, so maybe it's neglected, or maybe the smart people all agree it's not going to work.

The first project is probably more concrete and actually uses my prior skills as an AI/ML practitioner, but also, there are a lot of people already working on mech interp stuff. In comparison, my knowledge of game theory is self-taught and not very rigorous.

I'm tempted to explore both to an extent. The first one I can probably do some exploratory experiments to test the basic idea, and rule it out quickly if it doesn't work.
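As a toy illustration of the "reliably nice can be strategic" point (a standard Axelrod-style iterated prisoner's dilemma; not either of the projects above, just the textbook setup):

```python
# Payoffs (row player): T=5 defect vs cooperator, R=3 mutual cooperation,
# P=1 mutual defection, S=0 cooperate vs defector.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]   # copy opponent's last move

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    history_a, history_b = [], []                   # each stores (own_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(history_a), strat_b(history_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (600, 600): niceness pays against reciprocators
print(play(tit_for_tat, always_defect))   # (199, 204): loses only slightly to exploiters
```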

Of course! You make some great points. I’ve been thinking about that tension too, how alignment via persuasion can feel risky, but might be worth exploring if we can constrain it with better emotional scaffolding.

VSPE (the framework I created) is an attempt to formalize those dynamics without relying entirely on AGI goodwill. I agree it’s not obvious yet if that’s possible, but your comments helped clarify where that boundary might be.

I would love to hear how your own experiments go if you test either idea!

Pete Buttigieg just published a short blogpost called We Are Still Underreacting on AI

He seems to believe that AI will cause major changes in the next 3-5 years and thinks that AI poses "terrifying challenges," which makes me wonder if he is privately sympathetic to the transformative AI hypothesis. If so, he might also take catastrophic risks from AI quite seriously. While it's not explicitly mentioned, at the end of his piece he diplomatically affirms:

The coming policy battles won’t be over whether to be “for” or “against” AI. It is developing swif

... (read more)

I really love the 80,000 Hours podcast (Rob Wiblin is one of my favourite pod hosts), but I wish the episodes were shorter. These days I barely manage to get through a third of the often 3-hour episodes before a new episode comes out, leaving me with a choice between leaving one topic unfinished or not staying up to date with a different topic. I think 1.5 hours is the podcast-length sweet spot; I particularly like the format of Spencer Greenberg's Clearer Thinking. I remember Rob Wiblin speaking about episode length at some point, arguing that longer episod... (read more)


You don't need to listen to podcasts as soon as they come out :) 

In fact with most media, you can wait a few weeks/months and then decide whether you actually want to read/watch/listen to it, rather than just defaulting to listening to it because it is new and shiny

In fact since you like Rob Wiblin, you can go and listen to old episodes (from another podcast) that he recommends

Mo Putera
The essay by Rob analysing this is Our data suggest people keep listening to podcasts even if they’re very long; the relevant part that responds to your tradeoff remark is (emphasis his)

That said I do agree with Nick that I wish they tightened up their editing. This seems doable in a way that still gets the benefits Rob mentioned in his essay, like getting to new questions the guest hasn’t been asked before, and the guests easing into the conversation over time as Rob et al build chemistry with them ("I find the best moments on the show are often past the 2h30m mark, when we’re both more likely to be at ease, let our guard down, be authentic and go off script").

I am however wary of using marginal listener acquisition (i.e. listener growth) as the main "steer" for 80K podcast fine-tuning, because of the tyranny of the marginal user, which leads to the enshittification of all once-great products.
NickLaing
I agree; for me they are too long and could be tightened up, and there's a lot of less-interesting content that could even be edited out. 1.5 hours for me is around the sweet spot of the best longform podcasts (Tim Ferriss, Dwarkesh). In saying that, the few EAs who I've complained to about this disagree and love the length, but I'm skeptical how much of that is coupled with it being the original format of "the" ORIGINAL EA podcast. I wonder how many people who have encountered the podcast in the last couple of years find them too long compared to people who have been into it from the beginning.

Permissive epistemology doesn't imply precise credences / completeness / non-cluelessness

(Many thanks to Jesse Clifton and Sylvester Kollin for discussion.)

My arguments against precise Bayesianism and for cluelessness appeal heavily to the premise “we shouldn’t arbitrarily narrow down our beliefs”. This premise is very compelling to me (and I’d be surprised if it’s not compelling to most others upon reflection, at least if we leave “arbitrary” open to interpretation). I hope to get around to writing more about it eventually.

But suppose you d... (read more)
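One concrete way to picture part of what's at stake (my toy framing, not the author's argument): if the evidence only constrains a credence to an interval rather than a point, an intervention's expected value can be positive under some admissible credences and negative under others, leaving its sign indeterminate rather than merely uncertain.

```python
# Toy sign-indeterminacy under imprecise credences (illustrative numbers only).
payoff_if_H, payoff_if_not_H = 100.0, -10.0   # value of the intervention if H holds / doesn't

def expected_value(p_H: float) -> float:
    return p_H * payoff_if_H + (1 - p_H) * payoff_if_not_H

credal_interval = (0.02, 0.30)   # assumed lower and upper credence in H
print([expected_value(p) for p in credal_interval])
# -> [-7.8, 23.0]: negative at the lower credence, positive at the upper one,
#    so there is no non-arbitrary way to call the intervention net positive or negative.
```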

Has anyone considered the implications of a Reform UK government?

It would be greatly appreciated if someone with the relevant experience or knowledge could share their thoughts on this topic. 

I know this hypothetical issue might not warrant much attention when compared to today's most pressing problems, but with poll after poll suggesting Reform UK will win the next election, it seems as if their potential impact should be analysed. I cannot see any mention of Reform UK on this forum.

Some concerns from their manifesto:

  • Cutting foreign aid by 50%
  • Scrappi
... (read more)

IIUC, polls this far out from an election aren't generally trustworthy, so I don't currently think it's particularly likely they'll win.

HugoB
I would also be curious about the effects on AI safety, e.g. any chance the AISI might get DOGEd?

Maybe what humans need more than more advice is advice on how to actually apply advice — that is, better ways to bridge the gap between hearing it and living it?

So not just a list of steps or clever tips, but skills and mindsets for truly absorbing what we read, hear, discuss, and turning that into action. Which I feel might mean shifting from passively waiting for something to "click" to actively digging for what someone is trying to convey and figuring out how it could work for us, just as it worked for them.

Of course, not all advice will fit us, and tha... (read more)
