Funding Strategy Week
Nov 12 - 18: Marginal Funding Week
A week for organisations to explain what they would do with marginal funding.

Dec 23 - 31: Donation Election
A crowd-sourced pot of funds will be distributed amongst three charities based on your votes. $25,598 raised so far.

Dec 16 - 22: Pledge Highlight
A week to post about your experience with pledging, and to discuss the value of pledging.

Dec 23 - 31: Donation Celebration
When the donation celebration starts, you'll be able to add a heart to the banner showing that you've done your annual donations.
Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
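For illustration, here is a minimal sketch of programmatic access, assuming this bot site mirrors the main Forum's GraphQL API. The base URL, the /graphql path, and the exact query shape are assumptions based on the main site's API, not documented guarantees of this mirror:

```python
# Minimal sketch: fetch the five newest posts via the Forum's GraphQL API.
# Assumes the bot site exposes the same /graphql endpoint as the main site.
import requests

BOT_SITE = "https://forum-bots.effectivealtruism.org"  # assumed mirror URL

query = """
{
  posts(input: {terms: {view: "new", limit: 5}}) {
    results {
      title
      pageUrl
    }
  }
}
"""

response = requests.post(f"{BOT_SITE}/graphql", json={"query": query}, timeout=30)
response.raise_for_status()

for post in response.json()["data"]["posts"]["results"]:
    print(post["title"], "-", post["pageUrl"])
```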

Quick takes
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!

Best Forum Post I read this year: Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal

It was a tough choice this year, but I think this deep, deep dive into the different cost-effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full Google doc they worked through is here.

This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏

Honourable Mentions:
* Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time for me, as I was also highly concerned that the AI Safety field was having a "legitimacy problem".[1] As such, I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the 'vibe shift' in Silicon Valley as a chaser.
* On Owning Our
Contra Vasco Grilo on GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?

The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may make GiveWell grants or AIM charities net-negative. The post seems to argue that, on expected-value grounds, one should let children die of malaria because they could end up eating chicken, for example. I find this argument morally repugnant and want to highlight it.

Using some of the words I have used in a reply: let me quote William MacAskill's comments on "What We Owe the Future" and his reflections on FTX (https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx):

Finally, the post itself seems to pit animal welfare against global poverty causes, which I found divisive and probably counterproductive. I downvoted this post because it is not representative of the values I believe EA should strive for. It may have been sufficient to show disagreement, but if someone comes to the Forum for the first time and sees this post with many upvotes, their impression will be negative and they may not engage with the community. If a reporter reads the Forum and sees this, they will cover both EA and animal welfare negatively. And if someone was considering taking the 10% pledge or changing their career to support either animal welfare or global health and read this, they will be less likely to do so.

I am sorry, but I will strongly oppose the "ends justify the means" argument put forward by this post.
Would it be feasible/useful to accelerate the adoption of hornless ("naturally polled") cattle, to remove the need for painful dehorning?

There are around 88M farmed cattle in the US at any point in time, and I'm guessing about an OOM more globally. These cattle are for various reasons frequently dehorned: about 80% of dairy calves and 25% of beef cattle are dehorned annually in the US, meaning roughly 13-14M procedures (see the back-of-envelope sketch below). Dehorning is often done without anaesthesia or painkillers and is likely extremely painful, both immediately and for some time afterwards. Cattle horns are filled with blood vessels and nerves, so it's not like cutting nails. It might feel something like having your teeth amputated at the root.

Some breeds of cattle are "naturally polled", meaning they don't grow horns. There have been efforts to develop hornless cattle via selective breeding, and some breeds (e.g., Angus) are entirely hornless. So there is already some incentive to move towards hornless cattle, but probably a weak one, as dehorning is pretty cheap and infrequent. In cattle, there's a gene that regulates horn growth, with the hornless allele being dominant, so you can gene-edit cattle to be naturally hornless. This seems to be an area of active research (e.g.).

So now I'm wondering: are there ways of speeding up the adoption of hornless cattle? If all US cattle were hornless, >10M of these painful procedures would be avoided annually. For example, perhaps you could fund relevant gene editing research, advocate to remove regulatory hurdles, or incentivize farmers to adopt hornless cattle breeds?

Caveat: I only thought and read about all this for 15 minutes.
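As a sanity check on the 13-14M figure, here is a hedged back-of-envelope calculation. The annual calf-crop inputs are rough assumptions added for illustration (approximate US-scale magnitudes), not numbers from the quick take itself:

```python
# Back-of-envelope check on the "roughly 13-14M dehorning procedures" claim.
# Calf-crop inputs are rough assumptions, not sourced figures.

dairy_calves_per_year = 9_000_000   # assumed: roughly one calf per US dairy cow per year
beef_calves_per_year = 24_000_000   # assumed: ~33M total calf crop minus dairy calves

dairy_dehorn_rate = 0.80  # from the quick take: ~80% of dairy calves dehorned
beef_dehorn_rate = 0.25   # from the quick take: ~25% of beef cattle dehorned

procedures = (dairy_calves_per_year * dairy_dehorn_rate
              + beef_calves_per_year * beef_dehorn_rate)
print(f"{procedures / 1e6:.1f}M procedures per year")  # -> 13.2M, consistent with 13-14M
```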
Isn't mechinterp basically setting out to build tools for AI self-improvement?

One of the things people are most worried about is AIs recursively improving themselves. (Whether all people who claim this kind of thing as a red line will actually treat it as a red line is a separate question for another post.) It seems to me like mechanistic interpretability is basically a really promising avenue for that.

Trivial example: Claude decides that the most important thing is being the Golden Gate Bridge. Claude reads up on Anthropic's work, gets access to the relevant tools, and does brain surgery on itself to turn into Golden Gate Bridge Claude.

More meaningfully, it seems like any ability to understand in a fine-grained way what's going on in a big model could be co-opted by an AI to "learn" in some way. In general, I think the case that seems most likely soonest is:
* Learn in-context (e.g. results of experiments, feedback from users, things like we've recently observed in scheming papers...)
* Translate this into appropriate adjustments to weights (identified using mechinterp research)
* Execute those adjustments

Maybe I'm late to this party and everyone was already conceptualising mechinterp as a very dual-use technology, but I'm here now. Honestly, maybe it leans more towards "offense" (i.e., catastrophic misalignment) than defense! It will almost inevitably require automation to be useful, so we're ceding it to machines out of the gate. I'd expect tomorrow's models to be better placed than humans to make sense of, and use, mechinterp techniques - partly just because of sheer compute, but also maybe (and now I'm into speculating on stuff I understand even less) because the nature of their cognition is more suited to what's involved.
Haven't seen anyone mention RAND as a possible best charity for AI stuff, so I'd like to throw their hat in the ring, or at least invite people to tell me why I'm wrong. My core claims are approximately:
* Influencing the US (federal) government is probably one of the most scalable, cost-effective routes for AI safety.
* Think tanks are one of the most cost-effective ways to influence the US government.
* The prestige of the think tank matters for getting into the room and influencing change.
* RAND is among the most prestigious think tanks doing AI safety work.
* It's also probably the most value-aligned, given Jason Matheny is in charge.
* You can earmark donations to the catastrophic risks/emerging risks departments.

I'll add that I have no idea whether they need, or have asked for, marginal funding.