I'm a little disheartened at all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good-faith and scientifically sourced manner. I'm particularly annoyed that a commenter (with relevant expertise!) was heavily downvoted just for agreeing with me (I saw him at -16 at one point). Fact-checking should take precedence over fandoms.
The world of Zakat is really infuriating/frustrating. There is almost NO accountability/transparency demonstrated by orgs which collect and distribute zakat - they don't seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I've spoken to about zakat/effective zakat has expressed that their number 1 gripe with zakat is the strong suspicion that it's being pocketed or corruptly used by these collection orgs.
Given this, it seems like there's a really big niche in the market to be exploited by an EA-aligned zakat org. My feeling at the moment is that the org should focus on, and emphasise, its ability to be highly accountable and transparent about how it stores and distributes the zakat it collects.
The trick here is finding ways to distribute zakat to eligible recipients in cost-effective ways. Currently, possibly only two of the several dozen 'most effective' charities we endorse as a community would likely be zakat-compliant (New Incentives and GiveDirectly), and even then, only one or two of GiveDirectly's programs would qualify.
This is pretty disappointing, because it means the EA community would probably have to spend quite a lot of money either identifying new highly effective charities which are zakat-compliant, or starting new highly effective zakat-compliant orgs from scratch.
Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
A couple of weeks ago I blocked all mentions of "Effective Altruism", "AI Safety", "OpenAI", etc. from my Twitter feed. Since then I've noticed it become much less of a time sink, and much better for my mental health. Would strongly recommend!
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; then you make (or schedule) one or more donations to a number of charities, and the tool customizes birthday messages with a card mentioning that you donated $ in their honor and sends each one on the corresponding birthday.
For instance: imagine you use this tool today; it'll then map all the birthdays of your acquaintances for the next year. Then you select donating, e.g., $1000 to AMF, and 20 friends or relatives you like; the tool will write 20 draft messages (you can select from different templates the tool suggests to you… there's probably someone already doing this with ChatGPT), one for each of them, including a card certifying that you donated $50 to AMF in honor of their birthday, and send the message on the corresponding date (the tool could let you revise it one day before sending). There should be some options to customize messages and charities (I think it might be important that you choose a charity the other person would identify with a little bit - maybe Every.org would be more interested in this than GWWC). So you'd save a lot of time writing nice birthday messages for those you like. And, if you only select effective charities, you could deduct that amount from your pledge.
Is there anything like that already?
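To make the idea concrete, here's a minimal sketch of the core logic. Everything here is made up for illustration (the contact source, the template, the function names); a real tool would pull contacts from Facebook's API and actually schedule sends.

```python
# Minimal sketch of the proposed tool. All names and the template are
# hypothetical; contacts/birthdays would come from a real export or API.
from datetime import date

TEMPLATE = ("Happy birthday, {name}! In honor of your birthday, "
            "I donated ${amount:.2f} to {charity}.")

def plan_messages(contacts, total_donation, charity):
    """Split a donation evenly across contacts and draft one card each,
    ordered by send date."""
    share = total_donation / len(contacts)
    plans = []
    for name, birthday in sorted(contacts.items(), key=lambda kv: kv[1]):
        plans.append({
            "send_on": birthday,
            "message": TEMPLATE.format(name=name, amount=share,
                                       charity=charity),
        })
    return plans

contacts = {"Alice": date(2024, 3, 14), "Bob": date(2024, 7, 2)}
for plan in plan_messages(contacts, 1000, "AMF"):
    print(plan["send_on"], "->", plan["message"])
```

The interesting product work is all in what this sketch omits: template variety, per-contact charity matching, and the one-day-before review step.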
There is still plenty of time to vote in the Donation Election. The group donation pot currently stands at around $30,000. You can nudge that towards the projects you think are most worthwhile (plus, the voting system is fun and might teach you something about your preferences).
Also: you should donate to the Donation Election fund if:
a) You want to encourage thinking about effective donations on the Forum.
b) You want to commit to donating in line with the Forum's preferences.
c) You'd like me to draw you one of these bad animals (or earn one of our other rewards):
NB: I can also draw these animals holding objects of your choice. Or wearing clothes. Anything is possible.
One of the canonical EA books (can't remember which) suggests that if an individual stops consuming eggs (for example), almost all the time this will have zero impact, but there's some small probability that on some occasion it will have a significant impact. And that can make it worthwhile.
I found this reasonable at the time, but I'm now inclined to think it's a poor generalization: the expected impact still remains negligible in most scenarios. The main driver of my shift is thinking about how decisions are made within organizations, and how power-seeking approaches vastly outperform voting in most areas of life once the system exceeds a threshold of complexity.
Anyone care to propose updates on this topic?
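For reference, the book's argument is usually formalized as a threshold model. Here's a toy version under an assumption of my own (not the book's): a store reorders eggs in batches, so one forgone purchase usually changes nothing, but occasionally tips an entire case off the next order.

```python
# Toy threshold model for individual consumption changes.
# Assumption (mine): stores reorder in cases of CASE_SIZE eggs, so one
# fewer purchase has no effect with probability ~(1 - 1/CASE_SIZE), but
# with probability ~1/CASE_SIZE it removes a whole case from the order.

CASE_SIZE = 100  # hypothetical batch size

def expected_impact(eggs_forgone: int, case_size: int = CASE_SIZE) -> float:
    """Expected reduction in eggs produced, under the batch model."""
    p_tip = min(eggs_forgone / case_size, 1.0)  # chance you tip a reorder
    return p_tip * case_size                    # ...which drops a full case

# Forgoing 1 egg: the impact is 0 almost always, yet the expected value
# is (1/100) * 100 = 1 egg -- matching the naive per-unit estimate.
print(expected_impact(1))
```

The model illustrates why "almost always zero impact" and "expected impact roughly equal to your consumption" are compatible; the disagreement above is over whether real supply chains and organizational decisions actually behave like this clean threshold.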
(not well thought-out musings. I've only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn't want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven't we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don't see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this:
> We claim that a superintelligent AI is going to be a reality soon (maybe between 5 years and 80 years from now), and in general is a benchmark that any civilization would reach eventually. But if superintelligent AI is a thing that civilizations tend to make, why aren't we seeing any indications of that in the broader universe? If some extraterrestrial civilization made an aligned AI, wouldn't we see the results of that in a variety of ways? If some extraterrestrial civilization made an unaligned AI, wouldn't we see the results of that in a variety of ways?
Like many things, I suppose the details matter immensely. Depending on the morality of the creators, an aligned AI might spend resources expanding civilization throughout the galaxy, or it might happily putter along maintaining a globe's agricultural system. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So on a very simplistic level, it seems that the claim "civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology" is consistent with the reality that we don't observe any signs of extraterrestrial intelligence.
Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).
This post recaps a survey about EA 'meta' topics (eg., talent pipelines, community building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gone by the name 'Leaders Forum.')
This post received less attention than I expected, so I'm bumping it here to make it a bit better known that this survey summary exists. All feedback is welcome!
This December is the last month unlimited Manifold Markets currency redemptions for donations are assured: https://manifoldmarkets.notion.site/The-New-Deal-for-Manifold-s-Charity-Program-1527421b89224370a30dc1c7820c23ec
Highly recommend redeeming donations this month, since there is orders of magnitude more currency outstanding than can be donated in future months.
I thought this recent study in JAMA Open on vegan nutrition was worth a quick take due to its clever and legible study design:
This was an identical twin study in which one twin went vegan for eight weeks, and the other didn't. Nice results on some cardiometabolic lab values (e.g., LDL-C) even though the non-vegan twin was also upping their game nutritionally. I don't think the fact that vegan diets generally improve cardiometabolic health is exactly fresh news, but I find the study design to be unusually legible for nutritional research.
The following table is from Scott Alexander's post, which you should check out for the sources and (many, many) caveats.
> This table can’t tell you what your ethical duties are. I'm concerned it will make some people feel like whatever they do is just a drop in the bucket - all you have to do is spend 11,000 hours without air conditioning, and you'll have saved the same amount of carbon an F-35 burns on one airstrike! But I think the most important thing it could convince you of is that if you were previously planning on letting yourself be miserable to save carbon, you should buy carbon offsets instead. Instead of boiling yourself alive all summer, spend between $0.04 and $2.50 an hour to offset your air conditioning use.
Millions of people contract pork tapeworm infections annually, and these infections cause ~30% of the ~50 million global active epilepsy cases:
Perhaps cultural pork consumption restrictions are onto something:
Does anyone have a resource that maps out different types/subtypes of AI interpretability work?
E.g. mechanistic interpretability and concept-based interpretability, what other types are there and how are they categorised?
If I were a Bay Area VC with $5m to invest annually and $100k to donate to people researching the long-term future (e.g. because it's interesting and I like the idea of being the one to drive the research), it would be foolish to spend some of the $5m investing in people researching nanofactories.
But it would also be foolish to donate some of the $100k to the kinds of people who say "nanorobotics is an obvious scam, they can just make up whatever they want".
And people don't realize that short-term investment and long-term prediction are separate domains, each valuable in its own way, because so few people outside the near-term-focused private sector are thinking seriously about the future.
They just assume that thinking about the long-term future is a twisted, failed perversion of the private sector, because they are so deeply and exclusively immersed in the private sector's perspective.
As a result, they never have a chance to notice that the long-term future is something that they and their families might end up living in.