Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).
This post recaps a survey about EA 'meta' topics (e.g., talent pipelines, community-building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gon...
Glad you bumped this, Michel; I was also surprised by how little attention it received.
You requested feedback, so I hope the below is useful.
High level: we've been working on our strategy for 2024, and I expected these posts to be very helpful for that. However, for some reason, they've only been slightly helpful. Below I've listed a few suggestions for what might have made them more helpful (if this info is contained in the posts and I've missed it, I apologise in advance):
Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
The progressive and/or leftist perspective on LGB and trans people offers the most forthright argument for LGB and trans equality and rights. The liberal and/or centre-left perspective tends to be more milquetoast, more mealy-mouthed, more fence-sitting.
If you voted in the Donation Election, how long did it take you? (What did you spend the most time on?)
I'd be really grateful for quick notes. (You can also private message me if you prefer.)
I think around 5-10 mins? I tried to compare everything I cared at all about, so I only used multipliers between 0 and 2 (otherwise I would have lost track and ended up with intransitive preferences). The comparison stage took the most time. I edited things a little at the end, downgrading some charities to 0.
EZ#1
The world of Zakat is really infuriating/frustrating. There is almost NO accountability/transparency demonstrated by orgs which collect and distribute zakat - they don't seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I've spoken to about zakat/effective zakat has expressed that their number 1 gripe with zakat is the strong suspicion that it's being pocketed or corruptly used by these collection orgs.
Given this, it seems like there's a really big niche in the market to be exploited by an EA-...
I don't think helping people who feel an obligation to give zakat do so in the most effective way possible would constitute "endorsing" the awarding of strong preference to members of one's religion as recipients of charity. It merely recognizes that the donor has already made this precommitment, and we want their donation to be as effective as possible given that precommitment.
One of the canonical EA books (can't remember which) suggests that if an individual stops consuming eggs (for example), almost all the time this will have zero impact, but there's some small probability that on some occasion it will have a significant impact. And that can make it worthwhile.
I found this reasonable at the time, but I'm now inclined to think it's a poor generalization, and that the expected impact remains negligible in most scenarios. The main driver of my shift is thinking about how decisions are made within organizations, an...
I agree that the simple story of a producer reacting directly to changing demand is oversimplified. I think we differ in that, absent specific information, I think we should assume that any commonly consumed animal product's supply response to changing demand is similar to the ones reported in Compassion, by the Pound. In other words, we should center our prior on impact around the numbers reported there and update from that baseline. I can explain why I think this in more detail if we disagree on this.
Leather example:
Sure, I chose this...
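To make the threshold argument above concrete, here's a minimal sketch with hypothetical numbers (the batch size and elasticity are illustrative assumptions, not figures from the book):

```python
# Hypothetical threshold model: a supplier only adjusts production in batches,
# so one forgone purchase usually changes nothing, but occasionally tips a batch.
batch_size = 1_000               # assumed: producer adjusts in batches of 1,000 units
p_tip = 1 / batch_size           # chance a single forgone unit crosses the threshold
impact_if_tipped = batch_size    # units of production avoided when it does
print(p_tip * impact_if_tipped)  # expected impact: 1.0 unit per unit forgone

# Cumulative supply elasticities (as in Compassion, by the Pound) scale this down:
elasticity = 0.7                 # hypothetical value, not a figure from the book
print(p_tip * impact_if_tipped * elasticity)  # 0.7 units in expectation
```

The point of the model: even if your individual abstention almost never matters, the rare cases where it does are large enough that the expected impact per unit forgone stays close to one, adjusted by the supply elasticity.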
Does anyone have a resource that maps out different types/subtypes of AI interpretability work?
E.g. mechanistic interpretability and concept-based interpretability, what other types are there and how are they categorised?
I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there's a case to be made when the information is cherry-picked or biased, or there's no opportunity to hear a fair response. But goodness, if we've learned anything from the last 18 months, I hope it would include that sharing information about bad behaviour is sometimes a public good.
Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their corresponding birthdays from Facebook; you then make (or schedule) one or more donations to a number of charities, and the tool customizes birthday messages with a card mentioning that you donated $ in their honor and sends them on the corresponding birthdays.
For instance: imagine you use this tool today; it’ll then map all the birthdays of your acquaintances for the next year. Then you’ll selec...
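A minimal sketch of what the scheduling core might look like, assuming a simple contact format (all names here are hypothetical; a real version would need the Facebook API for contacts and a payments/charity API for the donations):

```python
from datetime import date

def schedule_birthday_donations(contacts, charities, amount_per_gift):
    """contacts: list of {"name": str, "birthday": date}. Returns scheduled messages."""
    today = date.today()
    schedule = []
    for i, contact in enumerate(contacts):
        # Next occurrence of the birthday (ignores the Feb 29 edge case for brevity).
        bday = contact["birthday"].replace(year=today.year)
        if bday < today:  # birthday already passed this year
            bday = bday.replace(year=today.year + 1)
        charity = charities[i % len(charities)]  # rotate through the chosen charities
        message = (f"Happy birthday, {contact['name']}! "
                   f"I donated ${amount_per_gift} to {charity} in your honor.")
        schedule.append({"send_on": bday, "message": message})
    return schedule
```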
A couple of weeks ago I blocked all mentions of "Effective Altruism", "AI Safety", "OpenAI", etc from my twitter feed. Since then I've noticed it become much less of a time sink, and much better for mental health. Would strongly recommend!
There is still plenty of time to vote in the Donation Election. The group donation pot currently stands at around $30,000. You can nudge that towards the projects you think are most worthwhile (plus, the voting system is fun and might teach you something about your preferences).
Also, you should donate to the Donation Election fund if:
a) You want to encourage thinking about effective donations on the Forum.
b) You want to commit to donating in line with the Forum's preferences.
c) You'd like me to draw you one of these bad animals (or earn o...
(not well thought-out musings. I've only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn't want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven't we seen any indications of it?
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don't see signs of extraterrestrial life because c...
Thoughts on the OpenAI Board Decisions
A couple of months ago I remarked that Sam Bankman-Fried's trial was scheduled to start in October, and that people should prepare for EA to be in the headlines. It turned out that his trial did not actually generate much press for EA, but a month later EA is again making news as a result of the recent OpenAI board decisions.
A couple quick points:
Yeah, makes sense. Although I just tried doing the "latest" sort and went through the top 40 tweets without seeing a reference to FTX/SBF.
My guess is that this filter just (unsurprisingly) shows you whatever random thing people are talking about on twitter at the moment, and it seems like the random EA-related thing of today is this, which doesn't mention FTX.
You'd probably need some longitudinal data for this to be useful.
This December is the last month unlimited Manifold Markets currency redemptions for donations are assured: https://manifoldmarkets.notion.site/The-New-Deal-for-Manifold-s-Charity-Program-1527421b89224370a30dc1c7820c23ec
Highly recommend redeeming donations this month, since there are orders of magnitude more currency outstanding than can be donated in future months.
Millions of people contract pork tapeworm infections annually; these infections cause ~30% of the ~50 million active epilepsy cases worldwide:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)61353-2/fulltext
Perhaps cultural pork consumption restrictions are onto something:
https://en.wikipedia.org/wiki/Religious_restrictions_on_the_consumption_of_pork
I thought this recent study in JAMA Open on vegan nutrition was worth a quick take due to its clever and legible study design:
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2812392
This was an identical twin study in which one twin went vegan for eight weeks, and the other didn't. Nice results on some cardiometabolic lab values (e.g., LDL-C) even though the non-vegan twin was also upping their game nutritionally. I don't think the fact that vegan diets generally improve cardiometabolic health is exactly fresh news, but I find the study design to be unusually legible for nutritional research.
The following table is from Scott Alexander's post, which you should check out for the sources and (many, many) caveats.
...This table can’t tell you what your ethical duties are. I'm concerned it will make some people feel like whatever they do is just a drop in the bucket - all you have to do is spend 11,000 hours without air conditioning, and you'll have saved the same amount of carbon an F-35 burns on one airstrike! But I think the most important thing it could convince you of is that if you were previously planning on letting yourself be miserable t
I was watching the recent DealBook Summit interview with Elon Musk, and he said the following about OpenAI (emphasis mine):
...the reason for starting OpenAI was to create a counterweight to Google and DeepMind, which at the time had two-thirds of all AI talent and basically infinite money and compute. And there was no counterweight. It was a unipolar world. And Larry Page and I used to be very close friends, and I would stay at his house, and I would talk to Larry into the late hours of the night about AI safety. And it became apparent to me that Larry [Pag
Musk (and Altman et al.) started OA in response to Page buying Hassabis, so there is no real contradiction here between being spurred by Page's attitude and treating Hassabis as the specific enemy. It's not like Page was personally overseeing DeepMind (or Google Brain) research projects, and Page quasi-retired about a year after the DM purchase anyway (and about half a year before OA officially became a thing).
"Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025)."
I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn't find any other good source supporting the claim, so it might not be accurate. The earliest mention of the claim I could find is from January 17th, 2023, although it only talks about OpenAI "proposing" the rule change.
If true, this would make the profit cap less meaningful, es...
I've talked to some people who are involved with OpenAI secondary markets, and they've broadly corroborated this.
One source told me that after a specific year (didn't say when), the cap can increase 20% per year, and the company can further adjust the cap as they fundraise.
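For intuition on how quickly a 20%-per-year increase erodes the cap, here's a quick compounding sketch (assuming, per the quote above, a 100x cap that begins rising in 2025):

```python
# Compounding the reported 20%/year increase on a 100x profit cap from 2025.
cap = 100.0
for year in range(2025, 2039):
    print(year, f"{cap:.0f}x")
    cap *= 1.2
# The cap passes 1,000x around 2038 (100 * 1.2**13 ~= 1,070), so within a
# couple of decades it stops being a meaningful constraint.
```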
I'm a little disheartened at all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good-faith and scientifically sourced manner. I'm particularly annoyed that a commenter (with relevant expertise!) was at one point heavily downvoted just for agreeing with me (I saw him at -16 at one point). Fact-checking should take precedence over fandoms.
Yeah, I think it would have been much better for him to say "proteins are shaped by..." rather than "proteins are held together by...", and to give some context for what that means. Seems fair to criticize his communication. But the quotes and examples in the linked post are more consistent with him understanding that and wording it poorly, or assuming too much of his audience, rather than him not understanding that proteins use covalent bonds.
The selected quotes do give me the impression Eliezer is underestimating what nature can accomplish relative to design, but I haven't read any of them in context so that doesn't prove much.