Quick takes

I'm a little disheartened by all the downvotes on my last post. I believe an EA public figure used scientifically incorrect language in his public arguments for x-risk, and I put quite a bit of work into explaining why in a good-faith, scientifically sourced manner. I'm particularly annoyed that a commenter (with relevant expertise!) was heavily downvoted just for agreeing with me (I saw him at -16 at one point). Fact-checking should take precedence over fandoms.

Nick K.
16h
That's fair enough, and levels of background understanding vary (I don't have a relevant PhD either), but then the criticism should be that this point is easily misunderstood, rather than making a big deal of the strawman position being factually wrong. That would also be much more constructive than adversarial criticism.
Rebecca
10h
I think part of titotal's point is that it's not the 'strawman' interpretation but the straightforward one, and having it framed that way would understandably be frustrating. It sounds like he also disagrees with Eliezer's actual, badly communicated argument [edit: about the size of potential improvements on biology] anyway, though, based on the response to Habryka.

Yeah, I think it would have been much better for him to say "proteins are shaped by..." rather than "proteins are held together by...", and to give some context for what that means. It seems fair to criticize his communication. But the quotes and examples in the linked post are more consistent with him understanding that and wording it poorly, or assuming too much of his audience, than with him not understanding that proteins use covalent bonds.

The selected quotes do give me the impression that Eliezer underestimates what nature can accomplish relative to design, but I haven't read any of them in context, so that doesn't prove much.

Bumping a previous EA forum post: Key EA decision-makers on the future of EA, reflections on the past year, and more (MCF 2023).

This post recaps a survey about EA 'meta' topics (e.g., talent pipelines, community-building mistakes, field-building projects, etc.) that was completed by this year's Meta Coordination Forum attendees. Meta Coordination Forum is an event for people in senior positions at community- and field-building orgs/programs, like CEA, 80K, and Open Philanthropy's Global Catastrophic Risk Capacity Building team. (The event has previously gon... (read more)


Glad you bumped this, Michel; I was also surprised by how little attention it received.

You requested feedback, so I hope the below is useful.

High level: we've been working on our strategy for 2024, and I was expecting these posts to be very, very helpful for that. However, for some reason, they've only been slightly helpful. Below I've listed a few suggestions for what might have made them more helpful (if this info is contained in the posts and I've missed it, I apologise in advance):

  1. Information to help us decide how to allocate our social change portfoli
... (read more)
Habryka
5d
(As someone who filled out the survey, I thought the framing of the questions was pretty off, and I felt like that jeopardized a lot of their value. I'm not sure how much better you could do, since a survey like this is inherently hard, but I at least don't feel like the survey results would help someone understand what I think much better.)
OllieBase
5d
Thanks, Oli. Yes, I don't think we nailed it with the questions and as you say, that's always hard to do. Appreciate you adding this context for readers.

Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.

JWS
3d
I'm sorry you encountered this, and I don't want to minimise your personal experience. I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention. On the whole, though, I've found the EA community (both online and among those I've met in person) to be incredibly pro-LGBT and pro-trans. Both the underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism) and the underlying demographics (e.g. young, highly educated, socially liberal) point that way.

I think where there might be a split is in progressive (as in, politically leftist) framings of these issues and the type of language used to talk about them. Those often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don't think that means the community as a whole, or even that sub-section, is 'anti-LGBT' or 'anti-trans', and I think there are historical and multifaceted reasons why there's some enmity between 'progressive' and 'EA' camps/perspectives.

Nevertheless, I'm sorry that you experienced this sentiment, and I hope you're feeling ok.

The progressive and/or leftist perspective on LGB and trans people offers the most forthright argument for LGB and trans equality and rights. The liberal and/or centre-left perspective tends to be more milquetoast, more mealy-mouthed, more fence-sitting.

If you voted in the Donation Election, how long did it take you? (What did you spend the most time on?)

I'd be really grateful for quick notes. (You can also private message me if you prefer.) 


I think around 5-10 mins? I tried to compare everything I cared at all about, so I only used multipliers between 0 and 2 (otherwise I would have lost track and ended up with intransitive preferences). The comparison stage took the most time. At the end I edited things a little, downgrading some charities to 0.

Will Howard
3d
It took me ~1 minute. I already had a favourite candidate so I put all my points towards that. I was half planning to come back and edit to add backup choices but I've seen the interim results now so I'm not going to do that.
Jason
3d
3-4 minutes, mostly spent playing through various elimination-order scenarios in my head and trying to ensure that my assigned values would still reflect my preferences in at least the more likely scenarios.
Kaleem
3d

EZ#1

The world of Zakat is really infuriating/frustrating. There is almost NO accountability/transparency from the orgs that collect and distribute zakat; they don't seem to feel any obligation to show what they do with what they collect. Correspondingly, nearly every Muslim I've spoken to about zakat/effective zakat has said that their number-one gripe with it is the strong suspicion that it's being pocketed or corruptly used by these collection orgs.

Given this, it seems like there's a really big niche in the market to be exploited by an EA-... (read more)

Rebecca
3d
I'm not sure how I feel about this as a pathway, given the requirement that zakat donations only go to other people within the religion. On the one hand, it sounds like any charity that is constrained this way in terms of recipients but has non-Muslim employees/contractors would have to be subsidised by non-zakat donations (based on the GiveDirectly post linked in another comment). It also means endorsing a rather narrow moral circle, whereas it might be more impactful to expend resources trying to expand that circle than to optimise within it. On the other hand, it does cover a whole quarter of humanity, so potentially a lot of low-hanging fruit can be picked without correspondingly slowing moral circle expansion.

I don't think helping people who feel an obligation to give zakat do so in the most effective way possible would constitute "endorsing" the awarding of strong preference to members of one's religion as recipients of charity. It merely recognizes that the donor has already made this precommitment, and we want their donation to be as effective as possible given that precommitment.

Larks
3d
Some previous discussion here. 

One of the canonical EA books (can't remember which) suggests that if an individual stops consuming eggs (for example), almost all the time this will have zero impact, but there's some small probability that on some occasion it will have a significant impact. And that can make it worthwhile.
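For concreteness, here's the back-of-the-envelope version of that threshold argument, with a hypothetical batch size B standing in for whatever quantity the supply chain actually adjusts by (the book's own numbers aren't reproduced here):

```latex
% Threshold model: supply adjusts in batches of B units, e.g. a store
% drops one case of eggs once sales fall by roughly a full case.
% A single forgone unit triggers that cut with probability ~1/B, so:
\mathbb{E}[\text{units averted}]
  \;\approx\; \underbrace{\tfrac{1}{B}}_{\Pr(\text{trigger a cut})}
  \times \underbrace{B}_{\text{size of the cut}}
  \;=\; 1
```

So the rare large effect and the common zero effect roughly cancel out to one-for-one in expectation, which is the claim the rest of this take pushes back on.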

I found this reasonable at the time, but I'm now inclined to think it's a poor generalization: the expected impact still remains negligible in most scenarios. The main influence on my shift has been thinking about how decisions are made within organizations, an... (read more)

ag4000
4d
Thanks, this makes things much clearer to me. I agree that this style of reasoning depends heavily on the context studied (in particular, the mechanism at play), and that we can't automatically use numbers from one situation for another. I also agree with what I take to be your main point: in many situations, the impact is less than 1:1 due to feedback loops and so on.

I'm still not sure I understand the specific examples you provide:

* Animal products used as food: For commonly consumed food animal products, I would be surprised if the numbers were much lower than those in the table from Compassion, by the Pound (assuming those numbers are roughly correct), because the mechanism used to change levels of production is similar in these cases. (The previous sentence is probably naive, so I'm open to corrections.) However, your point about substitution across goods (e.g., from beef to chicken) is well taken.
* Other animal products: Not one of the examples you gave, but one material that's interested me is cow leather. I'm guessing that (1) much of leather is a byproduct* of beef production and (2) demand for leather is relatively elastic. Both of these suggest that abstaining from buying leather goods has a fairly small impact on farmed animal suffering.**
* Voting: I am unsure what you mean here by "1:1". Let me provide a concrete example, which I take to be the situation you're talking about. We have an election with n voters and 2 candidates, with the net benefit of the better candidate winning being U. If all voters were to vote for the better candidate, then each person's average impact is U / n. I assume this is what you mean by the "1" in "1:1": if someone has expected counterfactual impact U / n, then their impact is 1:1. If so, then one's impact can actually easily be greater than U / n, going against your claim. For example, if your credence on the better candidate winning is exactly 50%, then U
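To make the voting example concrete, here is a toy calculation under a simple binomial model. The model, the electorate size, and p = 0.5 are illustrative assumptions, not anything from the comment beyond the n, U setup above: each of the other n − 1 voters independently backs the better candidate with probability p, and your vote matters only if the others split exactly evenly.

```python
from math import exp, lgamma, log, pi, sqrt

def pivotal_probability(n_voters: int, p: float = 0.5) -> float:
    """P(the other n-1 voters split exactly evenly), i.e. P(your vote is
    pivotal), assuming each other voter independently backs the better
    candidate with probability p. Computed in log space to avoid overflow."""
    others = n_voters - 1
    if others % 2:  # an exact tie needs an even number of other voters
        return 0.0
    half = others // 2
    log_prob = (lgamma(others + 1) - 2 * lgamma(half + 1)
                + half * log(p) + half * log(1 - p))
    return exp(log_prob)

n, U = 1_000_001, 1.0  # illustrative electorate size; U normalised to 1
print(f"average per-voter share U/n: {U / n:.2e}")                       # 1.00e-06
print(f"expected impact at p = 0.5:  {pivotal_probability(n) * U:.2e}")  # ~8.0e-04
print(f"sqrt(2/(pi*n)) approx:       {sqrt(2 / (pi * n)):.2e}")          # ~8.0e-04
```

At p = 0.5 the tie probability scales like sqrt(2/(πn)), which for a million voters is roughly 800 times larger than 1/n, consistent with the claim that expected impact can exceed U / n.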
VictorW
4d
I'm unclear on the exact mechanism, and I suspect that the anecdote of "the manager sees the reduced demand across an extended period and decides to lower their store's import by the exact observed reduction" is a gross oversimplification of what I'd guess is a complex system: the manager isn't perfectly rational, may go long periods without a review for contractual reasons, and the supply chain spans multiple parties, all with non-linear relationships. Maybe some food supply chains differ significantly at the grower's end, or in different countries. This missing knowledge is why I don't think I have a good reason to assume generality.

Other animal products: I think your cow leather example highlights the idea that, for me, threatens simplistic math assumptions. Some resources are multi-purpose and can be made into different products through different processes and grades of quality depending on the use case. It's pretty plausible that eggs are either used for human consumption or hatching, but some animal products might be more complicated, and be used for human consumption, non-human consumption, or products in other industries. It seems reasonable to imagine a case where decreasing human consumption results in wasted production, which "inspires" someone to redirect that production to another product/market which becomes successful and results in increased non-dietary demand. I predict that this isn't uncommon, and that it could dilute some of the marginal impact calculations, which are true short-term but might not play out long-term. (I'm not saying that reducing consumption isn't positive in expectation; I'm saying that the true variance of the positive effect could be very high over the long term and typically only becomes clear in retrospect.)

Voting: Thanks for that reference from Ord. I stand updated on voting in elections. I have lingering skepticism about a similar scenario that's mathematically distinct: petition-like scenarios. E.g. if 100k people

I agree that the simple story of a producer reacting directly to changing demand is oversimplified. Where we differ is that I think that, absent specific information, we should assume any commonly consumed animal product's supply response to changing demand is similar to the ones from Compassion, by the Pound. In other words, we should centre our prior on impact around some of the numbers from there, and update from that point. I can explain why I think this in more detail if we disagree on this.

Leather example:

Sure, I chose this... (read more)

Does anyone have a resource that maps out different types/subtypes of AI interpretability work?

E.g., besides mechanistic interpretability and concept-based interpretability, what other types are there, and how are they categorised?

ag4000
4d
Late to the party here, but I'd check out Räuker et al. (2023), which provides one taxonomy of AI interpretability work.
VictorW
4d
Brilliant, thank you. One of the very long lists of interpretability work on the Forum seemed to class everything as mech interp (or possibly I just don't recognise the alternative keywords). Does the EA AI safety community feel particularly strongly about mech interp, or is my sample size just too small?

Not an expert, but I think your impression is correct.  See this post, for example (I recommend the whole sequence).

Moderation updates


I think we should hesitate to protect people from reputational damage caused by people posting true information about them. Perhaps there's a case to be made when the information is cherry-picked or biased, or there's no opportunity to hear a fair response. But goodness, if we've learned anything from the last 18 months I hope it would include that sharing information about bad behaviour is sometimes a public good.

pseudonym
11d
Fair point about reputational harms being worse and possibly too punishing in some cases. In terms of a proposed standard, it might be worth differentiating (if possible) between, e.g., careless errors or momentary lapses in judgement that were quickly rectified and likely caused no harm in expectation, versus a pattern of dishonest voting intended to mislead the EAF audience, especially if the person or an org they work for stands to gain from it, or if the comments in question directly harm another org. In the latter cases the reputational harm may be more justifiable.
pseudonym
11d
Corrected, thanks!

Idea for free (feel free to use, abuse, steal): a tool to automate donations + birthday messages. Imagine a tool that captures your contacts and their birthdays from Facebook; you then make (or schedule) one (or more) donations to a number of charities, and the tool customizes a birthday card for each contact, mentioning that you donated $ in their honor, and sends it on their birthday.

For instance: imagine you use this tool today; it’ll then map all the birthdays of your acquaintances for the next year. Then you’ll selec... (read more)
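A minimal Python sketch of what the scheduling core might look like, with made-up names and data (the Facebook contact import and the actual donation and delivery steps, the hard parts, are left out):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Contact:
    name: str
    birthday: date  # the year is ignored; only month/day matter

def birthday_card(contact: Contact, charity: str, amount: float) -> str:
    """Render the donated-in-your-honor message for one contact."""
    return (f"Happy birthday, {contact.name}! In your honor, "
            f"I donated ${amount:.0f} to {charity}.")

def cards_due(contacts: list[Contact], charity: str, amount: float,
              today: date) -> list[str]:
    """Return the cards that should be sent on a given day."""
    return [birthday_card(c, charity, amount)
            for c in contacts
            if (c.birthday.month, c.birthday.day) == (today.month, today.day)]

# Example with fictional data:
friends = [Contact("Ada", date(1990, 12, 15)), Contact("Grace", date(1988, 3, 9))]
for card in cards_due(friends, "Against Malaria Foundation", 20, date(2023, 12, 15)):
    print(card)
```

The matching logic is the trivial part; the real work would be the contact import and wiring up an actual payment or donation provider.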

A couple of weeks ago I blocked all mentions of "Effective Altruism", "AI Safety", "OpenAI", etc from my twitter feed. Since then I've noticed it become much less of a time sink, and much better for mental health. Would strongly recommend!

throw e/acc on there too

There is still plenty of time to vote in the Donation Election. The group donation pot currently stands at around $30,000. You can nudge that towards the projects you think are most worthwhile (plus, the voting system is fun and might teach you something about your preferences). 

Also, you should donate to the Donation Election fund if:
a) You want to encourage thinking about effective donations on the Forum.
b) You want to commit to donating in line with the Forum's preferences. 
c) You'd like me to draw you one of these bad animals (or earn o... (read more)

Kirsten
5d
Voted because of this, thanks for the nudge!

Thanks for letting me know, Kirsten! Good way to start the day :)

Lizka
5d
Relatedly, here are some Manifold Markets about whether the Donation Election Fund will reach:
1. $40K
2. $50K
3. $75K
4. $100K

(Not well-thought-out musings; I've only spent a few minutes thinking about this.)

In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it and who doesn't want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if there is such a high probability of extraterrestrial life, why haven't we seen any indications of it? 

On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don't see signs of extraterrestrial life because c... (read more)

Utilitarianism.net is currently down.

Looks okay to me now. How is it for you? 

Thoughts on the OpenAI Board Decisions

A couple of months ago I remarked that Sam Bankman-Fried's trial was scheduled to start in October, and that people should prepare for EA to be in the headlines. It turned out that his trial did not actually generate much press for EA, but a month later EA is again making news as a result of the recent OpenAI board decisions.

A couple quick points:

  1. It is often the case that people's behavior is much more reasonable than what is presented in the media. It is also sometimes the case that the reality is even stupider than what is
... (read more)
Nathan Young
9d
I would guess, too, that these two events have made it much easier to reference EA in passing. E.g., I think this article wouldn't have been written 18 months ago: https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362

So I think there is a real jump in notoriety once the journalistic class knows who you are. And they now know who we are. "EA, the social movement involved in the FTX and OpenAI crises" is not a good epithet.
Habryka
11d
"Top" was mostly showing me tweets from people I follow, so my sense is that it was filtered in a personalized way. I'm not fully sure how it works, but it didn't seem like the right type of filter.

Yeah, makes sense. Although I just tried doing the "latest" sort and went through the top 40 tweets without seeing a reference to FTX/SBF.

My guess is that this filter just (unsurprisingly) shows you whatever random thing people are talking about on twitter at the moment, and it seems like the random EA-related thing of today is this, which doesn't mention FTX.

Probably you need some longitudinal data for this to be useful.

This December is the last month in which unlimited Manifold Markets currency redemptions for donations are assured: https://manifoldmarkets.notion.site/The-New-Deal-for-Manifold-s-Charity-Program-1527421b89224370a30dc1c7820c23ec

I highly recommend redeeming your currency for donations this month, since there are orders of magnitude more currency outstanding than can be donated in future months.

Millions of people contract pork tapeworm infections annually, and these infections cause ~30% of the ~50 million active epilepsy cases worldwide:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)61353-2/fulltext

Perhaps cultural pork consumption restrictions are onto something:
https://en.wikipedia.org/wiki/Religious_restrictions_on_the_consumption_of_pork
 

I thought this recent study in JAMA Open on vegan nutrition was worth a quick take due to its clever and legible study design:

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2812392 

This was an identical twin study in which one twin went vegan for eight weeks, and the other didn't. Nice results on some cardiometabolic lab values (e.g., LDL-C) even though the non-vegan twin was also upping their game nutritionally. I don't think the fact that vegan diets generally improve cardiometabolic health is exactly fresh news, but I find the study design to be unusually legible for nutritional research.

The following table is from Scott Alexander's post, which you should check out for the sources and (many, many) caveats. 

This table can’t tell you what your ethical duties are. I'm concerned it will make some people feel like whatever they do is just a drop in the bucket - all you have to do is spend 11,000 hours without air conditioning, and you'll have saved the same amount of carbon an F-35 burns on one airstrike! But I think the most important thing it could convince you of is that if you were previously planning on letting yourself be miserable t

... (read more)

I was watching the recent DealBook Summit interview with Elon Musk, and he said the following about OpenAI (emphasis mine):

the reason for starting OpenAI was to create a counterweight to Google and DeepMind, which at the time had two-thirds of all AI talent and basically infinite money and compute. And there was no counterweight. It was a unipolar world. And Larry Page and I used to be very close friends, and I would stay at his house, and I would talk to Larry into the late hours of the night about AI safety. And it became apparent to me that Larry [Pag

... (read more)

By the time Musk (and Altman et al.) started OA, it was already a response to Page buying Hassabis. So there is no real contradiction between being spurred by Page's attitude and treating Hassabis as the specific enemy. It's not as if Page was personally overseeing DeepMind (or Google Brain) research projects, and Page quasi-retired about a year after the DM purchase anyway (and about half a year before OA officially became a thing).

"Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025)."


I stumbled upon this quote in a recent Economist article [archived] about OpenAI. I couldn't find any other good source supporting the claim, so it may not be accurate. The earliest mention of it I could find is from January 17th, 2023, although that one only talks about OpenAI "proposing" the rule change.

If true, this would make the profit cap less meaningful, es... (read more)
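If the quoted rule is accurate, the compounding is easy to sketch. A toy calculation, taking the 100x baseline and the 20%/year rise from 2025 from the quote above:

```python
# How a 100x profit cap grows if it rises 20% per year starting in 2025
# (assumes the Economist's description is accurate; purely illustrative).
cap = 100.0
for year in range(2025, 2041):
    cap *= 1.20
    if year % 5 == 0:
        print(f"{year}: cap ≈ {cap:,.0f}x")
# 2025: 120x, 2030: 299x, 2035: 743x, 2040: 1,849x
```

On those assumptions the cap roughly doubles every four years, so within a couple of decades it would stop binding in any practical sense.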

I've talked to some people who are involved with OpenAI secondary markets, and they've broadly corroborated this.

One source told me that after a specific year (they didn't say which), the cap can increase 20% per year, and that the company can further adjust the cap as it fundraises.

trevor1
8d
As of January 2023, the institutional markets were not predicting AGI within 30 years.