This is a special post for quick takes by Daniel_Eth. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Should there be an "EA Donation Index Fund" that allows people to simply "donate the market" (similar to how index funds like the S&P 500 allow for simply buying the market)? This fund would allocate donations to EA orgs in proportion to the total donations those orgs receive (from EA sources?) over the year (it would perhaps make sense to have a few such funds – one for EA as a whole, one for longtermism, one for global health and development, etc.).
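As a minimal sketch of the allocation rule (all org names and figures below are made up for illustration):

```python
# Hypothetical sketch: split the index fund's pool across orgs in
# proportion to each org's share of total donations over the year.

def allocate(fund_total: float, yearly_donations: dict[str, float]) -> dict[str, float]:
    """Allocate fund_total pro rata to each org's yearly donations."""
    total = sum(yearly_donations.values())
    return {org: fund_total * amount / total
            for org, amount in yearly_donations.items()}

# Illustrative (made-up) figures:
print(allocate(1_000_000, {"Org A": 6_000_000,
                           "Org B": 3_000_000,
                           "Org C": 1_000_000}))
# {'Org A': 600000.0, 'Org B': 300000.0, 'Org C': 100000.0}
```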

I see a few potential benefits:
• People who want to donate effectively (especially those wanting to diversify their donations) but who lack the knowledge/expertise/time/etc., and who for whatever reason don't necessarily trust EA funds to donate appropriately on their behalf, could do so. I suspect many people (including some on the periphery of EA) are currently holding back from donating for lack of a sense of how to donate best, so this might increase donations. I'd further expect the quality of donations from less knowledgeable donors to increase if they simply donated the market.
• Could have lower overhead and be more scalable than other funds.
• Aesthetically, I'd imagine this sort of setup might appeal to finance people, and finance people have a lot of money, so it may widen the pool of donors to EA.
• Index fund donations would effectively act as matching donations – if, for instance, half of all EA donations flowed through an EA index fund, then direct donations to specific charities would be matched by the fund shifting money toward those charities as well (at the expense, of course, of the other charities in the fund). This would arguably give direct donors a greater incentive to donate more, at least insofar as they thought they knew more than, or had better values than, the market – which would be their revealed preference anyway, given that they chose to donate directly rather than through the index fund. (See the sketch after this list.)
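To make the matching arithmetic concrete, here's a rough sketch (assuming the fund allocates pro rata to direct donations; all figures are made up):

```python
# Made-up figures: if the index fund allocates in proportion to direct
# donations, each direct dollar also shifts fund money toward the same
# charity, giving an effective match rate of fund_total / direct_total.

direct_total = 50_000_000  # direct (non-fund) EA donations per year
fund_total   = 50_000_000  # donations routed through the index fund

match_rate = fund_total / direct_total  # extra fund $ moved per direct $
print(f"Each direct $1 moves ${match_rate:.2f} of fund money")
# With half of all giving flowing through the fund, direct donations
# are effectively matched 1:1 (at other fund charities' expense).
```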

A good "default option" that might look like this (and some other similar ideas) is something we are looking at with GWWC.

How would you define which things were in the fund and which weren't?

Presumably someone (or a group) would have to create a list (potentially after creating an explicit set of criteria), and then the list would be updated periodically (say, yearly). 

How does that differ from the current funds (e.g., GiveWell's Maximum Impact Fund)?

If it's just going to match current giving, I wouldn't give to it myself, but I can imagine some people would like it, and it would be a pretty good fund – so fair enough, I guess.

I think you answered your own question? The index fund would just allocate in proportion to current donations, reducing both the overhead for fund managers and the need to trust the managers' judgement (other than for deciding which charities qualify in the first place). I'd imagine the value of the index fund would increase as EA grows and the number of manager-directed funds increases, since many individual donors wouldn't know which directed fund to give to, and the index fund would track donations as a whole, including those to directed funds.

Should the forum limit the number of strong (up/down) votes per person (say, per week)? Right now, people can use as many strong votes as they want, which somewhat decreases the signal they're intended to send (and also creates a bias in favor of those who "strategically" choose to overuse strong votes). Not sure if this is influencing the discourse at all but seems plausible.

Think this is a good idea.

I think it would be better if more EA job postings listed the salary range instead of simply saying "competitive wages". I honestly don't know whether "competitive wages" implies $80k/yr or $250k/yr (is it supposed to be competitive with SF tech salaries, with non-EA nonprofits, with other EA orgs, or with something else?). That's an incredibly wide range, which isn't much use to applicants.

Isn't this much worse than EA funds?

Not just EA funds – I think (almost?) all random, uninformed EA donations would be much better than donations to an index fund covering all charities on Earth.

I meant EA funds with a lowercase "f"
