This is a special post for quick takes by harfe. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
harfe

Consider donating all or most of your Mana on Manifold to charity before May 1.

Manifold is making multiple changes to how the platform works. You can read their announcement here. The main reason to donate now is that Mana will be devalued from the current rate of 1 USD : 100 Mana to 1 USD : 1,000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.

Also, this part might be relevant for people with large positions they want to sell now:

One week may not be enough time for users with larger portfolios to liquidate and donate. We want to work individually with anyone who feels like they are stuck in this situation and honor their expected returns and agree on an amount they can donate at the original 100:1 rate past the one week deadline once the relevant markets have resolved.

I just donated $65 to Shrimp Welfare Project :)

Sadly, it's even slightly worse than a 10x devaluation, because 1,000 Mana will redeem for only $0.95 to cover "credit card fees and administrative work".
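To make the arithmetic concrete, here is a minimal sketch in Python (the 10,000 Mana balance is a hypothetical example; the rates are taken from the announcement and the comment above) comparing what a balance is worth to charity before and after May 1:

```python
# Charity value of a Mana balance under the old and new redemption rates.
# Old rate: 100 Mana : 1 USD, i.e. $10.00 per 1,000 Mana.
# New rate: nominally 1,000 Mana : 1 USD, but only $0.95 per 1,000 Mana
# after "credit card fees and administrative work".

def charity_value(mana: int, usd_per_1000_mana: float) -> float:
    """USD that `mana` converts to at a given rate per 1,000 Mana."""
    return mana / 1000 * usd_per_1000_mana

balance = 10_000  # hypothetical Mana balance

before = charity_value(balance, usd_per_1000_mana=10.00)
after = charity_value(balance, usd_per_1000_mana=0.95)

print(f"Donated before May 1: ${before:.2f}")  # $100.00
print(f"Donated after May 1:  ${after:.2f}")   # $9.50
print(f"Effective devaluation: {before / after:.1f}x")  # 10.5x
```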

That Notion link doesn't work for me FYI :) But this one did (from their website)

Thanks for sharing this on the Forum! 
If you (the reader) have donated your mana because of this quick take, I'd love it if you put a react on this comment. 

[comment deleted]

Seems a shame. My understanding was they did good work.

More discussion at the forum post.

Based on the timing, how likely is it that this was a partial consequence of Bostrom's personal controversies?

I can't imagine it helped in winning allies in Oxford, but the relationship with the Faculty/University was already highly dysfunctional. (I was consulted as part of a review re: FHI's position within Oxford and various options, before said personal controversies.)

Thank you! I framed it as a question for this reason ❤️

Nick Bostrom's website now lists him as "Principal Researcher, Macrostrategy Research Initiative."

Doesn't seem like they have a website yet.

Except they should maximize confusion by calling it the "Macrostrategy Interim Research Initiative" ;)

I think I'm sympathetic to Oxford's decision.

By the end, the line between genuine scientific inquiry and activistic 'research' got quite blurry at FHI. I don't think papers such as 'Proposal for a New UK National Institute for Biological Security' belong in an academic institution, even if I agree with the conclusion.

For the disagree voters (I didn't agreevote either way) -- perhaps a more neutral way to phrase this might be:

Oxford and/or its philosophy department apparently decided that continuing to be affiliated with FHI wasn't in its best interests. It seems this may have developed well before the Bostrom situation. Given that, and assuming EA may want to have orgs affiliated with other top universities, what lessons might be learned from this story? To the extent that keeping the university happy might limit the org's activities, when is accepting that compromise worth it?

David T

I also didn't vote but would be very surprised if that particular paper - a policy proposal for a biosecurity institute in the context of a pandemic - was an example of the sort of thing Oxford would be concerned about affiliating with (I can imagine some academics being more sceptical of some of the FHI's other research topics). Social science faculty academics write papers making public policy recommendations on a routine basis, many of them far more controversial.

The postmortem doc says "several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received" which suggests it might be internal messaging that lost them friends and alienated people. It'd be interesting if there are any specific lessons to be learned, but it might well boil down to academics being rude to each other, and the FHI seems to want to emphasize it was more about academic politics than anything else.
