This is a special post for quick takes by Arne. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Dear forum,

I was wondering whether the repugnant conclusion could be answered by an argument of the following form:

Consider planet Earth and a given happiness distribution among its inhabitants, with total happiness h. There is simply not enough space, resources, or anything else to let an arbitrarily large number of people n live with an average happiness epsilon such that n * epsilon > h. And at even larger scales, the observable universe is finite, so for the same reason a sufficiently large n cannot exist.
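To make the inequality concrete, here is a minimal Python sketch of the argument. Every number in it is invented purely for illustration, not an empirical estimate:

```python
# Toy version of the argument: the "repugnant" world needs n * epsilon > h,
# but if physical limits cap n, that inequality may be unsatisfiable.
# Every number here is made up purely for illustration.

h = 8e9 * 50.0    # total happiness today: ~8 billion people at average happiness 50
epsilon = 0.01    # barely-positive average happiness in the "repugnant" world
n_max = 1e12      # hypothetical physical cap on how many people can exist

n_required = h / epsilon  # n must exceed this for n * epsilon > h to hold
print(f"required n: {n_required:.3g}, assumed cap: {n_max:.3g}")
print("repugnant world physically realizable:", n_max > n_required)
# -> required n: 4e+13, assumed cap: 1e+12
# -> repugnant world physically realizable: False
```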

What do you think of such an argument?

I am not sure whether the nature of the repugnant conclusion is really affected by such an argument. Can you help me understand?

The repugnant conclusion is presented as an objection to certain views in population axiology. The claim is that, on those views, a possible world containing sufficiently many morally relevant beings whose lives are barely above neutrality is intrinsically better than a possible world containing a smaller number of beings who are all very happy. The claim is not that these worlds could become actual, so empirical considerations of the sort you describe aren't relevant for assessing the force of the objection.

Put differently, theories like total utilitarianism imply that the "repugnant" world would be better if it existed, and the objection is that this implication is implausible. The implausibility would remain even if it were shown that the "repugnant" world cannot exist.
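To illustrate the implication with made-up numbers: total utilitarianism scores a world by population times average welfare, so a vast barely-happy world outscores a smaller very happy one regardless of feasibility. A minimal Python sketch:

```python
# Toy total-utilitarian comparison; all numbers are made up for illustration.

def total_welfare(population: float, average_welfare: float) -> float:
    """Total utilitarianism scores a world by population * average welfare."""
    return population * average_welfare

world_a = total_welfare(10e9, 100.0)  # 10 billion very happy people
world_z = total_welfare(1e13, 0.5)    # 10 trillion barely-happy people

# The ranking holds whether or not world Z could ever actually exist.
print(world_z > world_a)  # True: 5e12 > 1e12
```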

Thank you very much; you put into words what I could not. Your answer gave me not only the assurance that my doubts were justified, but also some confidence to ask more questions of this kind. Thank you.

And thank you as well for the short but helpful answer. Seeing that this thought of mine is relevant to philosophy also gives me confidence in that kind of thinking.

Btw, we have some friends in common that I am aware of: EdoArad -> (Shay ben moshe) -> Amit -> Arne

^^ 

Cool! Through data science I guess? 

Yup, through effectivethesis precisely 

Why isn't the destruction of the patriarchy considered a cause area in Effective Altruism? A search of the 80,000 Hours website yields only two results, both of which are podcast transcripts, and it isn't listed among the problems in their cause prioritization list. Has this issue been investigated at all? If not, why not?

Could political concerns be a factor? If so, doesn’t that raise questions, given that cause neutrality is a core principle of Effective Altruism? What other reasons might explain its absence?

There are a lot of possible causes in the world. It's generally more productive to present a rough back-of-the-envelope calculation to suggest why you think a cause might plausibly be one of the most cost-effective ways of improving the world, rather than jumping straight to casting aspersions on others.
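As a hypothetical illustration of what such a back-of-the-envelope calculation might look like (every figure below is a placeholder, not an estimate for any real cause):

```python
# Hypothetical back-of-the-envelope template; every number is a placeholder.

budget = 1_000_000    # dollars a funder might allocate
cost_per_unit = 500   # dollars per unit of the intervention delivered
effect_per_unit = 0.02  # e.g. QALYs gained per unit delivered

units = budget / cost_per_unit
total_effect = units * effect_per_unit
dollars_per_qaly = budget / total_effect

print(f"{total_effect:.0f} QALYs at ${dollars_per_qaly:,.0f} per QALY")
# -> 40 QALYs at $25,000 per QALY
```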
