This is a special post for quick takes by Bentham's Bulldog. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I had an idea for a potential way for EA to gain more prominence among people who are likely to be productive in the movement. I do high school debate, and in high school debate, debaters go very deep into the literature around arguments that are useful in rounds. For example, Baudrillard is pretty obscure, yet debaters go incredibly in depth, doing hours and hours of research about Baudrillard specifically. The current topic in the event with the most in-depth research is: "The United States federal government should enact substantial criminal justice reform in one or more of the following areas: sentencing, policing, forensics." Given debate's obsession with branches of literature relevant to the topic, and criminal justice reform being an aim of the EA movement, this seems like a promising area for EA to gain traction. This has already happened to some degree with longtermist EA arguments: Bostrom's arguments about prioritizing existential threats, along with Pummer's and others', are ubiquitous in debate. If some EAs focused on producing literature about criminal justice reform, that could help increase our popularity. There are a few specific areas I thought of:

1. Writing about specific proposals that are desirable for the federal government specifically to enact. This would lead debaters to frequently cite proposals from EAs.

2. Writing about why reform is important. One prominent argument on this topic is that we shouldn't reform the system; we should opt only for abolition. EA could take a stance against this position and gain popularity, especially if there were really good evidence providing overwhelming empirical support for the effectiveness of reform. Or conversely, if the consensus of the empirical literature is that abolition is better than reform, EAs writing about the desirability of abolition could be effective in terms of being used in debates.

3. Arguments about the political impacts of criminal justice reform and how those could affect existential threats. Given the very broad topic, debaters often prepare generics that apply to most affs and go incredibly in depth on those. Some of the most common generics involve arguments over the political or electoral impact of criminal justice reform. To the extent that anyone writes about this, it will likely be read many times in debate.

Throughout these three areas, it will be important to reference other EA ideas, so that articles on these topics also serve as an entry point to the broader movement. This could have a real positive impact: I found out about EA through debate, and many debaters already know vaguely about EA, so they could be strongly influenced by EA ideas. In debate, utilitarianism is almost universally accepted as the framework for determining what should be done. A group of educated young utilitarians seems like a promising audience for EA to expand into. What do you all think?

I like this! I would recommend polishing it into a top-level post.

Have there been any efforts from EAs to look into increasing the speed of space colonization? It seems potentially desirable as a bulwark against existential threats.

Here's an EA Global talk on the subject. I find it uncompelling. It's extraordinarily expensive, and does little to protect against the X-risks I'm most concerned about, namely AI risk and engineered pandemics.

I recently did a debate with a critic of effective altruism. It has gotten a reasonable reception so far, so for those who think this is a useful EA activity, feel free to give it a share.
