This is a special post for quick takes by Ethan Beri. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I've recently been writing a long-form post, and I realised that it's taking a while. I was sort of struck by the thought: is everyone doing this? When I see people talk about spending too much time on the forum (and I usually don't - I think I've only seen two or three people say this), it's usually to do with doom scrolling, or something like that. But might people also be spending a lot of time just writing stuff? I'm not sure of the concrete opportunity cost here, but I'm sure there's some. I'm not especially well-versed in the "meta trap" thing - I think this was a debate people had before I got interested in EA - but it seems like this is one way it could (or does!) happen. Thoughts?

Yes, writing good stuff is hard.

It takes a lot of time, and is inadequately rewarded. Some people who write long-form stuff are exceedingly smart, so it's easier for them - which is why easier-to-write, at-best-shallowly-researched stuff is the norm.

I'm not really too concerned with quality - I'm just much more worried about time. Actually, I think a lot of stuff on EA Forum/LW is really good, and I've learned a lot here. It just seems like an awful lot of people write an awful lot of stuff, and I'm not really sure everyone needs to spend so long writing. That said, I'm not sure how you'd fix this other than implementing something like the existing quick takes feature.

Hey - I’d be really keen to hear people's thoughts on the following career/education decision I'm considering (esp. people who think about AI a lot):

  • I’m about to start my undergrad studying PPE at Oxford.
  • I’m wondering whether re-applying this year to study CS & philosophy at Oxford (while doing my PPE degree) is a good idea.
    • This doesn’t mean I have to quit PPE or anything. 
    • I’d also have to start CS & philosophy from scratch the following year.
  • My current thinking is that I shouldn’t do this - I think it’s unlikely that I’ll be sufficiently good to, say, get into a top 10 ML PhD or anything, so the technical knowledge that I’d need for the AI-related paths I’m considering (policy, research, journalism, maybe software engineering) is either pretty limited (the first three options) or much easier to self-teach and less reliant on credentials (software engineering).
    • I should also add that I’m currently okay at programming anyway, and plan to develop this alongside my degree regardless of what I do - it seems like a broadly useful skill that’ll also give me more optionality.
    • I do have a suspicion that I’m being self-limiting re the PhD thing - if everyone else is starting from a (relatively) blank slate, maybe I’d be on equal footing? 
      • That said, I also have my suspicions that the PhD route is actually my highest-impact option: I’m stuck between 1) deferring to 80K here, and 2) my other feeling that enacting policy/doing policy research might be higher-impact/more tractable.
      • They’re also obviously super competitive, and seem to only be getting more so.
  • One major uncertainty I have is whether, for things like policy, a PPE degree (or anything politics-y/economics-y) really matters. I’m a UK citizen, and given the record of UK politicians who did PPE at Oxford, it seems like it might?

What mistakes am I making here/am I being too self-limiting? I should add that (from talking to people at Oxford) I’ll have quite a lot of time to study other stuff on the side during my PPE degree. Thanks for reading this, if you’ve got this far! I’d greatly appreciate any comments.

Nudge to seriously consider applying for 80,000 hours personal advising if you haven't already: https://80000hours.org/speak-with-us/

My guess is they'd be able to help you think this through!

There are masters programs in the UK that take non-CS students. Anecdata from friends is that they've done PPE at Oxford then an Imperial CS Masters. 

After a quick google, I'm pleasantly surprised by how much this sort of thing seems to happen - thanks for the pointer!

To be clear you should still ask more people and look at the downstream effects on PhDs, research, etc. Again would echo the advice for 80k and reaching out to other people. 

I don't know enough about your situation to give a confident suggestion, but it sounds like you could benefit a lot from talking to 80k, if you haven't already! (Although it might take some time - I'm not sure about their current capacity.)

Hey, tough choice! Personally I’d lean towards PPE. Primarily that’s driven by the high opportunity cost of another year in school. Which major you choose seems less important than finding something you love and doing good work in it a year sooner.

Two other factors: First, you can learn AI outside of the classroom fairly well, especially since you can already program. I’m an economics major who’s taken a few classes in CS and done a lot of self-study, and that’s been enough to work on some AI research projects. Second, policy is plausibly more important for AI safety than technical research. There’s been a lot of government focus on slowing down AI progress lately, while technical safety research seems like it will need more time to prepare for advanced AI. The fact that you won’t graduate for a few years mitigates this a bit — maybe priorities will have changed by the time you graduate.

What would you do during a year off? Is it studying PPE for one year? I think a lot of the value of education comes from signaling, so without a diploma to show for it this year of PPE might not be worth much. If there’s a job or scholarship or something, that might be more compelling. Some people would suggest self-study, but I’ve spent time working on my own projects at home, and personally I found it much less motivating and educational than being in school or working.

Those are just my quick impressions, don’t lean too much on anyone (including 80K!). You have to understand the motivations for a plan for yourself in order to execute it well. Good luck, always happy to chat about stuff.

Hey, thanks for your comment! I hadn't really realised the extent to which someone can study full-time while also skilling up in research engineering - that definitely makes me feel more willing to go for PPE. 

Re your third paragraph, I wouldn't have a year off - it'd just be like doing a year of PPE, followed by three years of CS & philosophy. I do have a scholarship, and would do the first year of PPE anyway in case I didn't get into CS & phil.

Either way, your first point does point me more in the direction of just sticking with PPE :)

Ah okay, if it doesn't delay your graduation then I'd probably lean more towards CS. Self study can be great, but I've found classes really valuable too in getting more rigorous. Of course there's a million factors I'm not aware of -- best of luck in whichever you choose!

Would an AI governance book that covered the present landscape of gov-related topics (maybe like a book version of the FHI's AI Governance Research Agenda?) be useful?

We're currently at a weird point where there's a lot of interest in AI - news coverage, investment, etc. It feels weird not to be trying to shape the conversation on AI risk more than we are now. I'm well aware that this sort of thing can backfire, and that many people are wary of "politicising" issues like these, but it might still be a good idea.

If it was written by, say, Toby Ord - or anyone sufficiently detached from American left/right politics, with enough prestige, background, and experience with writing books like these - I feel like it might be really valuable.

It might also be more approachable than other books covering AI risk, like, say, Superintelligence. And it might seem a little more concrete, because it could cover scenarios that are more near-term and easier for most people to imagine - less "sci-fi".

Thoughts on this? 

I think this would be a great idea! I'd be curious to know whether someone is already working on something like this - and if not, it would be great to have it.

In my understanding, going from manuscript completion to publication probably takes 1-2 years. That's long enough for new developments in AI capabilities/regulations/treaties to come about, but worse, AI governance is a fast-growing academic field right now. I imagine the state-of-the-art in AI gov research/analysis frameworks could look quite different in a couple of years.
