
Could the latent effects of Covid hamper AI alignment efforts and/or other x-risk responses?

This is very much an 'I suspect (and hope) I'm wrong' question, but I thought it was still worth asking why this isn't generally seen as a major issue. Essentially: could the long-term, latent effects of Covid on cognitive performance significantly damage global responses to x-risks?

With studies finding cognitive decline and brain shrinkage after even mild Covid infections (including, in some severe cases, IQ drops larger than those seen in stroke patients), and with Omicron variants, though less deadly, apparently still causing greater brain apoptosis (of many previously healthy cells) than earlier variants, is it possible that mass infection is causing some level of general cognitive decline? And if this is happening, to some extent, to most people, would we even notice the extent of the decline?

If so, then even if the decline is small or negligible in most individual cases, and if the raw ability to handle cognitive complexity is an important input to effective political decision-making, could declines that are small (and therefore largely unnoticed) but occur en masse be enough to tip the balance of responses to existing x-risks for the worse?

Add in potential further declines from repeat infections and cumulative damage, and might key political decision-makers be responding to AI policy with unrecognised, biologically driven impairment during a crucial period for the field?

Equally, could this affect responses to other, perhaps previously more manageable risks? Take nuclear risk, with admittedly arbitrary numbers: if each year carried a 1% pre-Covid risk of nuclear war, and Covid-related cognitive decline shifted that to even something like 1.1% per year, a small increase like this would still be significant given how severe the outcome is.
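To make the compounding concrete, here's a minimal sketch using the admittedly arbitrary numbers above, and assuming (purely for illustration) that the annual risk is independent across years. Even a 0.1 percentage-point shift in annual risk adds roughly three percentage points to the chance of at least one nuclear war over a 50-year horizon:

```python
# Toy calculation: how a small shift in annual risk compounds over a horizon,
# assuming each year's risk is independent (a simplifying assumption).

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one event occurring over the horizon."""
    return 1 - (1 - annual_risk) ** years

for p in (0.010, 0.011):  # pre-Covid vs. hypothetical post-Covid annual risk
    print(f"annual risk {p:.1%} -> {cumulative_risk(p, 50):.1%} over 50 years")

# annual risk 1.0% -> 39.5% over 50 years
# annual risk 1.1% -> 42.5% over 50 years
```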

Counterpoints

As a potential counterpoint, perhaps Covid-related cognitive decline just isn't that serious. But with the long-term consequences of many repeat infections perhaps not yet showing up significantly, and with a reported 60% increased risk of developing a new mental illness after infection, perhaps the combination of raw intelligence decline and mental-health shifts is still worth considering?

Alternatively, perhaps population-level cognitive decline simply doesn't affect decision-making enough to be significant, or there are genuinely significant cognitive declines among key decision-makers, but these are being counterbalanced by other organisational, health, and tech improvements?

Future considerations

Finally, if Covid-related decline is a serious possibility across repeated, even seemingly mild, infections, might it even be helpful, most other things being equal, to draw key decision-makers and policy specialists disproportionately from those who have had fewer infections, or who appear genetically resistant to even a first infection?

My intuition is that something feels wrong or missing in this line of reasoning. But with AI regulation and alignment perhaps already being poorly managed by governments, could our efforts to avert a larger, existential crisis still be hampered by the lingering effects of our last global one?
