This is a special post for quick takes by tobyj. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I have now turned this diagram into an angsty blog post. Enjoy!

Pareto priority problems

I am really into writing at the moment and I’m keen to co-author forum posts with people who have similar interests.

I wrote a few brief summaries of things I'm interested in writing about (but very open to other ideas). 

Also very open to:

  • co-authoring posts where we disagree on an issue and try to create a steely version of the two sides!
  • being told that the thing I want to write has already been written by someone else

Things I would love to find a collaborator to co-write:

  • Comparing the Civil Service bureaucracy to the EA nebuleaucracy.
    • I recently took a break from the Civil Service to work on an EA project full time. It’s much better: less bureaucratic and less hierarchical. There are still plenty of complex hierarchical structures in EA though. Some of these are explicit (e.g. the management chain of an EA org or funder/fundee relationships), but most aren’t as clear. I think the current illegibility of EA power structures is likely fairly harmful and want more consideration of solutions (that increase legibility).
    • Semi-related thing I’ve already written: 11 mental models of bureaucracies
  • What is the relationship between moral realism, obligation-mindset, and guilt/shame/burnout?
    • Despite no longer buying moral realism philosophically, I deeply feel like there is an objective right and wrong. I used to buy moral realism and used this feeling of moral judgement to motivate myself a lot. I had a very bad time. 
    • People who reject moral realism philosophically (including me) still seem to be motivated by other, often more wholesome moral feelings, including towards EA-informed goals.
    • Related thing I’ve already written: How I’m trying to be a less "good" person.
  • Prioritisation panic + map-territory terror
    • These seem like the main EA neuroses - the fears that drive many of us.
    • I constantly feel like I’m sifting for gold in a stream, while there might be gold mines all around me. If I could just think a little harder, or learn faster, I could find them…
    • The distribution of value across different possible options is huge, and prioritisation seems to work. But you have to start doing things at some point. The fear that I’m working on the wrong thing is painful and constant and the reason I am here…
    • As with prioritisation, the fear that your beliefs are wrong is everywhere and is pretty all-consuming. False beliefs are deeply dangerous personally and catastrophic for helping others. I feel I really need to be obsessed with this.
    • I want to explore more feeling-focussed solutions to these fears.
  • When is it better to risk being too naive or too cynical?
    • Is the world super dog-eat-dog or are people mostly good? I’ve seen people all over the cynicism spectrum in EA. Going too far either way has its costs, but altruists might want to risk being too naive (and paying a personal cost) rather than too cynical (which has a greater external cost).
    • To put this another way: if you are unsure how harsh the world is, lean toward acting like you’re living in a less harsh world - there is more value for EA to take there. (I could do with doing some explicit modelling on this one)
    • This is kinda the opposite of the precautionary principle that drives x-risk work - so is clearly very context specific.
    • Related thing I’ve already written: How honest should you be about your cynicism? 

Re: cynicism, you might enjoy Hufflepuff cynicism.

I'd be interested to read a post you write regarding the illegibility of EA power structures. In my head I roughly view this as sticking to personal networks and resisting professionalism/standardization. In a certain sense, I want to see systems/organizations modernize.

A quote from David Graeber's book, The Utopia of Rules, seems vaguely related: "The rise of the modern corporation, in the late nineteenth century, was largely seen at the time as a matter of applying modern, bureaucratic techniques to the private sector—and these techniques were assumed to be required, when operating on a large scale, because they were more efficient than the networks of personal or informal connections that had dominated a world of small family firms."

When is it better to risk being too naive or too cynical?

That reminds me of what I read about game theory in Give and Take by Adam Grant (iirc). The conclusion was that the strategy which earned the most rewards was to behave cooperatively and only switch (to non-cooperation) about once every three times when the other player is uncooperative. The reasoning was that if you don't cooperate, the "selfish" won't either. But if you "forgive" and try to cooperate again after they weren't cooperative, you may sway them to cooperate too. You don't cooperate unconditionally, which would risk being too naive and taken advantage of, but you lean towards cooperating more often than not.
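As a rough illustration, here is a minimal sketch of that kind of "mostly cooperate, occasionally retaliate, quickly forgive" strategy in an iterated prisoner's dilemma. The payoff values and the one-in-three retaliation rate are assumptions for illustration, not figures from the book.

```python
import random

COOPERATE, DEFECT = "C", "D"

# Standard prisoner's-dilemma payoffs for the row player (illustrative assumption).
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,
    (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 5,
    (DEFECT, DEFECT): 1,
}

def generous_player(opponent_history):
    """Cooperate by default; if the opponent defected last round,
    retaliate only about one time in three, otherwise forgive."""
    if opponent_history and opponent_history[-1] == DEFECT:
        return DEFECT if random.random() < 1 / 3 else COOPERATE
    return COOPERATE

def always_defect(opponent_history):
    return DEFECT

def play(player_a, player_b, rounds=100):
    """Run an iterated game and return both players' total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = player_a(history_b)  # each player only sees the other's past moves
        move_b = player_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(generous_player, always_defect))
```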

If you are unsure how harsh the world is, lean toward acting like you’re living in a less harsh world - there is more value for EA to take there.

I'd be interested in reading more about this. I think a less cynical view would elicit more cooperation and goodwill due to likeability. I'm not sure this is the direction you're going, so that's why I'm curious about it.

I wanted to get some perspective on my life so I wrote my own obituary (in a few different ways).

They ended up being focussed on my relationship with ambition. The first is below and may feel relatable to some here!

Auto-obituary attempt one:

Thesis title: “The impact of the life of Toby Jolly”
a simulation study on a human connected to the early 21st century’s “Effective Altruism” movement

Submitted by:
Dxil Sind 0239β
for the degree of Doctor of Pre-Post-humanities
at Sopdet University 
August 2542

Abstract
Many (>500,000,000) papers have been published on the Effective Altruism (EA) movement, its prominent members and their impact on the development of AI and the singularity during the 21st century’s time of perils. However, this is the first study of the life of Toby Jolly, a relatively obscure figure who was connected to the movement for many years. Through analysing the subject’s personal blog posts, self-referential tweets, and career history, I was able to generate a simulation centred on the life and mind of Toby. This simulation was run 100,000,000 times with a variety of parameters and the results were analysed. In the thesis I make the case that Toby Jolly had, through his work, a non-zero, positively-signed impact on the creation of our glorious post-human Emperium (Praise be to Xraglao the Great). My analysis of the simulation data suggests that his impact came via a combination of his junior operations work and minor policy projects, but also his experimental events and self-deprecating writing.

One unusual way he contributed was by consistently trying to draw attention to how his thoughts and actions were so often the product of his own absurd and misplaced sense of grandiosity; a delusion driven by what he himself described as a “desperate and insatiable need to matter”. This work marginally increased self-awareness and psychological flexibility amongst the EA community. This flexibility subsequently improved the movement's ability to handle its minor role in the negotiations needed to broker power during the Grand Transition - thereby helping avoid catastrophe.

The outcomes of our simulations suggest that through his life and work Toby decreased the likelihood of a humanity-ending event by 0.0000000000024%. He is therefore responsible for an expected 18,600,000,000,000,000,000 quality adjusted experience years across the light-cone, before the heat-death of the universe (using typical FLOP standardisation). Toby mattered.

Ethics note: as per standard imperial research requirements, we asked the first 100 simulations of Toby if they were happy being simulated. In all cases, he said “Sure, I actually kind of suspected it… look, I have this whole blog about it”

See my other auto-obituaries here :)

I wrote up my career review recently! Take a look (also, did you know that Substack doesn't change the URL of a post, even if you rename it?!)
