This is a special post for quick takes by Gavin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Inspired by Jaime's charming rundown of his quarterly(!) output, I'll put something up:

In 2021, I

  • led a study of mask-wearing for COVID at the most zoomed-out level (unpicking heavily confounded national epidemiology stats). This was one of the hardest things I've ever done for a few reasons: my first big Bayesian model, my first big journal paper, the worst peer review I've ever seen, the incredibly poor data, taking on a field I've never taken a class in, a mob of hooting trolls on Twitter.
  • This led to me advising the British government on winter covid policy wtf.
  • recovered from 5 months of that by coming out of pandemic mode. I travelled to Estonia, Czechia, Stonehenge, Iceland, and did my first ever trip to the east coast of America. Saw my family for the first time in 2 years.
  • won an Emergent Ventures grant despite my application being fairly deranged
  • got into a conference, my first AI safety paper (a negative result)
  • won a cybercrime hackathon run with the Dutch Serious Crime Unit
  • taught at two amazing maths camps for teenagers. This was probably the single best thing all year.
  • a blogpost from last year blew up and earned me three job offers (Roam, Neuro, CEA?) and an invite to write for Nature. Some people actually in the field adopted and expanded it.
  • started an EA consultancy, Arb, with a friend. We got three big contracts, and have finished 4 subprojects so far, watch this space.
  • got rejected for an Amazon Research Internship within 4 hours
  • got rejected for the Vitalik AI Safety Fellowship, no reason given.
  • got rejected for the GovAI Summer Fellowship. No reason given, but it might be because my proposal was a little edgy: "Mediocre AI Safety As Existential Risk".
  • couldn't find a venue for our seasonality paper somehow
  • got my first EA grant, to help with executive dysfunction in EA students
  • made a bunch of friends and was adopted as an Irish neoliberal(?)
  • quit caffeine and booze entirely (from low levels)
  • did a bunch of reviews for the AI Safety Camp. The standard is pretty intense now
  • tried vyvanse and wellbutrin
  • turned off all morning alarms and wake whenever
  • finally got some crypto and ended up 10x in 5 months
  • got a laptop for ~free because Lenovo's website was broken
  • Currently doing 3 months at the FTX Bahamas thing and have suspended my PhD. It is pretty amazing.

This seems overwhelmingly awesome, congrats and I hope you are doing great in the Bahamas.

As a small point, and a sincere question: I'm curious about the personal framework or beliefs that led you to stop consuming even low levels of caffeine and alcohol, but at the same time to start or try the medications you mentioned.

I'm curious because some people I've met who forswear alcohol and caffeine would also oppose the personal use of many medications.

To be clear, I find any combination of abstinence/use of any of those 4 things fine (and not my business unless openly discussed).

Thanks Charles!

My reasoning about caffeine is here. For common genomes, I expect it to have no chronic cognitive benefit and to harm sleep quality for basically no gain. I think I'm one of those genomes. Nor do I get the pleasure or motivation others seem to. (The same reasoning probably applies to all stimulants.) Might get into fancy loose-leaf tea one day, but just for fun.

No particular reasoning about booze. Certainly not puritanism. The alleged health benefits fell apart (or rather the credibility of the field studying it did), I don't much like it, and luckily my social life doesn't need the help.

When reading up for Off Road I started to wonder if maybe I am mildly ADHD myself. I opted for the House MD method of diagnosis: suck it and see.

I should mention that some clever friends of mine try "stimulant cycling" instead of quitting caffeine entirely. This might avoid the downregulation trap.

Wow, sounds like an amazing year!

What's the standard for AI Safety Camp these days?

I should have said "median" (supply-side: participants just being really good) rather than "standard" (our setting a high bar).

Bunch of ML PhD students and people whose writing I seriously admired before they applied.

This year is interesting cos we tried hard to get non-ML people to join. We've got a pro Continental philosopher coming for instance!

Looks like we have a cost-saving way to prevent 7 billion male chick cullings a year.

I snipe at accelerationist anti-welfarists in the thread, but it's an empirical question whether removing horrifying parts of the horrifying system ends up delaying abolition and being net-harmful. It seems extremely unlikely (and assumes that one-shot abolition is possible) but I haven't modelled it.

So happy to see this new longtermist fellowship running in Kenya.

I liked your post! But I don't find the claim that Ramsey was the first "explicit" longtermist very plausible. The quote about discounting being "ethically indefensible and arises merely from the weakness of the imagination" echoes points made earlier by other economists, e.g. Pigou:

Generally speaking, everybody prefers present pleasures or satisfactions of given magnitude to future pleasures or satisfactions of equal magnitude, even when the latter are perfectly certain to occur. But this preference for present pleasures does not -- the idea is self-contradictory -- imply that a present pleasure of given magnitude is any greater than a future pleasure of the same magnitude. It implies only that our telescopic faculty is defective, and that we, therefore, see future pleasures, as it were, on a diminished scale.

This is from The Economics of Welfare, published when Ramsey was a teenager, and eight years before the essay in which the quote appears.

I was very unclear about what justifies that claim, pardon: 

Ramsey deriving the form of the intertemporal decision problem and then setting the discount rate to zero seems much clearer than Pigou (or Sidgwick, who waved in the direction of the position much earlier than either).
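For concreteness, a sketch of the setup I mean (standard modern notation, not Ramsey's exact formulation): the planner chooses a consumption path $c(t)$ to maximize discounted utility, and the ethical move is setting the discount rate $\rho$ to zero.

```latex
\max_{c(\cdot)} \int_0^\infty e^{-\rho t}\, U\big(c(t)\big)\, dt,
\qquad \text{with } \rho = 0 .
```

Since the undiscounted integral can diverge, Ramsey instead minimized the shortfall from a "bliss" level $B$, i.e. $\int_0^\infty \big(B - U(c(t))\big)\, dt$, which converges along paths approaching bliss.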

"First quantitative longtermist"? "First strong longtermist"?

Ah, right. Yes, regardless of what we call him, this is undoubtedly a significant milestone in the historical development of longtermism. (I'm not personally comfortable with calling Ramsey or anyone else the "first" [qualification] longtermist because I think longtermism involves multiple claims, not just an endorsement of a zero discount rate, although that claim is clearly a central one.)

I'd love to see more posts exploring early longtermist or proto-longtermist thinking!

it is good to omit doing what might perhaps bring some profit to the living, when we have in view the accomplishment of other ends that will be of much greater advantage to posterity.


- Descartes (1637)

The ladder of EA weirdness

  1. Obligation to the global poor
  2. Obligation to farmed nonhumans
  3. Obligation to wild nonhumans

  n. Obligation to potential humans and nonhumans
  m. Obligation to take psychedelics / dissolve the self
  o. Obligation to electrons
  p. Obligation to acausally trade with those outside the light cone
  q. Obligation to acausally trade with those elsewhere in the multiverse
  r. Obligation to entities somewhere inside the universal prior

Review of the New Yorker piece. It's a model of its type, for good and ill but mostly good. 

The good: The essence is correct. EA is now powerful enough that public scrutiny is fully justified. Lewis-Kraus engages with the ideas, and skips tabloid cheap shots. (The house style always involves little gossipy comments about fashion and eye colour, but here it's more about scruffy clothing than physical appearance). 

For instance, it's extremely easy to caricature utilitarianism. Certainly many professional philosophers do. But Lewis-Kraus chooses the neutral definition: no cavilling about hedonism, reductionism, Gradgrind, nor very much about honor. Similarly, AI risk is oddly underemphasised, and we all know how easy that is to piss on. 

The hypothesis of MacAskill's bad faith is entertained and rejected. So too with Bernard Williams' quietism: looked at and put back on the shelf. "Perhaps one thought too few."

The bad: gossip and false balance. Girlfriends and buildings are named, needlessly, privacy and risk be damned. The dissident's gender is revealed for absolutely no reason. Journalists as a class have an underdeveloped sense of the risks they are exposing people to. The house style demands irrelevant detail, and apparently places style above potential impacts.

I can't help but admire the symbols he picks out of real life, even though they are the nonfiction equivalent of puns or entrail reading:

* Of xrisk research: "an Oxford building that overlooks a graveyard."

* "The room featured a series of ornately carved wooden clocks, all of which displayed contrary times; an apologetic sign read “Clocks undergoing maintenance,” but it was an odd portent for a talk about the future"

* "We passed People’s Park, which had become a tent city, but his eyes flicked toward the horizon."

Some risible bits:

> abandon the world view of the “benevolent capitalist” and, just as Engels worked in a mill to support Marx, to live up to its more thoroughgoing possibilities

Incredible. Engels ran a Manchester cotton mill and inherited a fifth of it; he was a benevolent capitalist!

> the chances of human extinction during the next century stand at about 1–6, or the odds of Russian roulette

That's not how odds work.
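The nitpick, made concrete: odds are the ratio p : (1 − p), so a probability of 1/6 corresponds to odds of 1:5, not "1–6". A two-line check, just to illustrate:

```python
def prob_to_odds(p: float) -> float:
    """Convert a probability to odds in favour, p / (1 - p)."""
    return p / (1.0 - p)

# A 1-in-6 probability gives odds of 0.2, i.e. 1:5 -- not "1-6".
print(prob_to_odds(1 / 6))
```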

> It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists

jfc. If you worry that practitioners of a field are ignoring something, you're a crank and a trespasser. If you worry about the tail risks of your own field, you're suffering from convenient delusions of grandiosity.

The PR suspicion is funny ("Was MacAskill’s gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?"). GLK didn't mention any of this in his profile of Rothberg, a businessman with incentives and a presumably similarly sized filter on his speech. But mention consequentialism and suddenly everyone assumes you're a master at acting and a 4D chess player. But he was just primed for it by the dissident so nvm.

> I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.

Literally backwards. I find it much more emotionally difficult to contemplate x-risk than terrible but limited events.

But overall GLK is the real deal, as good as magazine writers get. See also him on Paige Harden and Scott Alexander.

On AI quietism. Distinguish four things:

  1. Not believing in AGI takeover.
  2. Not believing that AGI takeover is near. (Ng)
  3. Believing in AGI takeover, but thinking it'll be fine for humans. (Schmidhuber)
  4. Believing that AGI will extinguish humanity, but this is fine. 
    1. because the new thing is superior (maybe by definition, if it outcompetes us). 
    2. because scientific discovery is the main thing

(4) is not a rational lack of concern about an uncertain or far-off risk: it's lack of caring, conditional on the risk being real.

Can there really be anyone in category (4) ?

  • Sutton: we could choose option (b) [acquiescence] and not have to worry about all that. What might happen then? We may still be of some value and live on. Or we may be useless and in the way, and go extinct. One big fear is that strong AIs will escape our control; this is likely, but not to be feared... ordinary humans will eventually be of little importance, perhaps extinct, if that is as it should be.
  • Hinton: "the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” As the scientists retreated to tables set up for refreshments, I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.”

I expect this cope to become more common over the next few years.

(4) was definitely the story with Ben Goertzel and his "Cosmism". I expect some "e/acc" libertarian types will also go for it. But it is and will stay pretty fringe imo.

TIL about the Utilitarian Fandom.

(Derives from old Felicifia, and so I guess Pablo wrote a lot of it.)

There is a vast amount of philosophical progress, but almost all of it happens outside philosophy. A jaw-dropping list, just on the topic of democracy, of things Rousseau lacked when writing about it:

  • "Historical experiences with developed democracies
  • Empirical evidence regarding democratic movements in developing countries
  • Various formal theorems regarding collective decision making and preference aggregation, such as the Condorcet Jury-Theorem, Arrow’s Impossibility-Results, the Hong-Page-Theorem, the median voter theorem, the miracle of aggregation, etc.
  • Existing studies on voter behavior, polarization, deliberation, information
  • Public choice economics, incl. rational irrationality, democratic realism"
  • ...
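As an aside, one of the listed results is easy to demonstrate. Under the Condorcet jury theorem's assumptions (independent voters, each correct with probability p > 1/2), majority accuracy rises toward 1 as the jury grows. A minimal sketch:

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """P(a majority of n independent voters is right), each right w.p. p; n odd."""
    k_min = n // 2 + 1  # smallest winning majority
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# Accuracy grows with jury size whenever p > 1/2:
for n in (1, 11, 101):
    print(n, round(majority_correct(0.6, n), 3))
```

With p = 0.6, majority accuracy is 0.6 at n = 1, about 0.75 at n = 11, and above 0.95 at n = 101, which is the theorem's point.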

Great epigraph!

Several absurd things about this video, but we could learn a lot about delivery from it.

> I want to save the world and - you know, money - money's great! I can't get enough money. And you know what I'm going to do with it? I'm going to buy wilderness areas with it!
>
> Every single cent I get goes straight into conservation. And guess what Charles: I don't give a rip whose money it is, mate. I'll use it and I'll spend it on buying land.

Passion can make even bullet-biting instrumental harm sound noble and humane. 

(Obviously this is a symmetric weapon.)

Ben Franklin's diary included the daily exhortation to rise and work some "Powerful Goodness". Better name than Effective Altruism tbf.

yeh i never like the name 'effective altruism'

Thread for serious AI safety researchers who aren't longtermists



"Effective Accelerationism"

(Kent Brockman: I for one welcome our Vile Offspring.)

PlumX is an academic web analytics service, looking at how papers are shared. It's mostly not very good, but they recently added Overton, which specifically scrapes the occasions a paper is cited in policy documents. This seems important!
