This is a special post for quick takes by technicalities. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Inspired by Jaime's charming rundown of his quarterly(!) output, I'll put something up:

In 2021, I

  • led a study of mask-wearing for COVID at the most zoomed-out level (unpicking heavily confounded national epidemiology stats). This was one of the hardest things I've ever done for a few reasons: my first big Bayesian model, my first big journal paper, the worst peer review I've ever seen, the incredibly poor data, taking on a field I've never taken a class in, a mob of hooting trolls on Twitter.
  • This led to me advising the British government on winter covid policy wtf.
  • recovered from 5 months of that by coming out of pandemic mode. I travelled to Estonia, Czechia, Stonehenge, Iceland, and did my first ever trip to the east coast of America. Saw my family for the first time in 2 years.
  • won an Emergent Ventures grant despite my application being fairly deranged
  • got into a conference, my first AI safety paper (a negative result)
  • won a cybercrime hackathon run with the Dutch Serious Crime Unit
  • taught at two amazing maths camps for teenagers. This was probably the single best thing all year.
  • a blogpost from last year blew up and earned me three job offers (Roam, Neuro, CEA?) and an invite to write for Nature. Some people actually in the field adopted and expanded it.
  • started an EA consultancy, Arb, with a friend. We got three big contracts, and have finished 4 subprojects so far, watch this space.
  • got rejected for an Amazon Research Internship within 4 hours
  • got rejected for the Vitalik AI Safety Fellowship, no reason given.
  • got rejected for the GovAI Summer Fellowship. No reason given, but it might be because my proposal was a little edgy: "Mediocre AI Safety As Existential Risk".
  • couldn't find a venue for our seasonality paper somehow
  • got my first EA grant, to help with executive dysfunction in EA students
  • made a bunch of friends and was adopted as an Irish neoliberal(?)
  • quit caffeine and booze entirely (from low levels)
  • did a bunch of reviews for the AI Safety Camp. The standard is pretty intense now
  • tried vyvanse and wellbutrin
  • turned off all morning alarms and wake whenever
  • finally got some crypto and ended up 10x in 5 months
  • got a laptop for ~free because Lenovo's website was broken
  • Currently doing 3 months at the FTX Bahamas thing and have suspended my PhD. It is pretty amazing.

This seems overwhelmingly awesome, congrats and I hope you are doing great in the Bahamas.

As a small point, and a sincere question, I'm curious about the "personal framework" or beliefs that led you to stop consuming even low levels of caffeine and alcohol, but at the same time start or try the medications you indicated.

I'm curious because some people I've met who forswear alcohol and caffeine would also oppose the personal use of many medications.

To be clear, I find any combination of abstinence/use of any of those 4 things fine (and not my business unless openly discussed).

Thanks Charles!

My reasoning about caffeine is here. For common genomes, I expect it to have no chronic cognitive benefit and to harm sleep quality for basically no gain. I think I'm one of those genomes. Nor do I get the pleasure or motivation others seem to. (The same reasoning probably applies to all stimulants.) Might get into fancy loose-leaf tea one day, but just for fun.

No particular reasoning about booze. Certainly not puritanism. The alleged health benefits fell apart (or rather the credibility of the field studying it did), I don't much like it, and luckily my social life doesn't need the help.

When reading up for Off Road I started to wonder if maybe I am mildly ADHD myself. I opted for the House MD method of diagnosis: suck it and see.

I should mention that some clever friends of mine try "stimulant cycling" instead of quitting caffeine entirely. This might avoid the downregulation trap.

Wow, sounds like an amazing year!

What's the standard for AI Safety Camp these days?

I should have said "median" (supply-side: participants just being really good) rather than "standard" (our setting a high bar).

Bunch of ML PhD students and people whose writing I seriously admired before they applied.

This year is interesting cos we tried hard to get non-ML people to join. We've got a pro Continental philosopher coming for instance!

Looks like we have a cost-saving way to prevent 7 billion male chick cullings a year.

I snipe at accelerationist anti-welfarists in the thread, but it's an empirical question whether removing horrifying parts of the horrifying system ends up delaying abolition and being net-harmful. It seems extremely unlikely (and assumes that one-shot abolition is possible) but I haven't modelled it.

So happy to see this new longtermist fellowship running in Kenya.

I liked your post! But I don't find the claim that Ramsey was the first "explicit" longtermist very plausible. The quote about discounting being "ethically indefensible and arises merely from the weakness of the imagination" echoes points made earlier by other economists, e.g. Pigou:

Generally speaking, everybody prefers present pleasures or satisfactions of given magnitude to future pleasures or satisfactions of equal magnitude, even when the latter are perfectly certain to occur. But this preference for present pleasures does not -- the idea is self-contradictory -- imply that a present pleasure of given magnitude is any greater than a future pleasure of the same magnitude. It implies only that our telescopic faculty is defective, and that we, therefore, see future pleasures, as it were, on a diminished scale

This is from The Economics of Welfare, published when Ramsey was a teenager, and eight years before the essay in which the quote appears.

I was very unclear about what justifies that claim, pardon: 

Ramsey deriving the form of the intertemporal decision problem and then setting the discount rate to zero seems much clearer than Pigou (or Sidgwick, who waved in the direction of the position much earlier than either).

"First quantitative longtermist"? "First strong longtermist"?

Ah, right. Yes, regardless of what we call him, this is undoubtedly a significant milestone in the historical development of longtermism. (I'm not personally comfortable with calling Ramsey or anyone else the "first" [qualification] longtermist because I think longtermism involves multiple claims, not just an endorsement of a zero discount rate, although that claim is clearly a central one.)

I'd love to see more posts exploring early longtermist or proto-longtermist thinking!

it is good to omit doing what might perhaps bring some profit to the living, when we have in view the accomplishment of other ends that will be of much greater advantage to posterity.

 

- Descartes (1637)

On AI quietism. Distinguish four things:

  1. Not believing in AGI takeover.
  2. Not believing that AGI takeover is near. (Ng)
  3. Believing in AGI takeover, but thinking it'll be fine for humans. (Schmidhuber)
  4. Believing that AGI will extinguish humanity, but this is fine. 
    1. because the new thing is superior (maybe by definition, if it outcompetes us). 
    2. because scientific discovery is the main thing

(4) is not a rational lack of concern about an uncertain or far-off risk: it's lack of caring, conditional on the risk being real.

Can there really be anyone in category (4)?

  • Sutton: we could choose option (b) [acquiescence] and not have to worry about all that. What might happen then? We may still be of some value and live on. Or we may be useless and in the way, and go extinct. One big fear is that strong AIs will escape our control; this is likely, but not to be feared... ordinary humans will eventually be of little importance, perhaps extinct, if that is as it should be.
     
  • Hinton: "the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” As the scientists retreated to tables set up for refreshments, I asked Hinton if he believed an A.I. could be controlled. “That is like asking if a child can control his parents,” he said. “It can happen with a baby and a mother—there is biological hardwiring—but there is not a good track record of less intelligent things controlling things of greater intelligence.”


I expect this cope to become more common over the next few years.

(4) was definitely the story with Ben Goertzel and his "Cosmism". I expect some "a/acc" libertarian types will also go for it. But it is and will stay pretty fringe imo.

The ladder of EA weirdness

  1. Obligation to the global poor

  2. Obligation to farmed nonhumans

  3. Obligation to wild nonhumans

...

n. Obligation to potential humans and nonhumans

...

m. Obligation to take psychedelics / dissolve the self

o. Obligation to electrons

...

p. Obligation to acausally trade with those outside the light cone

q. Obligation to acausally trade with those elsewhere in the multiverse

r. Obligation to entities somewhere inside the universal prior

There is a vast amount of philosophical progress, but almost all of it happens outside philosophy. A jaw-dropping list, just on the topic of democracy, of things Rousseau's writing on democracy suffers from lacking:

  • "Historical experiences with developed democracies
  • Empirical evidence regarding democratic movements in developing countries
  • Various formal theorems regarding collective decision making and preference aggregation, such as the Condorcet Jury-Theorem, Arrow’s Impossibility-Results, the Hong-Page-Theorem, the median voter theorem, the miracle of aggregation, etc.
  • Existing studies on voter behavior, polarization, deliberation, information
  • Public choice economics, incl. rational irrationality, democratic realism"
  • ...

 

https://www.tandfonline.com/doi/full/10.1080/0020174X.2022.2124542
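One of the listed results, the Condorcet Jury Theorem, is easy to make concrete: if voters judge independently and each is right more often than not, the accuracy of a majority vote climbs toward certainty as the group grows. A minimal sketch (illustrative only; the function name and parameters are my own, not from the paper):

```python
from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the right answer (n odd avoids ties)."""
    # Binomial tail: more than half the voters are correct.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.6, a lone voter is right 60% of the time,
# but a 101-person majority is right almost always.
print(majority_correct_prob(1, 0.6))    # 0.6
print(majority_correct_prob(101, 0.6))  # close to 1
```

The same tail sum also shows the theorem's dark twin: with p below 0.5, larger groups get *worse*, which is why the independence and competence assumptions carry all the weight.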

Great epigraph!

Review of the New Yorker piece. It's a model of its type, for good and ill but mostly good. 

The good: The essence is correct. EA is now powerful enough that public scrutiny is fully justified. Lewis-Kraus engages with the ideas, and skips tabloid cheap shots. (The house style always involves little gossipy comments about fashion and eye colour, but here it's more about scruffy clothing than physical appearance). 

For instance, it's extremely easy to caricature utilitarianism. Certainly many professional philosophers do. But Lewis-Kraus chooses the neutral definition: no cavilling about hedonism, reductionism, Gradgrind, nor very much about honor. Similarly, AI risk is oddly underemphasised, and we all know how easy that is to piss on. 

The hypothesis of MacAskill's bad faith is entertained and rejected. So too with Bernard Williams' quietism: looked at and put back on the shelf. "perhaps one thought too few".

The bad: gossip and false balance. Girlfriends and buildings are named, needlessly, privacy and risk be damned. The dissident's gender is revealed for absolutely no reason. Journalists as a class have an underdeveloped sense of the risks they are exposing people to. The house style demands irrelevant detail, and apparently places style above potential impacts.

I can't help but admire the symbols he picks out of real life, even though they are the nonfiction equivalent of puns or entrail reading:

* Of xrisk research: "an Oxford building that overlooks a graveyard."

* "The room featured a series of ornately carved wooden clocks, all of which displayed contrary times; an apologetic sign read “Clocks undergoing maintenance,” but it was an odd portent for a talk about the future"

* "We passed People’s Park, which had become a tent city, but his eyes flicked toward the horizon."


Some risible bits:


> abandon the world view of the “benevolent capitalist” and, just as Engels worked in a mill to support Marx, to live up to its more thoroughgoing possibilities

Incredible. Engels ran a Manchester cotton mill and inherited a fifth of it; he was a benevolent capitalist!


> the chances of human extinction during the next century stand at about 1–6, or the odds of Russian roulette

That's not how odds work
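For the record: Russian roulette with one round in six chambers is a probability of 1/6, which is odds of 1:5, whereas "1–6" reads as odds of 1:6, i.e. a probability of 1/7. A quick sanity check (hypothetical helper functions, just to show the conversion):

```python
def odds_against(p: float) -> float:
    """Odds against an event of probability p, as the N in 'N:1 against'."""
    return (1 - p) / p

def prob_from_odds_for(a: float, b: float) -> float:
    """Probability implied by odds of a:b in favour."""
    return a / (a + b)

# One round in a six-chamber revolver: probability 1/6.
print(odds_against(1 / 6))       # 5.0 -> 5:1 against, i.e. odds of 1:5 for

# Odds of 1:6 would instead imply a probability of 1/7.
print(prob_from_odds_for(1, 6))  # ~0.143, i.e. 1/7
```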


> It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists

jfc. If you worry that practitioners of a field are ignoring something, you're a crank and a trespasser. If you worry about the tail risks of your own field, you're suffering from convenient delusions of grandiosity.

The PR suspicion is funny ("Was MacAskill’s gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?"). GLK didn't mention any of this in his profile of Rothberg, a businessman with incentives and a presumably similarly sized filter on his speech. But mention consequentialism and suddenly everyone assumes you're a master at acting and a 4D chess player. But he was just primed for it by the dissident so nvm.


> I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.

Literally backwards. I find it much more emotionally difficult to contemplate x-risk than terrible but limited events.


But overall GLK is the real deal, as good as magazine writers get. See also him on Paige Harden and Scott Alexander.

TIL about the Utilitarian Fandom.

(Derives from old Felicifia, and so I guess Pablo wrote a lot of it.)

Several absurd things about this video, but we could learn a lot about delivery from it.

I want to save the world and - you know, money - money's great! I can't get enough money. And you know what I'm going to do with it? I'm going to buy wilderness areas with it!

Every single cent I get goes straight into conservation. And guess what Charles: I don't give a rip whose money it is, mate. I'll use it and I'll spend it on buying land.

Passion can make even bullet-biting instrumental harm sound noble and humane. 

(Obviously this is a symmetric weapon.)

Ben Franklin's diary included the daily exhortation to rise and work some "Powerful Goodness". Better name than Effective Altruism tbf.

Love this!

yeh i never liked the name 'effective altruism'

Thread for serious AI safety researchers who aren't longtermists

Gabriel 

Shoker

"Effective Accelerationism"

(Kent Brockman: I for one welcome our Vile Offspring.)

PlumX is an academic web-analytics service that tracks how papers are shared. It's mostly not very good, but they recently added Overton, which specifically scrapes occasions where a paper is cited in policy documents. This seems important!
