peterhartree

3476 karma · Joined · Working (6-15 years) · Reykjavik, Iceland
pjh.is

Bio

Now: TYPE III AUDIO

Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, etc.

Before that: My CV.

Side-projects: Inbox When Ready; Radio Bostrom; The Valmy; Comment Helper for Google Docs.

Comments: 277

Topic contributions: 4

I also don't see any evidence for the claim of EA philosophers having "eroded the boundary between this kind of philosophizing and real-world decision-making".

Have you visited the 80,000 Hours website recently?

I think that effective altruism centrally involves taking the ideas of philosophers and using them to inform real-world decision-making. I am very glad we’re attempting this, but we must recognise that this is an extraordinarily risky business. Even the wisest humans are unqualified for this role. Many of our attempts are 51:49 bets at best—sometimes worth trying, rarely without grave downside risk, never without an accompanying imperative to listen carefully for feedback from the world. And yes—diverse, hedged experiments in overconfidence also make sense. And no, SBF was not hedged anything like enough to take his 51:49 bets—to the point of blameworthy, perhaps criminal negligence.

A notable exception to the “we’re mostly clueless” situation is: catastrophes are bad. This view passes the “common sense” test, and the “nearly all the reasonable takes on moral philosophy” test too (negative utilitarianism is the notable exception). But our global resource allocation mechanisms are not taking “catastrophes are bad” seriously enough. So, EA—along with other groups and individuals—has a role to play in pushing sensible measures to reduce catastrophic risks up the agenda (as well as the sensible disaster mitigation prep).

(Derek Parfit’s “extinction is much worse than 99.9% wipeout” claim is far more questionable—I put some of my chips on this, but not the majority.)

As you suggest, the transform function from “abstract philosophical idea” to “what do” is complicated and messy, and involves a lot of deference to existing norms and customs. Sadly, I think that many people with a “physics and philosophy” sensibility underrate just how complicated and messy the transform function really has to be. So they sometimes make bad decisions on principle instead of good decisions grounded in messy common sense.

I’m glad you shared the J.S. Mill quote.

…the beliefs which have thus come down are the rules of morality for the multitude, and for the philosopher until he has succeeded in finding better

EAs should not be encouraged to grant themselves practical exception from “the rules of morality for the multitude” if they think of themselves as philosophers. Genius, wise philosophers are extremely rare (cold take: Parfit wasn’t one of them).

To be clear: I am strongly in favour of attempts to act on important insights from philosophy. I just think that this is hard to do well. One reason is that there is a notable minority of “physics and philosophy” folks who should not be made kings, because their “need for systematisation” is so dominant as to be a disastrous impediment for that role.

In my other comment, I shared links to Karnofsky, Beckstead and Cowen expressing views in the spirit of the above. From memory, Carl Shulman is in a similar place, and so are Alexander Berger and Ajeya Cotra.

My impression is that more than half of the most influential people in effective altruism are roughly where they should be on these topics, but some of the top “influencers”, and many of the “second tier”, are not.

(Views my own. Sword meme credit: the artist currently known as John Stewart Chill.)

I've no experience writing questions for prediction markets. With that caveat: something like that question sounds good.

Ideally I'd like to see the 1-year analysis run in 2026Q1.

Notably, in that video, Garry is quite careful and deliberate with his phrasing. It doesn't come across as excited, offhand hype. Paul Buchheit nods as he makes the claim.

Cool, thanks. With that source, I agree it's correct to say that Garry Tan has claimed that "YC batches are the fastest growing in their history because of generative AI" for the summer 2024, autumn 2024 and winter 2025 batches.

Have you noticed him making a similar claim for earlier batches?

Thanks for the post. Of your caveats, I'd guess 4(d) is the most important:

Generative AI is very rapidly progressing. It seems plausible that technologies good enough to move the needle on company valuations were only developed, say, six months ago, in which case it would be too early to see any results.

Personally, things have felt very different since o3 (April) and, for coding, the Claude 4 series (May).

Anthropic's run-rate revenue went from $1B in January to $5B in August.

This post misquotes Garry Tan. You wrote (my emphasis):

Y Combinator CEO Garry Tan has publicly stated that recent YC batches are the fastest growing in their history because of generative AI.

But Garry's claim was only about the winter 2025 batch. From the passage you cited:

The winter 2025 batch of YC companies in aggregate grew 10% per week, he said.

“It’s not just the number one or two companies -- the whole batch is growing 10% week on week”

Thanks for reporting this.

I've just fixed the narration.

A long overdue thank you for this comment.

I looked into this, and there is in fact some evidence that less expressive voices are easier to understand at high speed. This factor influenced our decision to stick with Ryan for now.

The results of the listener survey were equivocal. Listener preferences varied widely, with Ryan (our existing voice) and Echo tied at the top, but not by a statistically significant margin.[1]

Given that, we plan to stick with Ryan for now. Two considerations that influence this call, independently of the survey result:

  1. There's a habituation effect, such that switching costs for existing hardcore listeners are significant.
  2. There's some evidence that more expressive voices are less comprehensible at high listening speeds. Ryan is less expressive than the other voices we tested.

We'll survey again—or perhaps just switch based on our judgement—when better models are released.



[1] Above a basic quality threshold, people's preferences for human narrators also seem to vary wildly. We've found that the same human narrators elicit feedback that can be characterised as "love letters" and "hate mail"—in surprisingly similar proportions.
