Tobias Häberli

@ Pivotal Research
Good points, thank you!

They have incredibly short AGI timelines, so per their own views, they can't afford to move slowly. If they are giving less than 5% of assets after they already claim AGI, that's a huge failure.

Do we know whether this is true for the OAF board?[1] Sam Altman is on it, and he definitely believes something along these lines, but it's less clear for the others. Here are a ChatGPT and a Claude answer on this, which point towards the others being less bullish and less concerned (but also towards a lack of information about what they believe). I expect there to be a range of views on timelines and the transformativeness of AGI among the board members – which probably makes it more likely that their spending targets are compatible with the foundation's mission.

  1. ^

    Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, Sam Altman

It looks much nicer than the original imo. If I didn't have context, I'd probably be confused though.

Why 80,000 hours? And what is the pie chart / watch face analogy about? On first glance I’m not sure whether it’s about career choice, time management, life balance, or some '5pm' metaphor.

I looked at it in this order: (1) “80,000 hours”, (2) pie chart / watch face, trying to figure it out, (3) subtitle, (4) endorsement. But the subtitle and endorsement are doing most of the work of telling me what the book is actually about and whether it’s for me.

Maybe some of this is intended, to make people pick up the book and try to find answers. :) 

I agree it would be bad if the OpenAI Foundation were still giving under 5% per year several years from now. But I don’t think 'they should spend 5%+ in year one' follows.

Directing billions well is really hard, especially for a new foundation. Coefficient Giving says it directed over $4 billion from 2014 to mid-2025, and that 2025 was the first year it directed more than $1 billion. Their 'endowment' is much smaller (~10x smaller?) than OAF’s but it still points towards allocating money well at that scale being genuinely hard. I wouldn't call a new foundation planning to deploy $1 billion in its first year "conservative".

What I'd most like to see is OAF committing to an aggressive, public ramp-up target, maybe something like reaching 5% of assets by 2028.

No, sorry. The diamond emoji (🔸) is specifically for people who donate 10% of their earnings. 

But taking a 50% pay cut for altruistic reasons is incredibly based, so you should use the square emoji instead (🟧). It's also larger, which seems fitting.

Thanks, that's useful. I mostly agree with you, and mistakenly read the second bullet point as saying "work that opposes fascism should come from all sides of the political spectrum", which is something I agree with. I think the OP somewhat assumed that opposing fascism will look like 'work with your local anti-fascist network', but I expect much of it could look more like 'militarising Europe' (something the political left would typically oppose).

I'm curious to understand better where people disagree with this comment.

I don't think this quite works as a response to Alene's point. Many things are necessary/valuable preconditions for doing good. We need food, water, functioning infrastructure, preserving democracy, the internet, etc. The fact that something is a precondition for other work doesn't by itself make it a high-priority EA cause area.

If I apply the ITN framework to 'preserving democracy', I get something like:

  • Importance: Not losing democracy is very important. But losing it would arguably have been similarly catastrophic, say, 10 years ago. The question is how much the probability of losing it has actually increased. Even though the probability seems larger right now, I expect it to still be relatively small – but I'm uncertain.
  • Neglectedness: Very low. I agree with Alene's core point that it's one of the least neglected causes right now.
  • Tractability: I'd argue somewhat low, though I'm highly uncertain. There's little reason to believe there's lots of low-hanging fruit that hasn't been picked over decades and centuries of interest in making democracies stable.
    • It's also worth noting that much of the current concern is specifically about US democracy, which matters a lot (largest economy, major influence on the rest of the world, where AI is mostly going to be built) and where tractability is currently plausibly higher. But that's a narrower cause than 'preserving democracy' full stop (e.g. by reducing global democratic backsliding).

Thanks for this post – really would have liked having such a filter in the past.

We estimate that The Vegan Filter could cut the convenience barrier roughly in half by addressing the “supermarket barrier,” one of the largest friction points for new vegans.

Can you say more about why you estimate this to halve the convenience barrier?
I expect the effect to be much lower, maybe cutting the inconvenience of being vegan by 1–5%. The filter could still be worth the effort, of course :)

which I don't think Veganuary is.

Seems true. Looking at Google Trends, 'veganuary' is a lot less searched for than 'movember'.

And I'd suspect that 'movember' isn't all that well-known either. For example, compare it to Black History Month.

you are threatening not to care about a problem in the world because I made you uncomfortable

Is this directed at me? Because I didn't want to do this, and I don't see why you think I did this (like, I clearly never threatened not to care about a problem?).

If I take the way that you've used "you" in your post and in the comments here seriously, you've said a bunch of things that I believe are clearly not true:

you want me to beg you to please consider it as a favor [I don't want to do this]

 

I know your arguments in and out. [we've never talked about this together]

 

you don’t care about finding out what is right [I actually do]

 

Now it’s about working at an AI lab or wishing you could work at an AI lab. [I don't wish to do that]

 

I’m already beating you and you just define the game so that the conclusion of moving toward advocacy can’t win. [we've never played any games]

 

you’re tedious to deal with [this one is true, but it's incidental – and I'm not sure how you'd know this]
