Ozzie Gooen

11963 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
1149

Topic contributions
4

~~It seems like recently (say, the last 20 years) inequality has been rising.~~ (Editing, from comments)

Right now, the top 0.1% of wealthy people in the world are holding on to a very large amount of capital.

(I think this is connected to the fact that certain kinds of inequality have increased in the last several years, but I realize now my specific crossed-out sentence above led to a specific argument about inequality measures that I don't think is very relevant to what I'm interested in here.)

On the whole, it seems like the wealthy donate incredibly little (a median of less than 10% of their wealth), and recently they've been good at keeping their money from getting taxed.

I don't think that people are getting less moral, but I think it should be appreciated just how much power and wealth is in the hands of the ultra wealthy now, and how little of value they are doing with that.

Every so often I discuss this issue on Facebook or other places, and I'm often surprised by how much sympathy people in my network have for these billionaires (not the most altruistic few, but these people on the whole). I suspect that a lot of this comes partially from [experience responding to many mediocre claims from the far-left] and [living in an ecosystem where the wealthy class is able to subtly use their power to gain status from the intellectual class.]

The top 10 known billionaires easily have $1T now. I'd guess that all EA-related donations in the last 10 years have been less than around $10B. (GiveWell says they have helped move $2.4B.) 10 years ago, I assumed that as word got out about effective giving, many more rich people would start doing that. At this point it's looking less optimistic. I think the world has quite a bit more wealth, more key problems, and more understanding of how to deal with them than it ever had before, but still this hasn't been enough to make much of a dent in effective donation spending.

At the same time, I think it would be a mistake to assume this area is intractable. While it might not have improved much, in fairness, I think there has been little dedicated and smart effort to improve it. I am very familiar with programs like The Giving Pledge and Founders Pledge. While these are positive, I suspect they absorb limited total funding (<$30M/yr, for instance). They also follow one particular, highly cooperative strategy. I think most people working in this area are in positions where they need to be highly sympathetic to a lot of these people, which suggests there's a gap of more cynical or confrontational thinking.

I'd be curious to see the exploration of a wide variety of ideas here. 

In theory, if we could move from these people donating say 3% of their wealth, to say 20%, I suspect that could unlock enormous global wins. Dramatically more than anything EA has achieved so far. It doesn't even have to go to particularly effective places - even ineffective efforts could add up, if enough money is thrown at them.
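As a rough back-of-envelope, using the $1T top-10-billionaires figure above (the 3% and 20% rates are the illustrative numbers from this paragraph, not real estimates of current giving):

```python
# Toy back-of-envelope: how annual giving changes if a wealthy group moves
# from donating ~3% of its wealth to ~20%. The $1T base is the top-10
# figure cited above; the rates are illustrative assumptions.

top_wealth = 1_000_000_000_000  # ~$1T

def donated(wealth: float, rate: float) -> float:
    """Total donated at a given fraction of wealth."""
    return wealth * rate

low = donated(top_wealth, 0.03)   # ~$30B at 3%
high = donated(top_wealth, 0.20)  # ~$200B at 20%

print(f"At 3%:  ${low / 1e9:.0f}B")
print(f"At 20%: ${high / 1e9:.0f}B")
print(f"Difference: ${(high - low) / 1e9:.0f}B")
```

Even restricted to this one small group, the jump is on the order of $170B, versus the <$10B of total EA-related donations mentioned above.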

Of course, this would have to be done gracefully. It's easy to imagine a situation where the ultra-wealthy freak out and attack all of EA or similar. I see work to curtail factory farming as very analogous, and expect that a lot of EA work on that issue has broadly taken a sensible approach here. 

From The Economist, on "The return of inheritocracy"

> People in advanced economies stand to inherit around $6trn this year—about 10% of GDP, up from around 5% on average in a selection of rich countries during the middle of the 20th century. As a share of output, annual inheritance flows have doubled in France since the 1960s, and nearly trebled in Germany since the 1970s. Whether a young person can afford to buy a house and live in relative comfort is determined by inherited wealth nearly as much as it is by their own success at work. This shift has alarming economic and social consequences, because it imperils not just the meritocratic ideal, but capitalism itself.

> More wealth means more inheritance for baby-boomers to pass on. And because wealth is far more unequally distributed than income, a new inheritocracy is being born.

 

"Should EA develop any framework for responding to acute crises where traditional cost-effectiveness analysis isn't possible? Or is our position that if we can't measure it with near-certainty, we won't fund it - even during famines?"

This is tricky. I think that most[1] of EA is outside of global health/welfare, and much of this is incredibly speculative. AI safety is pretty wild, and even animal welfare work can be more speculative. 

GiveWell has historically represented much of the EA-aligned global welfare work. They've also seemed to cater to particularly risk-averse donors, from what I can tell. 

So an intervention like this is in a tricky middle-ground, where it's much less speculative than AI risk, but more speculative than much of the GiveWell spend. This is about the point where you can't really think of "EA" as one unified thing with one utility function. The funding works much more as a bunch of different buckets with fairly different criteria.

Bigger-picture, EAs have a very small sliver of philanthropic spending, which itself is a small sliver of global spending. In my preferred world we wouldn't need to be so incredibly ruthless with charity choices, because there would just be much more available. 

[1] In terms of respected EA discussions/researchers.

"Though I think AI is critically important, it is not something I get a real kick out of thinking and hearing about." 

-> Personally, I find a whole lot of non-technical AI content to be highly repetitive. It seems like a lot of the same questions are being discussed again and again with fairly little progress.

For 80k, I think I'd really encourage the team to focus a lot on figuring out new subtopics that are interesting and important. I'm sure there are many great stories out there, but I think it's very easy to get trapped into talking about the routine updates or controversies of the week, with little big-picture understanding. 

Thanks for writing this, and many kudos for your work with USAID. The situation now seems heartbreaking.

I don't represent the major funders. I'd hope that the ones targeting global health would be monitoring situations like these and figuring out if there might be useful and high-efficiency interventions.

Sadly there are many critical problems in the world and there are still many people dying from cheap-to-prevent malaria and similar, so the bar is quite high for these specific pots of funding, but it should definitely be considered. 

This feels highly targeted :) 

Noted, though! I find it quite difficult to make good technical progress, manage the nonprofit basics, and do marketing/outreach, with a tiny team. (Mainly just me right now). But would like to improve. 

We've recently updated the models for SquiggleAI, adding Sonnet 4.5, Haiku 4.5, and Grok Code Fast 1.

Initial tests show promising results, though probably not game-changing. Will run further tests on numeric differences.

People are welcome to use it (for free!). It's a bit unreliable, so feel free to run it a few times on each input.
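The "run it a few times" advice can be wrapped in a tiny retry helper. This is a generic sketch: `run_squiggle_ai` below is a hypothetical stand-in callable, not SquiggleAI's actual API.

```python
# Minimal retry sketch for an unreliable generator: run the same input up to
# a few times and keep the first non-empty result. The generator callable is
# a hypothetical placeholder, not SquiggleAI's real interface.
from typing import Callable, Optional

def first_success(generate: Callable[[str], Optional[str]],
                  prompt: str, attempts: int = 3) -> Optional[str]:
    """Call `generate` up to `attempts` times; return the first truthy result."""
    for _ in range(attempts):
        result = generate(prompt)
        if result:
            return result
    return None

# Demo with a stub that fails twice, then succeeds:
calls = {"n": 0}
def flaky_stub(prompt: str) -> Optional[str]:
    calls["n"] += 1
    return "ok" if calls["n"] >= 3 else None

print(first_success(flaky_stub, "estimate X"))  # -> ok
```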

https://quantifieduncertainty.org/posts/updated-llm-models-for-squiggleai/

Quickly -> I sympathize with these arguments, but I see the above podcast as practically a different topic.  Could be a good separate blog post on its own. 

This got me to investigate Ed Lee a bit. Seems like a sort of weird situation.

Good point about it coming from a source. But looking at that, I think that blog post had a similarly clickbait headline, though a more detailed one ("Anthropic faces potential business-ending liability in statutory damages after Judge Alsup certifies class action by Bartz").

The analysis in question also looks very rough to me. Like a quick sketch / blog post. 

I'd guess that if most readers here estimated, after some investigation, the chances that this will actually force the company to close down (or similar), they'd come out fairly minimal.

The article broadly seems informative, but I really don't like the clickbait headline. 

"Potentially Business-Ending"?

I did a quick look at the Manifold predictions. In this (small) market, there's a 22% chance given to "Will Anthropic be ordered to pay $1B+ in damages in Bartz v. Anthropic?" (note that even $1B would be far from "business-ending"). 
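As a toy sanity check on that number: a 22% chance of crossing the $1B threshold implies a floor of roughly $220M in expected damages from that bucket alone (ignoring the distribution above $1B, so this is only a lower bound):

```python
# Toy expected-value floor from the Manifold market cited above.
# P($1B+ damages) * $1B gives a lower bound on expected damages,
# since it ignores how much above $1B the payout could go.
p_over_1b = 0.22      # Manifold market probability, per the comment
damages_floor = 1e9   # the market's threshold

expected_floor = p_over_1b * damages_floor
print(f"Expected-damages floor: ${expected_floor / 1e6:.0f}M")  # -> $220M
```

Even as a floor, that figure is far from "business-ending" for a company at Anthropic's scale, which is part of why the headline reads as clickbait.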

And larger forecasts of the overall success of Anthropic have barely changed. 
