Ozzie Gooen

11,369 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences (1)

Ambitious Altruistic Software Efforts

Comments (1083) · Topic contributions (4)

~~It seems like recently (say, the last 20 years) inequality has been rising.~~ (Edited, based on comments.)

Right now, the top 0.1% of wealthy people in the world are holding on to a very large amount of capital.

(I think this is connected to the fact that certain kinds of inequality have increased in the last several years, but I realize now my specific crossed-out sentence above led to a specific argument about inequality measures that I don't think is very relevant to what I'm interested in here.)

On the whole, it seems like the wealthy donate incredibly little (a median of less than 10% of their wealth), and recently they've been good at keeping their money from getting taxed.

I don't think that people are getting less moral, but I think it should be appreciated just how much power and wealth is now in the hands of the ultra-wealthy, and how little of value they are doing with it.

Every so often I discuss this issue on Facebook or other places, and I'm often surprised by how much sympathy people in my network have for these billionaires (not the most altruistic few, but these people on the whole). I suspect that a lot of this comes partially from [experience responding to many mediocre claims from the far-left] and [living in an ecosystem where the wealthy class is able to subtly use their power to gain status from the intellectual class.]

The top 10 known billionaires easily hold $1T between them now. I'd guess that all EA-related donations in the last 10 years have totaled less than around $10B. (GiveWell says it has helped move $2.4B.) 10 years ago, I assumed that as word got out about effective giving, many more rich people would start doing it. At this point it's looking less optimistic. I think the world has quite a bit more wealth, more key problems, and more understanding of how to deal with them than it ever had before, but still this hasn't been enough to make much of a dent in effective donation spending.

At the same time, I think it would be a mistake to assume this area is intractable. While it might not have improved much, in fairness, I think there has been little dedicated and smart effort to improve it. I am very familiar with programs like The Giving Pledge and Founders Pledge. While these are positive, I suspect they absorb limited total funding (<$30M/yr, for instance). They also follow one particular, highly cooperative strategy. I think most people working in this area are in positions where they need to be highly sympathetic to a lot of these people, which means I think there's a gap when it comes to more cynical or confrontational thinking.

I'd be curious to see a wide variety of ideas explored here.

In theory, if we could move these people from donating, say, 3% of their wealth to, say, 20%, I suspect that could unlock enormous global wins - dramatically more than anything EA has achieved so far. It doesn't even have to go to particularly effective places; even ineffective efforts could add up, if enough money is thrown at them.
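
(For rough scale, using the figures above: an extra 17 percentage points of the top ten's ~$1T alone would be about $170B - an order of magnitude more than the ~$10B I'd guess EA has moved in total.)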

Of course, this would have to be done gracefully. It's easy to imagine a situation where the ultra-wealthy freak out and attack all of EA or similar. I see work to curtail factory farming as very analogous, and expect that a lot of EA work on that issue has broadly taken a sensible approach here. 

From The Economist, on "The return of inheritocracy"

> People in advanced economies stand to inherit around $6trn this year—about 10% of GDP, up from around 5% on average in a selection of rich countries during the middle of the 20th century. As a share of output, annual inheritance flows have doubled in France since the 1960s, and nearly trebled in Germany since the 1970s. Whether a young person can afford to buy a house and live in relative comfort is determined by inherited wealth nearly as much as it is by their own success at work. This shift has alarming economic and social consequences, because it imperils not just the meritocratic ideal, but capitalism itself.

> More wealth means more inheritance for baby-boomers to pass on. And because wealth is far more unequally distributed than income, a new inheritocracy is being born.

 

In ~2014, one major topic among effective altruists was "how to live for cheap."

There wasn't much funding, so it was understood that a major task for doing good work was finding a way to live with little money.

Money gradually increased, peaking with FTX in 2022.

Now I think it might be time to bring back some of the discussions about living cheaply.

Arguably, things looked better around the FTX era. EA and FTX both had strong brands for a while, and there were worlds in which the risk of failure was low.

I think it's generally quite tough to get this aspect right, though. I believe that, traditionally, charities are reluctant to get their brands associated with large companies, due to the risks/downsides. We don't often see partnerships between companies and charities (or, say, highly ideological groups) - I think one reason is that it's rarely in the interests of both parties.

Typically, companies want to tie their brands to the very top charities, if any. But now EA has a reputational challenge, so I'd expect that few companies/orgs want to touch "EA" as a thing.

Arguably, influencers are often a safer option - note that EA groups like GiveWell and 80k are already doing partnerships with influencers. As in, there's a decent variety of smart YouTube channels and podcasts that run advertisements for 80k/GiveWell. I feel pretty good about much of this.

Arguably, influencers are crafted in large part to be safe bets. As in, they're strongly incentivized not to go crazy, and they have limited risks to worry about (given that they represent very small operations).
 

I just had Claude make three attempts at what a version of the "Voice in the Room" chart might look like as an app, targeting AI policy. The app is clearly broken, but I think it can act as an interesting experiment.

Here the influencing parties are laid out in concentric rings, with lines connecting related organizations. There's also a lot of other information here.

I agree. 

I didn't mean to imply that your post suggested otherwise - I was just focusing on another part of this topic.

I mainly agree.

I was previously addressing Michael's more limited point: "I don't think government competence is what's holding us back from having good AI regulations, it's government willingness."

All that said, separately, I think that "increasing government competence" is often a good bet, as it just comes with a long list of benefits.

But if one believes that advanced AI will arrive soon, and that a major bottleneck is "getting the broad public to trust the US government more, in order to then encourage AI reform," that seems like a dubious strategy.

(Potential research project, curious to get feedback)

I've been thinking a lot about how to do quantitative LLM evaluations of the value of various (mostly-EA) projects.

We'd have LLMs give their best guesses at the value of various projects/outputs. These would be mediocre at first, but help us figure out how promising this area is, and where we might want to go with it.

The first idea that comes to mind is "Estimate the value in terms of [dollars, from a certain EA funder] as a [probability distribution]". But this quickly becomes a mess. I think this couples a few key uncertainties into one value. This is probably too hard for early experiments.

A more elegant example would be "relative value functions". This is theoretically nicer, but the infrastructure would be more expensive. It helps split up some of the key uncertainties, but would require a lot of technical investment.
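
To gesture at what I mean, here's a minimal sketch of a relative value function (everything here - item names, distributions, parameters - is made up for illustration, not an actual implementation). Each item's value is held as samples in arbitrary units, and only ratios between items are treated as meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Made-up example: each output's value is represented as samples
# (lognormal guesses here), in arbitrary units that only matter
# relative to each other.
value_samples = {
    "blog_post": rng.lognormal(mean=0.0, sigma=1.0, size=N),
    "research_paper": rng.lognormal(mean=1.5, sigma=1.2, size=N),
}

def relative_value(a: str, b: str) -> np.ndarray:
    """Distribution over 'how many times more valuable is a than b'."""
    return value_samples[a] / value_samples[b]

ratios = relative_value("research_paper", "blog_post")
print(f"median: {np.median(ratios):.1f}x, "
      f"90% CI: {np.percentile(ratios, 5):.1f}x to {np.percentile(ratios, 95):.1f}x")
```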

One option that might be interesting is asking for a simple rank order. "Just order these projects in terms of the expected value." We can definitely score rank orders, even though doing so is a bit inelegant.
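
For scoring, something standard like Kendall's tau would probably suffice - it compares a submitted ordering to a reference ordering by counting pairs ranked consistently vs. inconsistently. A quick sketch (the item names are hypothetical):

```python
from scipy.stats import kendalltau

def score_ordering(submission: list[str], reference: list[str]) -> float:
    """Kendall's tau between two orderings of the same items:
    1.0 = identical order, -1.0 = fully reversed."""
    position_in_ref = {item: i for i, item in enumerate(reference)}
    tau, _p_value = kendalltau(
        [position_in_ref[item] for item in submission],
        range(len(submission)),
    )
    return tau

# e.g. two of three pairs are ordered consistently with the reference:
print(score_ordering(["post_a", "post_c", "post_b"],
                     ["post_c", "post_a", "post_b"]))  # ~0.33
```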

So one experiment I'm imagining is:

  1. We come up with a list of interesting EA outputs. Say, a combination of blog posts, research articles, interventions, etc. From this, we form a list of maybe 20 to 100 elements. These become public.
  2. We then ask people to compete to rank these. A submission would be [an ordering of all the elements] and an optional [document defending their ordering].
  3. We feed all of the entries in (2) into an LLM evaluation system. This would come with a lengthy predefined prompt. It would take in all of the provided orderings and all the provided defenses. It then outputs its own ordering.
  4. We then score all of the entries in (2), based on how well they match the result of (3) (see the sketch after this list).
  5. The winner gets a cash prize. Ideally, all submissions would become public.
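
For concreteness, a rough sketch of steps 3 and 4, reusing `score_ordering` from above. Here `call_llm` is a stand-in for whatever LLM client would actually be used (not a real library function), and the prompt is a placeholder for the real, lengthy predefined one:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    raise NotImplementedError("plug in an actual LLM client here")

def judge_and_score(items: list[str], submissions: list[dict]) -> list[float]:
    # Step 3: show the LLM every submitted ordering and defense,
    # and ask for its own ordering of the item IDs.
    prompt = "Rank these items by expected value, best first, as comma-separated IDs.\n"
    prompt += f"Items: {', '.join(items)}\n"
    for sub in submissions:
        prompt += f"Ordering: {sub['ordering']}\nDefense: {sub.get('defense', '')}\n"
    llm_ordering = [item.strip() for item in call_llm(prompt).split(",")]

    # Step 4: score each entry by how closely it matches the LLM's ordering.
    return [score_ordering(sub["ordering"], llm_ordering) for sub in submissions]
```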

This is similar to this previous competition we did.

Questions:

1. "How would you choose which projects/items to analyze?"
One option could be to begin with a mix of well-regarded posts on the EA Forum. Maybe we keep things to a limited domain for now (just X-risk), but cover a spectrum of different karma levels.

2. "Wouldn't the LLM do a poor job? Why not humans?"
Having human judges at the end of this would add a lot of cost. It could easily make the project 2x as expensive. Also, I think it's good for us to learn how to use LLMs for evaluating these competitions, as it has more long-term potential.

3. "The resulting lists would be poor quality"
I think the results would be interesting, for a few reasons. I'd expect them to be better than what many individuals would come up with. I also think it's really important that we start somewhere. It's very easy to delay things until we have something perfect - and then for that to never happen.

Thanks for the responses!

SB-1047 was competently written (AFAICT). If we get more regulations at a similar level of competence, that would be reasonable.

Agreed

> Getting regulators on board with what people want seems to me to be the best path to getting regulations in place.

I don't see it as either/or. I agree that pushing for regulations is a bigger priority than AI in government. Right now the former is getting dramatically more EA resources, and I'd expect that to continue. But the latter is getting almost none, and that doesn't seem right to me.
 

> Suppose it turned out Microsoft Office was dangerous. Surely the fact that Office is so embedded in government procedures would make it less likely to get banned?

I worry we're getting into a distant hypothetical. I'd equate this to asking, "Given that the government is using Microsoft Office, is it likely to try to make sure that future versions of Microsoft Office are better - especially in a reckless way?"

Naively I'd expect a government that uses Microsoft Office to be one with a better understanding of the upsides and downsides of Microsoft Office.

I'd expect that most AI systems the government would use would be fairly harmless (in terms of the main risks we care about) - things a few years old (and thus heavily tested in industry), with less computing power than would be ideal, etc.

Relatedly, I think the US military has done good work on high-reliability software, due to its need for it. (Though this is a complex discussion, as it obviously does a mix of things.)

I've been thinking a lot about this broad topic and am very sympathetic. Happy to see it getting more discussion.

I think this post correctly flags how difficult it is to get the government to change. 

At the same time, I imagine there might be some very clever strategies to get a lot of the benefits of AI without many of the normal costs of integration.

For example:

  1. The federal government makes heavy use of private contractors. These contractors are faster to adopt innovations like AI.
  2. There are clearly some subsets of the government that matter far more than others. And there are some that are much easier to improve than others.
  3. If AI strategy/intelligence is cheap enough, most of the critical work could be paid for by donors. For example, imagine a think tank that uses AI to figure out the best strategies/plans for much of the government, with government officials free to pay attention to its output.

Basically, I think some level of optimism is warranted, and would suggest more research into that area.

(This is all very similar to previous thinking on how forecasting can be useful to the government.)

I think you (Michael Dickens) are probably one of my favorite authors on your side of this, and I'm happy to see this discussion - though I myself am more on the other side.

Some quick responses:
> I don't think government competence is what's holding us back from having good AI regulations, it's government willingness.

I think it can clearly be a mix of both. Right now we're in a situation where many people barely trust the US government to do anything. A major argument for why the US government shouldn't regulate AI is that it often messes up the things it tries to regulate. This is a massive deal in a lot of the back-and-forth I've seen on the issue on Twitter.

I'd expect that if the US government were far more competent, people would trust it to take care of many more things, including high-touch AI oversight. 

> Increasing government dependency on AI systems could make policy-makers more reluctant to place restrictions on AI development because they would be hurting themselves by doing so. This is a very bad incentive.

This doesn't seem like a major deal to me. Like, the US government uses software a lot, but I don't see them "funding/helping software development", even though I really think they should. If I were them, I would have invested far more in open-source systems, for instance.

My quick impression is that competent oversight and guidance of AI systems, carefully working through the risks and benefits, would be incredibly challenging, and I'd expect any human-led government to make gigantic errors at it. Even attempts to "slow down AI" could easily backfire if not done well. For example, I think Democratic attempts to increase migration in the last few years may have massively backfired.
