Ozzie Gooen

11622 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences: 1 (Ambitious Altruistic Software Efforts)

Comments: 1116

Topic contributions: 4

~~It seems like recently (say, in the last 20 years) inequality has been rising.~~ (Edited, in response to comments.)

Right now, the wealthiest 0.1% of people in the world are holding on to a very large amount of capital.

(I think this is connected to the fact that certain kinds of inequality have increased in the last several years, but I realize now my specific crossed-out sentence above led to a specific argument about inequality measures that I don't think is very relevant to what I'm interested in here.)

On the whole, it seems like the wealthy donate incredibly little (a median of less than 10% of their wealth), and recently they've been good at keeping their money from getting taxed.

I don't think people are getting less moral, but I think it should be appreciated just how much power and wealth is now in the hands of the ultra-wealthy, and how little of value they are doing with it.

Every so often I discuss this issue on Facebook or elsewhere, and I'm often surprised by how much sympathy people in my network have for these billionaires (not the most altruistic few, but this group on the whole). I suspect a lot of this comes from [experience responding to many mediocre claims from the far left] and [living in an ecosystem where the wealthy class can subtly use its power to gain status with the intellectual class].

The top 10 known billionaires easily hold $1T between them. I'd guess that all EA-related donations in the last 10 years total less than around $10B. (GiveWell says it has helped move $2.4B.) Ten years ago, I assumed that as word got out about effective giving, many more rich people would start doing it. At this point it's looking less optimistic. I think the world has quite a bit more wealth, more key problems, and more understanding of how to deal with them than ever before, but this still hasn't been enough to make much of a dent in effective donation spending.

At the same time, I think it would be a mistake to assume this area is intractable. While it might not have improved much, in fairness, there has been little dedicated and smart effort to improve it. I am very familiar with programs like The Giving Pledge and Founders Pledge. While these are positive, I suspect they absorb limited total funding (<$30M/yr, for instance). They also follow one particular, highly cooperative strategy. Most people working in this area are in positions where they need to be highly sympathetic to a lot of these people, which means I think there's a gap in more cynical or confrontational thinking.

I'd be curious to see a wide variety of ideas explored here.

In theory, if we could move these people from donating, say, 3% of their wealth to, say, 20%, I suspect that could unlock enormous global wins, dramatically more than anything EA has achieved so far. It doesn't even have to go to particularly effective places: even ineffective efforts could add up, if enough money is thrown at them.
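
To make the scale concrete, here's a rough back-of-envelope sketch in Python. The ~$1T and ~$10B figures are the loose estimates from above, and the 3% and 20% rates are the hypothetical ones in this paragraph, not careful data:

```python
# Back-of-envelope: what different donation rates from the top 10
# billionaires would unlock, using the rough ~$1T estimate above.
top10_wealth = 1e12  # ~$1T (loose estimate)

for rate in (0.03, 0.20):
    donated = top10_wealth * rate
    print(f"{rate:.0%} of wealth donated -> ${donated / 1e9:,.0f}B")

# 3%  -> ~$30B; 20% -> ~$200B.
# For comparison, the guess above puts all EA-related donations
# over the last 10 years at under ~$10B.
```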

Of course, this would have to be done gracefully. It's easy to imagine a situation where the ultra-wealthy freak out and attack all of EA or similar. I see work to curtail factory farming as very analogous, and expect that a lot of EA work on that issue has broadly taken a sensible approach here. 

From The Economist, on "The return of inheritocracy"

> People in advanced economies stand to inherit around $6trn this year—about 10% of GDP, up from around 5% on average in a selection of rich countries during the middle of the 20th century. As a share of output, annual inheritance flows have doubled in France since the 1960s, and nearly trebled in Germany since the 1970s. Whether a young person can afford to buy a house and live in relative comfort is determined by inherited wealth nearly as much as it is by their own success at work. This shift has alarming economic and social consequences, because it imperils not just the meritocratic ideal, but capitalism itself.

> More wealth means more inheritance for baby-boomers to pass on. And because wealth is far more unequally distributed than income, a new inheritocracy is being born.

 

I'm in favor of exploring interesting areas, and broadly sympathetic to there being more work in this area. 

I'd quickly note that the "megaproject" framing seems distracting to me. The phrase really made sense in a very narrow window of time when EAs were flush with cash, and/or for very specific projects that truly need that scale. But in general, "megaproject" is an anti-pattern.

Yeah, I definitely have this in my head when thinking about how to run the EA Forum. But I haven't made a commitment to personally run the site for five years (I'm not a commitment sort of person in general). Maybe that means I'm not a good fit for this role?

I want to quickly flag that this sounds very wrong to me. In Oliver's case, he was the CEO of that org, and if he had left then, I think it's very likely the organization would have died.

In comparison, I think CEA is in a much more robust place. There's a different CEO, and it's an important enough organization that I'd expect that if the CEO left, there would be sufficient motivation to replace that person with someone at least decent.

I think it would be nice for CEA to make some commitments here. At the very least, if the Forum were at serious risk of closing within a few years, I assume many people here would want to know (and start migrating to other solutions). But I think CEA can make those commitments without you having to be personally committed.

I was thinking of Disagreeing.

On one hand, I'm very supportive of more people doing open-source development on things like this.

On the other, I think some people might think, "It's open-source, and our community has tech people around. Therefore, people could probably do the maintenance work for free."

From experience, it's incredibly difficult to actually get useful open-source contributors, especially for the long-term maintenance of apps that aren't extraordinarily interesting and popular. So it can be a nice thing to encourage, but it should be a tiny part of big-picture strategic planning.

Quick thoughts:

  1. I appreciate the write-up and transparency.
  2. I'm a big fan of engineering work. At the same time, I realize it's expensive, and it seems like we don't have much money to work with these days. I think this makes it tricky to find situations where it's clearly a good fit with the existing donors.
  3. Bigger-picture, I imagine many readers here would have little idea of what "new engineering work" would really look like. It's tough to do a lot with a tiny team, as you point out. I could imagine some features helping the forum, but would also expect many changes to be experimental.
  4. "Everyone going to the Reddit thread, at once" seems doomed to me, as you point out. But I'd feel better about gradual things. Maybe we could have someone try moderating Reddit for a few months, and see if we can make it any better first. "Transitioning the EA Forum" could come very late, only if we're able to show good success on a smaller scale.
  5. That said, I'm skeptical of Reddit as a primary forum. I don't know of other smart, academically aligned groups that have really made it official infrastructure for themselves. Subreddits often seem like branches of the overall Reddit community, which is quite separate from the EA community, so it would be difficult to find the slice that we want. I'd feel better about other paid forum providers, if we go the route of shutting down the EA Forum.
  6. I think that the EA Discords/Slacks could use more support. Perhaps we shouldn't try to have "One True Platform", but have a variety of platforms that work with different sets of people.
  7. As I think about it, I think it's quite possible that many of the obvious technical improvements for the EA Forum, at this point, won't translate nicely to user growth. It's just very hard to make user growth happen, especially after a few years of tech improvements.
  8. I think the EA Forum has major problems with scaling, and that this is a hard tech problem. It's hard to cleanly split the community into sub-communities (I know there have been some attempts here). So right now we have the issue that we can, to some extent, only have one internet community, and this scares a bunch of people away.
  9. Personally, what feels most missing to me in EA's online ecosystem is leadership/communication about the big issues, some smart and effective moderation (this is really tough), and experimentation with online infrastructure outside the EA Forum (see Discords, online courses, online meetups, maybe new online platforms, etc.). I think there's a lot of work to do here, but I'd flag that it's likely pretty hit-or-miss, maybe making it a more difficult ask for funders.

Anyway, this was just my quick take. Your team obviously has a lot more context. 

I'm overall appreciative of the team and of the funders who have supported it this long.

I went back-and-forth on this topic with Claude. I was hoping that it would re-derive my points, but getting it to provide decent criticism took a bit more time than I was expecting. 

That said, I think with a few prompts (like asking it what it thought of those specific points), it was able to be useful. 

https://claude.ai/share/00cbbfad-6d97-4ad8-9831-5af231d36912

Happy to see genuine attempts at this area. 

> We’re seeking feedback on our cost-effectiveness model and scaling plan

The cost-effectiveness you mentioned is incredibly strong, which made me suspicious: "$5 per income doubling" is a remarkably strong claim.

I've worked in software for most of my professional life. Going through this more, I'm fairly skeptical of the inputs to your model.

  1. Good web applications are a ton of work, even if they reuse AI in some way. I have a hard time picturing how much you could really get for $2M in most settings. (Unless perhaps the founding team is working at a huge pay cut or something, but then this would change the effective cost-effectiveness.)
  2. I don't see much discussion of marketing/distribution expenses. I'd expect these to be high.
  3. The AI space is rapidly changing. This model takes advantage of recent developments, but doesn't seem to assume there will be huge changes in the future. If there are, the math changes a lot. I'd expect a mix of [better competition], [the tool quickly becoming obsolete], and [the employment landscape changing so much that the income-doubling estimate becomes suspect].
  4. You mention the results of academic studies, but my impression is that you don't yet have scientific results from people using your specific app. I'd be very skeptical of how much you can generalize from those studies. I'd naively expect it to be difficult to motivate users to actually spend much time in the app.

In the startup world, business models in the very early stages of development are treated with tremendous suspicion. I think we have incredibly large uncertainty bounds (with lots more probability of failure), until we see some more serious use. 
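
To illustrate how wide those uncertainty bounds are, here's a minimal sensitivity sketch in Python. The function and every number in it are hypothetical stand-ins I've invented, not the authors' actual model:

```python
# Hypothetical sensitivity check: how "dollars per income doubling"
# moves as a few key assumptions change. All inputs are made up.

def cost_per_doubling(build_cost, marketing_cost, users, doubling_rate):
    """Total spend divided by the number of users whose income doubles."""
    doublings = users * doubling_rate
    return (build_cost + marketing_cost) / doublings

# Optimistic inputs, roughly reproducing a "$5 per doubling" headline:
print(cost_per_doubling(2e6, 0, 2_000_000, 0.20))   # -> $5

# Same product with real marketing costs, fewer users, and weaker
# generalization from the studies (points 2 and 4 above):
print(cost_per_doubling(2e6, 4e6, 200_000, 0.05))   # -> $600
```

The point isn't these specific numbers; it's that the headline figure is roughly linear in several inputs that are each uncertain by an order of magnitude or more.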

Overall, this write-up reminds me of a lot of what I hear from early entrepreneurs. I like the enthusiasm, but I think it's a fair bit overoptimistic.

All that said, it's very possible this is still a good opportunity. In the early stages, one would expect a lot of experimentation and change to the specific product.

This seems great to me, kudos for organizing. I'm sure a bunch of people will be interested to see the outcome of this.

If it's successful, I imagine it could be scaled.

Similar to "Greenwashing" and "Safetywashing", I've been thinking about "Intellectual Washing."

The pattern works like this: "Find someone who seems like an intellectual and somewhat aligns with your position. Then claim you have strong intellectual (and, by extension, logical) support for your views."


This is easiest to see in sides that you disagree with.

For example, MAGA gets intellectual cred from "The dark enlightenment" / Curtis Yarvin / Peter Thiel / etc. But I'm sure Trump never listened to any of these people, and was likely barely influenced by them. [1]

Hitler famously claimed alignment with Nietzsche, and had support from Heidegger. Note that Nietzsche's actual views didn't support this, and I'd expect Hitler engaged very little with Heidegger's ideas.

There's a structural risk for intellectuals: their work can be appropriated not as a nuanced set of ideas to be understood, but as legitimizing tokens for powerful interests.

The dynamics that enable this include:
- The difficulty of making a living or gaining attention as a serious thinker
- Public resource/interest constraints around complex topics
- The ready opportunity to be used as a simple token of support for pre-existing agendas

Note: There's a long list of types of "X-washing." There's an interesting discussion to be had about the best terminology for this area, but I suspect most readers won't find it particularly interesting. One related concept is "selling out," as when an artist with street cred pairs up with a large brand/label or similar.

[1] While JD Vance might represent some genuine intellectual influence, and Thiel may have achieved specific narrow technical implementations, these appear relatively minor in the broader context of policy influence.

I assumed it had been mostly dead for a while (I haven't heard about it for a few months). I'm very supportive of it and would like to see it (and more like it) do well.
