I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
I'm in favor of exploring interesting areas, and broadly sympathetic to there being more work in this area.
I'd quickly note that the framing of "megaproject" seems distracting to me. The phrase really made sense in a very narrow window of time when EAs were flush with cash, and/or for very specific projects that genuinely need that scale. But generally, "megaproject" is an anti-pattern.
Yeah, I definitely have this in my head when thinking about how to run the EA Forum. But I haven't made a commitment to personally run the site for five years (I'm not a commitment sort of person in general). Maybe that means I'm not a good fit for this role?
I want to quickly flag that this sounds very wrong to me. In Oliver's case, he was the CEO of that org, and if he had left then, I think it's very likely the organization would have died.
In comparison, I think CEA is in a much more robust place. It has a different CEO, and it's an important enough organization that I'd expect that, if the CEO left, there would be sufficient motivation to replace them with someone at least decent.
I think it would be nice for CEA to make some commitments here. At the very least, if the forum were at great risk of closing within a few years, I assume many people here would want to know (and start migrating to other solutions). But I think CEA can make these commitments without you having to be personally committed.
I was thinking of Disagreeing.
On one hand, I'm very supportive of more people doing open-source development on things like this.
On the other, I think some people might think, "It's open-source, and our community has tech people around. Therefore, people could probably do the maintenance work for free."
From experience, it's incredibly difficult to get useful open-source contributors, especially for long-term maintenance of apps that aren't extraordinarily interesting or popular. So it can be a nice thing to encourage, but it should be only a tiny part of big-picture strategic planning.
Quick thoughts:
Anyway, this was just my quick take. Your team obviously has a lot more context.
I'm overall appreciative of the team and of the funders who have supported it this long.
I went back and forth on this topic with Claude. I was hoping it would re-derive my points, but getting it to provide decent criticism took a bit more time than I expected.
That said, I think with a few prompts (like asking it what it thought of those specific points), it was able to be useful.
https://claude.ai/share/00cbbfad-6d97-4ad8-9831-5af231d36912
Happy to see genuine attempts at this area.
> We’re seeking feedback on our cost-effectiveness model and scaling plan
The cost-effectiveness you mentioned is incredibly strong, which made me suspicious. "$5 per income doubling" is a remarkably aggressive claim.
I've worked in software for most of my professional life. Going through this more, I'm fairly skeptical of the inputs to your model.
In the startup world, business models at very early stages of development are treated with tremendous suspicion. I think we should hold incredibly large uncertainty bounds here (with a lot more probability mass on failure) until we see some more serious use.
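As a sketch of what I mean: here's a quick Monte Carlo in Python with entirely made-up, hypothetical inputs (cost per user, engagement rate, effect size, and a chance of outright failure; none of these numbers come from your model). The point is only to show how a headline figure like "$5 per income doubling" behaves once early-stage uncertainty is priced in.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical inputs with deliberately wide, order-of-magnitude uncertainty.
# None of these numbers come from the model under discussion; they only
# illustrate how a point estimate behaves once early-stage risk is included.
cost_per_user = rng.lognormal(np.log(2.0), 0.7, N)   # $ spent per user reached
p_engage      = rng.beta(2, 8, N)                    # fraction of users who meaningfully engage
doublings     = rng.lognormal(np.log(0.5), 1.0, N)   # income doublings per engaged user
works         = rng.random(N) > 0.4                  # assume a 40% chance the program fails to scale

value_per_dollar = np.where(works, p_engage * doublings / cost_per_user, 0.0)
with np.errstate(divide="ignore"):
    cost_per_doubling = 1.0 / value_per_dollar       # infinite when the program fails

print("median $/doubling:", round(float(np.median(cost_per_doubling)), 1))
print("5th-95th percentile:", np.percentile(cost_per_doubling, [5, 95]))
```

With inputs this uncertain, the upper percentiles are unbounded (the cases where the program fails outright), and even the median can land an order of magnitude away from the point estimate. That's roughly the shape of estimate I'd expect at this stage.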
Overall, this write-up reminds me of a lot of what I hear from early entrepreneurs. I like the enthusiasm, but I think it's a fair bit overoptimistic.
All that said, it's very possible this is still a good opportunity. In the early stages, one would expect a lot of experimentation and change to the specific product.
Similar to "Greenwashing" and "Safetywashing", I've been thinking about "Intellectual Washing."
The pattern works like this: "Find someone who seems like an intellectual and who somewhat aligns with your position. Then claim you have strong intellectual (and, by extension, logical) support for your views."
This is easiest to see on sides you disagree with.
For example, MAGA gets intellectual cred from "the Dark Enlightenment" / Curtis Yarvin / Peter Thiel / etc. But I'm sure Trump never listened to any of these people and was likely barely influenced by them. [1]
Hitler famously claimed alignment with Nietzsche and had support from Heidegger. Note that Nietzsche's actual philosophy didn't support this (he died decades before the Nazis took power). And I'd expect Hitler engaged very little with Heidegger's ideas.
There's a structural risk for intellectuals: their work can be appropriated not as a nuanced set of ideas to be understood, but as legitimizing tokens for powerful interests.
The dynamics that enable this include:
- The difficulty of making a living or gaining attention as a serious thinker
- The public's limited attention and interest in complex topics
- The ready opportunity to be used as a simple token of support for pre-existing agendas
Note: There's a long list of types of "X-washing." There's an interesting discussion to be had about the best terminology for this area, but I suspect most readers won't find it particularly interesting. One related concept is "selling out," as when an artist with street cred pairs up with a large brand or label.
[1] While JD Vance might represent some genuine intellectual influence, and Thiel may have driven some specific, narrow technical implementations, these appear relatively minor in the broader context of policy influence.
~~It seems like recently (say, the last 20 years) inequality has been rising.~~ (Edited, following comments.) Right now, the top 0.1% of wealthy people in the world are holding on to a very large amount of capital.
(I think this is connected to the fact that certain kinds of inequality have increased in the last several years, but I realize now that my crossed-out sentence above led to an argument about inequality measures that isn't very relevant to what I'm interested in here.)
On the whole, it seems like the wealthy donate incredibly little (a median of less than 10% of their wealth), and recently they've been good at keeping their money from getting taxed.
I don't think that people are getting less moral, but I think it should be appreciated just how much power and wealth is in the hands of the ultra-wealthy now, and how little of value they are doing with it.
Every so often I discuss this issue on Facebook or other places, and I'm often surprised by how much sympathy people in my network have for these billionaires (not the most altruistic few, but these people on the whole). I suspect that a lot of this comes partially from [experience responding to many mediocre claims from the far-left] and [living in an ecosystem where the wealthy class is able to subtly use their power to gain status from the intellectual class.]
The top 10 known billionaires easily hold $1T now. I'd guess that all EA-related donations in the last 10 years total less than around $10B. (GiveWell says they have helped move $2.4B.) Ten years ago, I assumed that as word got out about effective giving, many more rich people would start doing it. At this point, it's looking less optimistic. I think the world has quite a bit more wealth, more key problems, and more understanding of how to deal with them than ever before, but this still hasn't been enough to make much of a dent in effective donation spending.
At the same time, I think it would be a mistake to assume this area is intractable. While it might not have improved much, in fairness, there has been little dedicated and smart effort to improve it. I am very familiar with programs like The Giving Pledge and Founders Pledge. While these are positive, I suspect they absorb limited total funding (under $30M/yr, for instance) and follow one particular, highly cooperative strategy. I think most people working in this area are in positions where they need to be highly sympathetic to these donors, which means there's a gap for more cynical or confrontational thinking.
I'd be curious to see the exploration of a wide variety of ideas here.
In theory, if we could move these people from donating, say, 3% of their wealth to, say, 20%, I suspect that could unlock enormous global wins. Dramatically more than anything EA has achieved so far. It doesn't even have to go to particularly effective places; even ineffective efforts could add up if enough money is thrown at them.
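As a rough back-of-the-envelope (the $15T figure for total billionaire-class wealth is my loose assumption, not a sourced number, as are the donation rates):

```python
# Rough, assumption-heavy arithmetic; every figure here is an estimate.
total_wealth = 15e12    # assumed total billionaire-class wealth, USD
current_rate = 0.03     # assumed ~3% of wealth eventually donated
target_rate  = 0.20     # the hypothetical 20% scenario

unlocked = (target_rate - current_rate) * total_wealth
print(f"Additional giving unlocked: ${unlocked / 1e12:.1f}T")   # on the order of a few $T

# For scale: compare against my rough ~$10B guess for all EA-related donations to date.
print(f"Multiple of EA giving to date: {unlocked / 10e9:.0f}x")
```

Even with these crude inputs, the shift would unlock a few trillion dollars, hundreds of times all EA-related giving to date.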
Of course, this would have to be done gracefully. It's easy to imagine a situation where the ultra-wealthy freak out and attack all of EA or similar. I see work to curtail factory farming as very analogous, and expect that a lot of EA work on that issue has broadly taken a sensible approach here.
From The Economist, on "The return of inheritocracy":
> People in advanced economies stand to inherit around $6trn this year—about 10% of GDP, up from around 5% on average in a selection of rich countries during the middle of the 20th century. As a share of output, annual inheritance flows have doubled in France since the 1960s, and nearly trebled in Germany since the 1970s. Whether a young person can afford to buy a house and live in relative comfort is determined by inherited wealth nearly as much as it is by their own success at work. This shift has alarming economic and social consequences, because it imperils not just the meritocratic ideal, but capitalism itself.
> More wealth means more inheritance for baby-boomers to pass on. And because wealth is far more unequally distributed than income, a new inheritocracy is being born.