No, it would not, per the frame that makes the argument more compelling. As I said: "Secondly, they may be even more successful in building significantly more powerful AI, transforming the world. Obviously, the nonprofit would become far wealthier, but given OpenAI’s mandate, it also becomes irrelevant."
But within the first option, if they are actually more than doubling their value yearly (as implied by 100x in 6 years, which matches their current revenue growth continuing at its current rate), then if they give away $20 billion per year, starting from their current valuation of $150 billion, they end up giving away only a small fraction of their eventual endowment - about 13%. And in that case, given that it's hard to spend 13% of $150b effectively, it will be far harder to spend any large percentage of their $15 trillion endowment in later years!
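For concreteness, here's a minimal back-of-the-envelope sketch of that arithmetic. The constant 100^(1/6) ≈ 2.15x yearly growth rate and the end-of-year timing of donations are my simplifying assumptions, not anything OpenAI has committed to:

```python
# Back-of-the-envelope: if value grows 100x over 6 years (~2.15x/year),
# how much of the eventual endowment does $20B/year in giving represent?

GROWTH = 100 ** (1 / 6)  # yearly multiplier implied by 100x in 6 years
endowment = 150.0        # current valuation, in $B
donation = 20.0          # given away at the end of each year, in $B
foregone = 0.0           # year-6 compounded value of everything donated

for year in range(6):
    endowment = endowment * GROWTH - donation
    foregone = foregone * GROWTH + donation

print(f"Endowment after 6 years: ${endowment:,.0f}B")        # ~$13,285B
print(f"Compounded value of donations: ${foregone:,.0f}B")    # ~$1,715B
print(f"Share given away: {foregone / endowment:.0%}")        # ~13%
```

Without any giving, the endowment would be the full $15T; the $20B/year only forgoes about $1.7T of compounded value, i.e. roughly 13% of what they'd end up with anyway.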
To be clear about my views, I do support spending on local community orgs. As I wrote: "local organizations or those where I have personal affiliations or feel responsibilities towards are also important to me - but... this is conceptually separate from giving charity effectively, and as I mentioned, I donate separately from the 10% dedicated to charity."
I am not saying everyone is malicious, nor that no one cares - but belief fixation can happen when even a moderate, non-majority share of a population is incentivized to believe something is true. That dynamic is compatible with good motivations, and once people claim something is true, refuting it is about as hard as establishing the claim would have been in the first place.
Very briefly, it's unclear to me how much of the claimed impact of meta and community-building orgs is counterfactual. The incentives created here are quite solidly against any impartial analysis. Also, as I've argued before, as an almost deontological point, I'm uncomfortable with people funding their social circle and community, and counting what would otherwise be considered dues to community organizations as part of their 10% giving to effective charity.
It's pretty well established in the activist world, though, that it is often effective to pick one specific thing to get a "win" on, at the right time.
It may be well established, but given the incentives in that world, the belief wouldn't need to correlate with truth in order to have become well established.
Strong agree that absent new approaches the tailwind isn't enough - but it seems unclear that pretraining scaling doesn't have further to go, and it seems that current approaches using synthetic data and RL training to enhance one-shot performance still have room for significant improvement.
I also don't know how much room is left before we hit genius-level AGI or beyond - and at that point, even if we hit a wall, more scaling isn't required, as the timeline basically ends there.
"...the extinction scenario that Eliezer Yudkowsky has described. His scenario depends on the premise that AI systems could quickly develop advanced molecular nanotechnology capable of matching or even surpassing the sophistication of biological systems."
But that's not the claim he makes!
To quote:
The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
That's a really good point, thanks! Though if they don't have short timelines, it seems like they are being quite irresponsible as board members in not preventing Sam from making increasingly large bets on scaling. Of course, they might not be willing to cross him; the current board presumably learned the lesson from Ilya's ill-fated decision.
Also, you would need what are currently considered almost implausibly long timelines to think that spending more quickly doesn't make sense for them.