To be clear about my views, I do support spending on local community orgs. As I wrote: "local organizations, or those where I have personal affiliations or feel responsibilities towards, are also important to me - but this is conceptually separate from giving charity effectively, and as I mentioned, I donate separately from the 10% dedicated to charity."
I am not saying everyone is malicious, nor that no one cares. But belief fixation can happen when a moderate, non-majority proportion of a population is incentivized to believe something is true; this isn't incompatible with good motivations, and once people claim it's true, it is about as hard to refute as it would have been to establish the claim in the first place.
Very briefly, it's unclear to me how much of the claimed impact of meta and community-building orgs is counterfactual. The incentives here run quite solidly against any impartial analysis. Also, as I've argued before, as an almost deontological point, I'm uncomfortable with people funding their own social circle and community, and counting what would otherwise be considered dues to community organizations as part of their 10% giving to effective charity.
It's pretty well established in the activist world, though, that it is often effective to pick one specific thing to get a "win" on, at the right time.
It may be well established, but given the incentives in that world, it's unlikely that the belief would need to correlate with truth to have become well established.
Strong agree that absent new approaches the tailwind isn't enough - but it seems unclear that pretraining scaling doesn't have farther to go, and it seems that current approaches with synthetic data and training via RL to enhance one-shot performance have room left for significant improvement.
I also don't know how much room there is left until we hit genius-level AGI or beyond - and at that point, even if we hit a wall, more scaling isn't required, as the timeline basically ends.
the extinction scenario that Eliezer Yudkowsky has described. His scenario depends on the premise that AI systems could quickly develop advanced molecular nanotechnology capable of matching or even surpassing the sophistication of biological systems.
But that's not the claim he makes!
To quote:
The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
Mostly agree. I've been involved in local orgs a bit more than most people in EA, and grew up in a house where my parents were often serving terms on different synagogue and school boards, and my wife has continued her family's similar tradition - so I strongly agree that passionate alignment changes things - but even that rarely leads to boards setting the strategic direction.
I think a large part of this is that strategy is hard, as you note, and it's very high-context for orgs. I still wonder who is best placed to track priority drift, and how much we want boards to own the strategic direction; it would be easy, but I think very unhelpful, for the board to basically just do what Holden suggests and only be in charge of the CEO, because a lot of the value from a board is, or can be, its broader strategic views and different knowledge. For local orgs, that happens much more: leaders need to convince board members to do things or make changes, rather than acting on their own and getting vague approval from the board. As a last point, though, it seems hard to do much of this for small orgs. Overhead from the board is costly, and I don't know how much effort we should expect.
My board isn't the reason for the lack of clarity - and it certainly is my job to set the direction. I don't think any of them are particularly dissatisfied with the way I've set the org's agenda. But my conclusion is that I disagree somewhat with Holden's post that partly guided me in the past couple years, in that it's more situational, and there are additional useful roles for the board.
I'd find a breakdown informative, since the distribution, both across different frontier firms and between safety and non-safety work, seems really critical, at least in my view of the net impacts of a program. (Of course, none of this tells us counterfactual impact; the program might be moving people on net in either direction.)
To forestall an obvious objection: I do not endorse OpenAI's decision to use this structure, and there are many other problems with it. However, the above arguments should apply given the views they profess, which seems important.