I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of my website's content gets cross-posted to the EA Forum, but I also write about some non-EA topics there.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
Agreed that extreme power concentration is an important problem, and this is a solid writeup.
Regarding ways to reduce risk: My favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to ensure it's safe]. This option is notably absent from the article's list, which is unfortunate because it's IMO the best idea. I wrote more about my views here, and about how I wish people would stop ignoring this option.
Good note. Also worth keeping in mind the base rate of companies going under. FTX committing massive fraud was weird, but a young, fast-growing, unprofitable company blowing up was decidedly predictable, and IMO the EA community was banking too hard on FTX money being real.
Plus the planning fallacy, i.e., if someone says they want to do something by some date, then it'll probably happen later than that.
My off-the-cuff guess is
The responsible thing to do is to go look at the balance of what experts in a field are saying, and in this case, they're fairly split
This is not a crux for me. I think if you were paying attention, it was not hard to be convinced that AI extinction risk was a big deal in 2005–2015, when the expert consensus was something like "who cares, ASI is a long way off." Most people in my college EA group were concerned about AI risk well before ML experts were concerned about it. If today's ML experts were still dismissive of AI risk, that wouldn't make me more optimistic.
SF, Berkeley, and the South Bay (San Jose/Palo Alto area) all have pretty different climates. Going off my memory:
It's true that SF is usually cloudy, but that's not the case for the whole Bay Area. Berkeley/Oakland is sunny more often than not.
"EA" isn't one single thing with a unified voice. Many EAs have indeed denounced OpenAI.
As an EA: I hereby denounce OpenAI. They have greatly increased AI extinction risk. The founding of OpenAI is a strong candidate for the worst thing to happen in history (time will tell whether this event leads to human extinction).
Wouldn't this sort of reasoning also say that FTX was justified in committing fraud if they could donate users' money to global health charities? They metaphorically conscripted their users to fight against a great problem. People in the developed world failed to coordinate to fund tractable global health interventions, and FTX attempted to fix this coordination problem by defrauding them.
(I don't think that's an accurate description of what FTX did, but it doesn't matter for the purposes of this analogy.)