I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of its content gets cross-posted to the EA Forum, but I also write about some non-EA stuff there.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
Donating at the end of the year.
I used to donate mid-year for the reasons you gave. For the last couple of years I donated at the end of the year because the EA Forum was running a donation election in early December, and I wanted to publish my "where I'm donating" post shortly before the election, which meant not donating until after the post was out. But perhaps syncing with the donation election is less important than I thought, and I should publish and donate mid-year instead?
I’ve been a fool trying to influence people who are on the AI industry’s money and glory payroll. I’m going to take my own advice now, write you off, and focus on the moral majority who wants to protect the world.
I've donated $30,000 to PauseAI. Some of your past posts played a role in that, such as "The Case for AI Safety Advocacy to the Public" and "Pausing AI is the only safe approach to digital sentience." I don't think writing off people like me is a good idea.
You should all be ashamed of your complicity in bringing about potentially world-ending technology.
I am literally donating to PauseAI. I fully agree that some EAs are directly increasing x-risk by working on AI development, and they should stop doing that. But I don't think it's fair to paint all of us with that brush.
Wouldn't this sort of reasoning also say that FTX was justified in committing fraud if they could donate users' money to global health charities? They metaphorically conscripted their users to fight against a great problem. People in the developed world failed to coordinate to fund tractable global health interventions, and FTX attempted to fix this coordination problem by defrauding them.
(I don't think that's an accurate description of what FTX did, but it doesn't matter for the purposes of this analogy.)
Agreed that extreme power concentration is an important problem, and this is a solid writeup.
Regarding ways to reduce risk: My favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to ensure it's safe], a solution that is notably absent from the article's list. I wrote more here about my views and about how I wish people would stop ignoring this option. It's bad that the 80K article did not consider what is IMO the best idea.
Good note. Also worth keeping in mind the base rate of companies going under. FTX committing massive fraud was weird, but a young, fast-growing, unprofitable company blowing up was decidedly predictable, and IMO the EA community was banking too hard on FTX money being real.
Plus the planning fallacy, i.e., if someone says they want to do something by some date, then it'll probably happen later than that.
My off-the-cuff guess is
Something like that, yeah.
ASI would have a level of autonomy and goal-directedness that's unlike any previous technology. The case for caring about AI risk doesn't work if you take too much of an outside view; you have to reason about what properties ASI would have.