I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of the content there gets cross-posted to the EA Forum, but I also write about some non-EA stuff like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
Ah, I see what you're saying. I can't recall seeing much discussion on this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk, but I haven't really thought about it. It does sound like something that deserves more attention.
When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often frame them as risks from general AI, but they could come from narrow AI too. For example, some people have talked about the risk that humans could use narrow AI to develop dangerous engineered viruses.
I agree with David's comment. These sorts of ethical dilemmas are puzzles for everyone, not just for utilitarianism.
And in the case of insect welfare, rights-based theories produce more puzzling puzzles because it's unclear how to reckon with tradeoffs.
There is a related concern: most of the big funders either have investments in AI companies or have close ties to people who do. This biases them toward funding activities that won't slow down AI development. So the more effective an org is at putting the brakes on AGI, the harder a time it will have getting funded.*
Props to Jaan Tallinn, who is an early investor in Anthropic yet has funded orgs that want to slow down AI (including CAIP).
*I'm not confident that this is a factor in why CAIP has struggled to get funding, but I wouldn't be surprised if it was.
> In general, writing criticism feels more virtuous than writing praise.
FWIW it feels the opposite to me. Writing praise feels good; writing criticism feels bad.
(I guess you could say that it's virtuous to push through those bad feelings and write the criticism anyway? I don't get any positive feelings or self-image from following that supposed virtue, though.)
I think this is an important point that's worth saying.
For what it's worth, I am not super pessimistic about whether alignment can be solved in principle. But I'm quite concerned that the safety-minded AI companies seem to completely ignore the philosophical problems with AI alignment. They all operate under the assumption that alignment is purely an ML problem and that they can solve it by basically doing ML research, which I expect is false (credence: 70%).
Wei Dai has written some good stuff about the problem of "philosophical competence". See here for a collection of his writings on the topic.
The next existential catastrophe is likelier than not to wipe out all animal sentience on the planet
If an existential catastrophe occurs, it will probably (~90%) be caused by AI, and an AI that kills all humans would probably (~80%) also kill all sentient animals, which works out to roughly a 70% chance overall.
The argument against the AI killing all animals is that they are less likely than humans to interfere with its goals. The argument in favor is that they are made of atoms it could use for something else.
Edit: Updated downward a bit based on Denkenberger's comment.
What can ordinary people do to reduce AI risk? That is, people who don't have expertise in AI research, decision theory, policy, etc.
Some ideas: