
MichaelDickens

5847 karma
mdickens.me

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).

I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences (1)

Quantitative Models for Cause Selection

Comments (828)

What can ordinary people do to reduce AI risk? People who don't have expertise in AI research / decision theory / policy / etc.

Some ideas:

  • Donate to orgs that are working to reduce AI risk (which ones, though?)
  • Write letters to policy-makers expressing your concerns
  • Be public about your concerns. Normalize caring about x-risk

  1. This portfolio has nothing to do with chasing past performance. According to standard finance theory (which assumes rational investors, zero transaction costs, etc.), the global market portfolio is the theoretically optimal portfolio to hold. The idea is that if markets are efficient, you can't predict which asset classes will outperform, so you should just hold some of everything.
  2. This portfolio doesn't require any rebalancing. It's the global market portfolio (or it was as of 2015). If you have 18% in US stocks because they represent 18% of the global market portfolio, and then US stocks rise to 20% of the global market, your holdings also rise to 20%, so you don't have to do anything. It might still take some work to manage if you're adding money to the portfolio on a monthly basis (or whatever), because you need to deploy the new money in the correct proportions (see the sketch after this list).
  3. Realistically you can get pretty close to the global market portfolio by buying global stocks + global bonds and not worrying about the smaller positions. You can do a 2-fund portfolio with 50% VT, 50% BNDW.
  4. 1973 to 2013 isn't arbitrary. 1973 was chosen as the start year because that's the earliest date for which we have good data on global equity returns.
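
To make points 2 and 3 concrete, here's a minimal sketch in Python. The market caps, dollar amounts, and helper functions (`weights`, `deploy_contribution`) are made up for illustration; only the VT/BNDW tickers come from point 3, and this isn't investment advice or the actual math behind the 2015 figures. It shows (a) why cap-weighted holdings track the market without trading, and (b) how you might split a new contribution so your holdings stay at market weights.

```python
# Hypothetical illustration of a market-cap-weighted (i.e. global market) portfolio.
# All numbers are made up.

def weights(market_caps):
    """Portfolio weights implied by market caps (the market portfolio)."""
    total = sum(market_caps.values())
    return {asset: cap / total for asset, cap in market_caps.items()}

# Suppose these are the market caps (in $ trillions) covered by two broad funds.
caps = {"VT": 50.0, "BNDW": 50.0}
print(weights(caps))  # {'VT': 0.5, 'BNDW': 0.5}

# If stocks rally 20% while bonds are flat, the market caps and your holdings
# move together, so your weights still match the market's -- no trades needed.
caps_after = {"VT": 50.0 * 1.2, "BNDW": 50.0}
print(weights(caps_after))  # {'VT': ~0.545, 'BNDW': ~0.455}

def deploy_contribution(current_holdings, target_weights, contribution):
    """Split a new contribution so total holdings end up at the target weights.
    (Assumes the contribution is large enough that no sells are needed.)"""
    total_after = sum(current_holdings.values()) + contribution
    return {
        asset: target_weights[asset] * total_after - current_holdings[asset]
        for asset in target_weights
    }

# Deploying a $1,000 monthly contribution at current market weights.
holdings = {"VT": 6000.0, "BNDW": 5000.0}
print(deploy_contribution(holdings, weights(caps_after), 1000.0))
# -> roughly {'VT': 545.45, 'BNDW': 454.55}
```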

Ah I see what you're saying. I can't recall seeing much discussion on this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk but I haven't really thought about it. It does sound like something that deserves some thought.

When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often frame them as risks from general AI, but those risks could come from narrow AI, too. For example, some people have talked about the risk that humans could use narrow AI to develop dangerous engineered viruses.

My take is that there are strong arguments for why x-risk from general AI is overwhelmingly more important than risk from narrow AI, and I think those arguments are the main reason x-risk gets more attention among EAs.

I agree with David's comment. These sorts of ethical dilemmas are puzzles for everyone, not just for utilitarianism.

And in the case of insect welfare, rights-based theories produce more puzzling puzzles because it's unclear how to reckon with tradeoffs.

There is a related concern where most of the big funders either have investments in AI companies, or have close ties to people with investments in AI companies. This biases them toward funding activities that won't slow down AI development. So the more effective an org is at putting the brakes on AGI, the harder a time it will have getting funded.*

Props to Jaan Tallinn, who is an early investor in Anthropic yet has funded orgs that want to slow down AI (including CAIP).

*I'm not confident that this is a factor in why CAIP has struggled to get funding, but I wouldn't be surprised if it was.

In general, writing criticism feels more virtuous than writing praise.

FWIW it feels the opposite to me. Writing praise feels good; writing criticism feels bad.

(I guess you could say that it's virtuous to push through those bad feelings and write the criticism anyway? I don't get any positive feelings or self-image from following that supposed virtue, though.)

I think this is an important point that's worth saying.

For what it's worth, I am not super pessimistic about whether alignment can be solved in principle. But I'm quite concerned that the safety-minded AI companies seem to completely ignore the philosophical problems with AI alignment. They all operate under the assumption that alignment is purely an ML problem that they can solve by basically doing ML research, which I expect is false (credence: 70%).

Wei Dai has written some good stuff about the problem of "philosophical competence". See here for a collection of his writings on the topic.

This is a good point that I hadn't thought of when I wrote my poll answer—a "gradual disempowerment" risk scenario would probably not kill all sentient animals, and it represents a non-trivial percentage of AI risk.

60% ➔ 40% agree

The next existential catastrophe is likelier than not to wipe off all animal sentience from the planet

If an existential catastrophe occurs, it will probably (~90%) be AI, and an AI that kills all humans would probably also (~80%) kill all sentient animals.

The argument against the AI killing all animals is that they are less likely than humans to interfere with its goals; the argument in favor is that they are made of atoms the AI could use for something else.

Edit: Updated downward a bit based on Denkenberger's comment.
