MichaelDickens

4864 karma · Joined

Bio

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.

I have a website: https://mdickens.me/ Most of the content on my website gets cross-posted to the EA Forum.

My favorite things that I've written: https://mdickens.me/favorite-posts/

I used to work as a software developer at Affirm.

Sequences: 1

Quantitative Models for Cause Selection

Comments: 734

On that framing, I agree that that's something that happens and that we should be able to anticipate will happen.

Thanks for the comment! Disagreeing with my proposed donations is the most productive sort of disagreement. I also appreciate hearing your beliefs about a variety of orgs.


A few weeks ago, I read your back-and-forth with Holly Elmore about the "working with the Pentagon" issue. This is what I thought at the time (IIRC):

  • I agree that it's not good to put misleading messages in your protests.
  • I think this particular instance of misleadingness isn't that egregious; it decreases my expectation of the value of PauseAI US's future protests, but not by a huge margin. If this were a recurring pattern, I'd be more concerned.
  • On my first reading, it was unclear to me what your actual objection was, so I'm not surprised that Holly also (apparently) misunderstood it. I had to read it through twice to understand.
  • Being intentionally deceptive is close to a dealbreaker for me, but it doesn't look to me like Holly was being intentionally deceptive.
  • I thought you both could've handled the exchange better. Holly included misleading messaging in the protest and didn't seem to understand the problem, and you did not communicate clearly and then continued to believe that you had communicated well in spite of contrary evidence. Reading the exchange weakly decreased my evaluation of both your work and PauseAI US's, but not by enough to change my org ranking. You both made the sorts of mistakes that I don't think anyone can avoid 100% of the time. (I have certainly made similar mistakes.) Making a mistake once is evidence that you'll make it more, but not very strong evidence.

I re-read your post and its comments just now and I didn't have any new thoughts. I feel like I still don't have great clarity on the implications of the situation, which troubles me, but by my reading, it's just not as big a deal as you think it is.

General comments:

  • I think PauseAI US is less competent than some hypothetical alternative protest org that wouldn't have made this mistake, but I also think it's more competent than most protest orgs that could exist (or protest orgs in other cause areas).
  • I reviewed PauseAI's other materials, although not deeply or comprehensively, and they seemed good to me. I listened to a podcast with Holly and my impression was that she had an unusually clear picture of the concerns around misaligned AI.

I believe the "consciousness requires having a self-model" position is the only coherent basis for rejecting animals' moral patienthood, but I don't understand the argument for why the position is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone else attempt to defend this position.

+1 for doing a Fermi estimate, I would like to see more of those.

I looked through the congressional commission report's list of testimonies for plausibly EA-adjacent people. The only EA-adjacent org I saw was CSET, which had two testimonies (1, 2). From a brief skim, neither one looked clearly pro- or anti-arms race. They seemed vaguely pro-arms race on vibes, but I didn't see any claims that looked like they were clearly encouraging an arms race—though, like I said, I only briefly skimmed them, so I could have missed a lot.

I would like to see this. I have considerable uncertainty about whether to prioritize (longtermism-oriented) animal welfare or AI safety.

I did this reasoning in my head and haven't tried to put it into words before, so take it with a grain of salt.

Pros:

  • Orgs get time to correct misconceptions.

(Actually I think that's pretty much the only pro but it's a big pro.)

Cons:

  • It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. (There's a good chance I would have procrastinated on this and not gotten my post out until next year, which means I would have had to make my 2024 donations without publishing this writeup first.)
  • Communicating beforehand would make me overly concerned with being nice to the people I talked to, and might prevent me from saying harsh but true things because I wouldn't want to feel mean.
  • Orgs can still respond to the post after it's published; it's not as if it's impossible for them to respond at all.

Here are some relevant EA Forum/LW posts (the comments are relevant too):

It depends. I think investing in publicly traded stocks has a smallish effect on helping the underlying company (see Harris (2022), [Pricing Investor Impact](https://sustainablefinancealliance.org/wp-content/uploads/2023/05/GRASFI2023_paper_1594.pdf)). I think investing in private companies is probably much worse and should be avoided.

I think that's not a reasonable position to hold, but I don't know how to constructively argue against it in a short comment, so I'll just register my disagreement.

Like, presumably China's values include humans existing and having mostly good experiences.

Scott's last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn't race). But I can see how a politician reading this article wouldn't see that implication.
