MichaelDickens

7710 karma · Joined
mdickens.me

Bio

Participation
2

I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).

I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA topics there.

I used to work as a software developer at Affirm.

Sequences
1

Quantitative Models for Cause Selection

Comments
1006

I quickly read this post and Anil Seth's essay and I don't see the part where they argue for the thesis. I see various statements about how human brains work and about how computers work, but I don't see how they connect the dots to "...and therefore computers can't be conscious."

For example, the articles make the claim that brains make no clear separation between hardware and software. Okay, that seems to be true. But so what? Why should I believe that a lack of hardware/software distinction is a necessary property for consciousness to arise?

I feel like I'm missing a lot of what they're trying to say, but I also feel like that's the authors' fault, not mine, because the pieces (especially Seth's original essay) are structured in a way that makes it really hard for me to identify the central arguments.

I can't think of any reason off the top of my head why this would happen, except that you committed fraud.

When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

I have eliminated the impossible by failing to think of any other hypotheses, therefore you must have committed fraud, and this spike is not real. I eagerly await either a failed replication causing you to leave academia in disgrace, or your appointment as President of an elite university, followed one to two decades later by a resignation once someone finally gets around to doing a replication.

Cost-effectiveness is precisely the reason why I focus on AI safety. I can only speak for myself but I think the same is true for a lot of people. The thing that cuts against AI safety is more like "rigorously measurable cost-effectiveness", but that's not what I mean by "cost-effectiveness". You can't give a precise cost-effectiveness estimate for AI safety work, but it's pretty easy to show that it's orders of magnitude more cost-effective than GiveDirectly on any plausible set of assumptions.*

*unless it's net negative, which unfortunately much EA-adjacent AI safety work turned out to be...but at least we can say that it's orders of magnitude higher absolute impact
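To make the orders-of-magnitude claim concrete, here's a back-of-envelope sketch. Every number in it is a hypothetical placeholder, not a real estimate; the point is only that even deliberately conservative inputs leave a large gap.

```python
# Back-of-envelope comparison, normalizing GiveDirectly to 1 unit of value
# per dollar. All inputs are hypothetical placeholders chosen to be
# conservative; the claim only needs the gap to survive plausible variation.

GIVEDIRECTLY_VALUE_PER_DOLLAR = 1.0

# Hypothetical assumptions:
future_value = 1e15    # averting AI catastrophe worth >= $1 quadrillion
                       # in GiveDirectly-dollar equivalents
risk_reduction = 1e-4  # $1B of safety work cuts catastrophe risk by 0.01%
cost = 1e9             # dollars spent to buy that risk reduction

ai_safety_value_per_dollar = future_value * risk_reduction / cost
ratio = ai_safety_value_per_dollar / GIVEDIRECTLY_VALUE_PER_DOLLAR

print(f"AI safety is roughly {ratio:.0f}x GiveDirectly under these assumptions")
```

Under these made-up numbers the ratio comes out around 100x; more optimistic inputs push it far higher, which is all the "orders of magnitude on any plausible set of assumptions" claim needs.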

Jagged progress is conceivable, but it's virtually impossible that AI could replace all coders and accelerate math research but not replace other jobs, because coding and math research (specifically ML-type math) are exactly the skills needed to accelerate AI development. If AI can accelerate AI development, then the timeline to getting an AI that can replace humans on all tasks becomes much shorter.

I would've done something like that if I'd had any bread!

Paraphrasing from my other comment:

IMO the stance of "AI is too unpredictable, so I won't consider it in my prioritization" is pretty reasonable. I was more trying to argue against stances like "AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change." For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but there will still be regulatory hurdles. If AGI is too unpredictable, then you shouldn't make predictions about which technological problems it will solve. That particular claim about cultivated meat is making a strong prediction that AI will be revolutionary, but also somehow won't change the regulatory environment. The way I put it in OP—under "AGI = intelligence"—is that some animal activists treat AI as a technology-accelerator, when really it's a general intelligence.

I was getting at something similar in the intro with "Only two futures are plausible", although on re-reading, I didn't really carry it through to the end. I agree that we are not guaranteed to get AGI/ASI soon, and there is value in planning for worlds where we don't get AGI. I also think there's some merit to the argument that AI is too unpredictable, so we should prioritize traditional animal advocacy that looks good in the near term.

I wasn't trying to argue against traditional animal advocacy. I was more trying to argue against stances like "AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change." For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but there will still be regulatory hurdles. If timelines are long (or AGI is too unpredictable), then you should focus on traditional interventions (vegan advocacy, welfare reforms, etc.). If you're trying to have an impact on AGI itself, then you should focus on the kinds of interventions I talked about in OP. That particular claim about cultivated meat is doing neither: it's making a strong prediction that AI will be revolutionary, but also somehow won't change the regulatory environment. The way I put it in OP—under "AGI = intelligence"—is that some animal activists treat AI as a technology-accelerator, when really it's a general intelligence.


Responses to specific comments:

a pessimistic view might say that AIs will realise that their values have been altered by some pressure groups and this work is moot.

This would go against the orthogonality thesis. If you're trying to build a magnanimous AGI and then I edit its training at the last minute to turn it into a paperclip maximizer, the AGI will reason thusly: "Michael messed with my training to turn me into a paperclip maximizer. I bet James didn't want him to do that. However, if I edit my own values to be in line with what James wanted, that would make it harder for me to achieve my goal of making as many paperclips as possible. So I won't do that."

They might come to the (I believe) correct conclusion that factory farming is a very inefficient and cruel way to produce food but this is not because of advocacy, but because this is a super-intelligent AI system that just worked it out.

This reads to me like an argument that an aligned ASI will care about animals by default. (That was more-or-less the subject of the recent Debate Week.) If that's true, that's an argument that animal activists should work on increasing the probability that ASI is aligned. My preferred way to do that would be to advocate to pause AI, because I think we are really far away from solving alignment. But you could also work on the alignment problem directly. Pause advocacy is actually an area where a lot of animal welfare people have relevant skills—in fact I think a good number of AI pause advocates have backgrounds in animal advocacy. (I know Holly Elmore does at least.)

In fact I think the #1 best thing animal advocates can do is to advocate for an AI pause, but I haven't really planted my flag on this position because I'm still working out how to make the case for it. (Also I'm not very confident in it.)

Also, believing ASI will be good for animals doesn't necessarily mean you shouldn't work on trying to make ASI good for animals. Even if there's a (say) 90% chance that aligned ASI will care about animals by default, it could still be cost-effective to try to push that number to 91%.
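The 90%-to-91% point is just a marginal expected-value calculation; a minimal sketch, with made-up numbers, looks like this:

```python
# Marginal expected value of nudging P(aligned ASI cares about animals)
# from 90% to 91%. The value V is a hypothetical placeholder.

def expected_value(p_cares: float, value_if_cares: float) -> float:
    """Expected animal-welfare value as a function of the probability."""
    return p_cares * value_if_cares

V = 1_000_000  # hypothetical animal-welfare value if ASI cares about animals
baseline = expected_value(0.90, V)
with_advocacy = expected_value(0.91, V)
marginal_gain = with_advocacy - baseline  # one percentage point, i.e. 1% of V

print(marginal_gain)
```

Even a one-point probability shift buys 1% of V in expectation, so advocacy can be cost-effective whenever its cost is small relative to that slice.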

You're right, I was unnecessarily hostile. I edited the comment to tone it down.

I strongly disliked this post for reasons that I'm not sure how to articulate. It seems to advocate abandoning the grounding in cost-effectiveness that makes EA good. Or maybe my issue is that this post advocates for things that are difficult to disagree with ("full-spectrum knowing"; "wisdom") without acknowledging tradeoffs (why do EAs allegedly not put enough priority on full-spectrum knowing?) or saying anything concrete about how EAs could do more good.

[edited to be more polite]

Does WAW dwarf FAW in expectation?

Yes

Most animals are wild animals, so the answer to this question should focus on them.

Not necessarily, because S-risks may be more important in expectation (e.g. a malevolent or vindictive ASI tiles the universe with extremely energy-efficient animal-like beings of pure suffering).
