

"fictionalism is 99.99%,"

Assuming fictionalism means functionalism...

"maybe because I'm a computer scientist, right? And the whole field of computer science is founded on the idea that computations can be done on any substrate."

Yes: what's that to do with consciousness?

It's intuitive that consciousness is multiply realisable, and functionalism delivers multiple realisability (MR). But there is a problem that functionalism delivers too much realisability: a conscious mind could be implemented by pebbles, or by the "Blockhead". There is also the problem that MR is in no way a sufficient explanation of consciousness.
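The "too much realisability" worry can be made concrete. On any finite domain, a genuine algorithm and a brute lookup table (Block's "Blockhead") realise exactly the same input-output function, so a purely functional criterion cannot tell them apart. A minimal sketch (the function and names are illustrative, not from the original discussion):

```python
# Two realisations of the same input-output function on a finite domain.
# A coarse functionalist criterion cannot distinguish them, yet intuitively
# the rote lookup table "understands" nothing -- the Blockhead objection.

def add_algorithmic(a: int, b: int) -> int:
    """Compute the sum by an actual algorithm."""
    return a + b

# Blockhead-style realisation: every answer for inputs 0..9 is precomputed,
# so no computation happens at query time, only table lookup.
LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

def add_blockhead(a: int, b: int) -> int:
    """Answer by rote table lookup only."""
    return LOOKUP[(a, b)]

# On the shared domain the two are behaviourally indistinguishable.
assert all(add_algorithmic(a, b) == add_blockhead(a, b)
           for a in range(10) for b in range(10))
```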

"So I don't see how having carbon atoms prevents computations from explaining what's going on."

If we had an explanation of consciousness, including conscious experience, in terms of physics, we would not need to call on non-physical properties to fill the explanatory gap.

If we had an explanation of consciousness, including conscious experience, in terms of computation, we would not need to call on physical properties to fill the explanatory gap.

We don't have either explanation.

However, if someone claims to be the expert on physics, philosophy, decision theory, and AI, and then they turn out to be very confused about philosophy, then that is a mark against their reasoning abilities.

It's more effective to show they are confused about maths, physics and AI, since it is much easier to establish truth/consensus in those fields.

"Orthogonality thesis: Intelligence can be directed toward any compact goal….

Instrumental convergence: An AI doesn’t need to specifically hate you to hurt you; a paperclip maximizer doesn’t hate you but you’re made out of atoms that it can use to make paperclips, so leaving you alive represents an opportunity cost and a number of foregone paperclips….

Rapid capability gain and large capability differences: Under scenarios seeming more plausible than not, there’s the possibility of AIs gaining in capability very rapidly, achieving large absolute differences of capability, or some mixture of the two….

1-3 in combination imply that Unfriendly AI is a critical problem-to-be-solved, because AGI is not automatically nice, by default does things we regard as harmful, and will have avenues leading up to great intelligence and power."


1-3 in combination don't imply anything with high probability.

"Maybe (3) is a little unfair, or sounds harsher than I meant it. It's a bit unclear to me how seriously to take Aaronson's quote. It seems like plenty of physicists have looked through the sequences to find glaring flaws, and basically found none (physics stackexchange)."


Here are a couple: he conflates Copenhagen and objective collapse throughout.


He fails to distinguish Everettian and decoherence-based MWI.

"which is that Eliezer's arguments were good,"


There is plenty of evidence against that: his arguments on other subjects aren't good (see OP), his arguments on AI aren't informed by academic expertise or industry experience, his predictions are bad, etc.

"I don't fault 21-year-old Eliezer for trying (except insofar as he was totally wrong about the probability of Unfriendly AI at the time!), because the best way to learn that a weird new path is unviable is often to just take a stab at it"

It was only weird in that it involved technologies and methods that were unlikely to work, and EY could have figured that out theoretically by learning more about AI and software development.

"Nobody who has been talking about these topics for 20+ years has a similarly good track record."


Really? We know EY made a bunch of mispredictions: "A certain teenaged futurist, who, for example, said in 1999, 'The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.'" What are his good predictions? I can't see a single example in this thread.

If EY gets to disavow his mistakes, so does everyone else.

The argument is basically saying that if X can't be explained by physicalism, then X is an illusion. That's treating physicalism as unfalsifiable.

The remark about Everett branches rather gives the game away. Decision theories rest on assumptions about the nature of the universe and of the decider, so trying to formulate a DT that will work perfectly in any universe is hopeless.
