RobertM

Software Engineer @ Lightcone Infrastructure
1010 karma · Joined · Working (6-15 years)

Bio

LessWrong dev & admin as of July 5th, 2022.

Comments
82

Topic contributions
5

Yes, indeed, there was only an attempt to hide the post three weeks ago.  I regret the sloppiness in the details of my accusation.

The other false accusation was that I didn't cite any sources

I did not say that you did not cite any sources.  Perhaps the thing I said was confusingly worded?  You did not include any links to any of the incidents that you describe.

My bad, it was moved back to draft on October 3rd (~3 weeks ago) by you.  I copied the link from another post that linked to it.

It's really quite something that you wrote almost 2000 words and didn't include a single primary citation to support any of those claims.  Even given that most of them are transparently false to anyone who's spent 5 minutes reading either LW or the EA Forum, I think I'd be able to dig up something superficially plausible with which to smear them.

And if anyone is curious about why Yarrow might have an axe to grind, they're welcome to examine this post, along with the associated comment thread.

Edit: changed the link to an archive.org copy, since the post was moved to draft after I posted this.

Edit2: I was incorrect about when it was moved back to a draft, see this comment.

I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it. 

Sure, but the vibe I get from this post is that Will believes in that a lot less than me, and the reasons he cares about those things don't primarily route through the totalizing view of ASI's future impact.  Again, I could be wrong or confused about Will's beliefs here, but I have a hard time squaring the way this post is written with the idea that he intended to communicate that people should work on those things because they're the best ways to marginally improve our odds of getting an aligned ASI.  Part of this is the list of things he chose, part of it is the framing of them as being distinct cause areas from "AI safety" - from my perspective, many of those areas already have at least a few people working on them under the label of "AI safety"/"AI x-risk reduction".

Like, Lightcone has previously worked on, and continues to work on, "AI for better reasoning, decision-making and coordination".  I can't claim to speak for the entire org, but when I'm doing that kind of work, I'm not trying to move the needle on how good the world ends up being conditional on us making it through, but on how likely we are to make it through at all.  I don't have that much probability mass on "we lose >10% but less than 99.99% of value in the lightcone"[1].

Edit: a brief discussion with Drake Thomas convinced me that 99.99% is probably a pretty crazy bound to have; let's say 90%.  Squeezing out that extra 10% involves work that you'd probably describe as "macrostrategy", but that's a pretty broad label.

  1. ^

    I haven't considered the numbers here very carefully.

I don't think that; animal welfare post-ASI is a subset of "s-risks post-ASI".

If you’ve got a very high probability of AI takeover (obligatory reference!), then my first two arguments, at least, might seem very weak because essentially the only thing that matters is reducing the risk of AI takeover.

I do think the risk of AI takeover is much higher than you do, but I don't think that's why I disagree with the arguments for more heavily prioritizing the list of (example) cause areas that you outline.  Rather, it's a belief that's slightly upstream of my concerns about takeover risk - that the advent of ASI almost necessarily[1] implies that we will no longer have our hands on the wheel, so to speak, whether for good or ill.

An unfortunate consequence of having beliefs like mine about what a future with ASI in it involves is that those beliefs are pretty totalizing.  They do suggest that "making the transition to a post-ASI world go well" is of paramount importance (putting aside questions of takeover risk).  They do not suggest that it would be useful for me to think about most of the listed examples, except insofar as they feed into somehow getting a friendly ASI rather than something else.  There are some exceptions: for example, if you have much lower odds of AI takeover than I do, but still expect ASI to have this kind of totalizing effect on the future, I claim you should find it valuable for some people to work on "animal welfare post-ASI", and on whether there is anything that can meaningfully be done pre-ASI to reduce the risk of animal torture continuing into the far future[2].  But many of the other listed concerns seem very unlikely to matter post-ASI, and I don't get the impression that you think we should be working on AI character or preserving democracy as instrumental paths by which we reduce the risk of AI takeover, bad/mediocre value lock-in, etc., but because you consider things like that to be important separately from traditional "AI risk" concerns.  Perhaps I'm misunderstanding?

  1. ^

    Asserted without argument, though many words have been spilled on this question in the past.

  2. ^

    It is perhaps not a coincidence that I expect this work to initially look like "do philosophy", i.e. trying to figure out whether traditional proposals like CEV would permit extremely bad outcomes, looking for better alternatives, etc.

I agree that titotal should probably not have used the word "bad" but disagree with the reasoning.  The problem is that "bad" is extremely nonspecific; it doesn't tell the reader what titotal thinks is wrong with their models, even at a very low resolution.  They're just "bad".  There are other words that might have been informative, if used instead.

Of course, if he had multiple different kinds of issues with their models, he might have decided to omit adjectives from the title for brevity's sake and simply explained the issues in the post.  But if he thought the main issue was (for the sake of argument) that the models were misleading, then I think it would be fine for him to say that in the title.

If I'm understanding your question correctly, that part of my expectation is almost entirely conditional on being in a post-ASI world.  Before then, if interest in (effectively) reducing animal suffering stays roughly the size of "EA", then I don't particularly expect it to become cheap enough to subsidize people farming animals to raise them in humane conditions.  (This expectation becomes weaker with longer AI timelines, but I haven't thought that hard about what the world looks like in 20+ years without strong AI, and how that affects the marginal cost of various farmed animal welfare interventions.)

So my timelines on that are pretty much just my AI timelines, conditioned on "we don't all die" (which are shifted a bit longer than my overall AI timelines, but not by that much).

Yudkowsky explicitly repudiates his writing from 2001 or earlier:

You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.


that is crying wolf today

This is assuming the conclusion, of course.
