
Sharmake

1048 karma · 343 comments · 2 topic contributions

Sharmake
50% agree

The "arbitrariness" of precise EVs is just a matter of our discomfort with picking a precise number (see above).

 

A non-trivial reason for this is that precise numbers expose ideological assumptions, and a whole lot of people do not like this.

It's easy to lie with numbers, but it's even easier to lie without a number.

Crossposting a comment from LessWrong:

@1a3orn goes deeper into another dynamic that causes groups to hold false beliefs while believing they are true: some bullshit beliefs help you figure out who to exclude, namely the people who don't currently hold the belief, and in particular assholery also helps people who don't want their claims checked. This is a reason I think politeness is actually useful in practice for rationality:

(Sharmake's first tweet): I wrote something on a general version of this selection effect, and why it's so hard to evaluate surprising/extreme claims relative to your beliefs, and it's even harder if we expect heavy-tailed performance, as happens in our universe.

(1a3orn's claims) This is good. I think another important aspect of the multi-stage dynamic here is that it predicts that movements with *worse* stages at some point have fewer contrary arguments at later points...

...and in this respect is like an advance-fee scam, where deliberately non-credible aspects of the story help filter people early on so that only people apt to buy-in reach later parts.

Paper: "Why Do Nigerian Scammers Say They Are From Nigeria?"

So it might be adaptive (survivalwise) for a memeplex to have some bullshit beliefs because the filtering effect of these means that there will be fewer refutations of the rest of the beliefs.

It can also be adaptive (survivalwise) for a leader of some belief system to be abrasive, an asshole, etc, because fewer people will bother reading them => "wow look how no one can refute my arguments"

(Sharmake's response) I didn't cover the case where the belief structure is set up as a scam, and instead focused on the case where, even if we assume LWers are trying to get at the truth and aren't adversarial, the very fact that this effect exists, combined with heavy tails, makes it hard to evaluate claims.

But good points anyway.

(1a3orn's final point) Yeah tbc, I think that if you just blindly run natural selection over belief systems, you get belief systems shaped like this regardless of the intentions of the people inside it. It's just an effective structure.

Quotes from this tweet thread.
 

Another story is that this is a standard diminishing-returns case: once we have removed all the very big blockers, like non-functional rule of law, missing property rights, untreated food and water, and disease, it's very hard to help the people who still remain poor actually improve their lives, because all the easy wins have been taken, so what we are left with are the harder, near-impossible poverty cases.

I think each galactic x-risk on the list can probably be disregarded, but combined, and with the knowledge that we are extremely early in thinking about this, they present a very convincing case to me that at least 1 or 2 galactic x-risks are possible.

I think this is kind of a crux, in that I currently think the only possible galactic-scale risks are ones where our standard model of physics breaks down in a deep way. Once you can get at least one Dyson swarm going, you are virtually invulnerable to extinction methods that don't involve us being very wrong about physics.

This is always a tail risk of interstellar travel, but I would not say that interstellar travel will probably doom the long-term future as stated in the title.

A better title would be "interstellar travel poses unacknowledged tail risks".

Really interesting point, and probably a key consideration on existential security for a spacefaring civilisation. I'm not sure if we can be confident enough in acausal trade to rely on it for our long-term existential security though. I can't imagine human civilisation engaging in acausal trade if we expanded before the development of superintelligence. There are definitely some tricky questions to answer about what we should expect other spacefaring civilisations to do. I think there's also a good argument for expecting them to systematically eliminate other spacefaring civilisations rather than engage in acausal trade.

I agree that if there's an x-risk that isn't defendable (for the sake of argument), then acausal trade relies on every other civilization choosing to acausally trade in a manner where the parent civilization can prevent the x-risk. But the good news is that a lot of the more plausible (in a relative sense) x-risks have a light-speed limit, which means that, given we are probably alone in the observable universe (via the logic of Dissolving the Fermi Paradox), only humanity really has to do the acausal trade.

And a key worldview crux: conditional on humanity becoming a spacefaring civilization, I expect superintelligence that takes over the world to come first, because it's much easier to develop AI tech good enough to develop space sufficiently than it is for humans to go spacefaring alone.

And AI progress is likely to be fast enough such that there's very little time for rogue spacefarers to get outside of the parent civilization's control.

The Dissolving the Fermi Paradox paper is here:

https://arxiv.org/abs/1806.02404

On hot take 2: this relies on the risks from each star system being roughly independent, so breaking that assumption seems like a good solution, but making each star system very correlated seems bad for liberalism and for diversity of forms of flourishing and so forth. But maybe some amount of regularity and conformity is the price we need to pay for galactic security.
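A rough way to see how much work the independence assumption is doing, as a toy calculation with made-up numbers (not estimates from this thread):

```python
# Toy calculation with made-up numbers: if each settled star system
# independently has some small probability p of eventually producing a
# galaxy-threatening technology or actor, the chance that at least one
# does approaches 1 as the number of systems grows.

p = 1e-6  # hypothetical per-system probability, purely illustrative

for n in (1, 10**3, 10**6, 10**9):
    p_any = 1 - (1 - p) ** n  # P(at least one system goes wrong)
    print(f"{n:>13,} independent systems -> P(at least one disaster) = {p_any:.6f}")

# If the systems are instead near-perfectly correlated (one shared set of
# institutions and values, effectively one draw), the combined probability
# stays near p, which is the conformity-for-security tradeoff described above.
```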

I think liberalism is unfortunately on a timer that will almost certainly expire pretty soon, no matter what we do.

Either we technologically regress as the human population falls and more anti-democratic civilizations win outright through the zero/negative-sum games being played, or we create AIs that replace us, and given the incentives plus the sheer difference in power, those AIs by default create something closer to a dictatorship for humans. In particular, value alignment is absolutely critical in the long run for AIs that can take every human job.

Modern civilization is not stable at all.

Acausal trade/cooperation may end up being crucial here too once civilisation is spread across distances where it is hard or impossible to interact normally.

Yeah, assuming no FTL, acausal trade/cooperation is necessary if you want anything like a unified galactic/universal polity.

Sharmake
50% disagree

Interstellar travel will probably doom the long-term future

 

A lot of the reason for my disagreement stems from thinking that most galactic-scale disasters either don't actually serve as x-risks (like the von Neumann probe scenario), because they are defendable, or they require some shaky premises about physics to come true.

Changing the universe's constants is an example of the latter.

Also, in most modern theories of time travel you only get self-consistent outcomes, so a lot of the classic portrayals of using time travel to destroy the universe through paradoxical inputs wouldn't work: the paradoxical inputs would almost certainly be prevented beforehand.

The biggest uncertainty here is how much acausal trade lets us substitute for the vast distances that make traditional causal governance impossible.

For those unaware of acausal trade, it basically replaces direct communication with predicting what the other party wants. If you can run vast numbers of simulations, you can get very, very good predictive models of what the other wants, such that both of you can trade without requiring any communication, which is necessary for realistic galactic empires/singletons to exist:

https://www.lesswrong.com/w/acausal-trade
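A heavily simplified toy sketch of the mutual-prediction idea (my own illustration, not something from the linked page; the bounded recursion depth exists only so the toy terminates):

```python
# Toy model of cooperation-by-prediction: neither party communicates; each
# runs a (bounded) simulation of the other and cooperates iff the simulated
# counterpart cooperates with an agent that reasons this way.

def agent_a(model_of_other, depth=3):
    if depth == 0:
        return True  # base-case assumption of the toy: bottom out in cooperation
    return model_of_other(agent_a, depth - 1)  # "simulate" the other party

def agent_b(model_of_other, depth=3):
    if depth == 0:
        return True
    return model_of_other(agent_b, depth - 1)

# Both end up cooperating without ever exchanging a message.
print(agent_a(agent_b), agent_b(agent_a))  # True True
```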

I don't have much of an opinion on the question, but if it's true that acausal trade can basically substitute wholly for the communication that is traditionally necessary to suppress rebellions in empires, then most galactic/universe-scale risks are pretty easily avoidable, because we don't have to roll the dice on every civilization doing its own research that may lead to x-risk.

The main unremovable advantages of AIs over humans will probably be in the following 2 areas:

  1. A serial speed advantage of somewhere from 50-1000x, with my median in the 100-500x range, and more generally the ability to be run slower or faster to do proportionally more work, albeit with tradeoffs at either extreme of running slow or fast.

  2. The ability for compute/software improvements to convert directly into more researchers with essentially zero serial time required, unlike basically all biological reproduction (about the only cases that even get close are the hours-to-days doubling times of flies and some bacteria/viruses, but these are doing much simpler jobs, and it's uncertain whether you could add more compute/learning capability without slowing down their doubling time).

This is the mechanism by which you can get way more AI researchers very fast, while human researchers don't increase proportionally.

Humans probably do benefit, assuming AI is useful enough to automate, say, AI research, but these two unremovable limitations fundamentally prevent anything like an explosion in human research output, unlike AI research (a rough illustrative sketch below).
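A back-of-the-envelope illustration of how the two advantages compound; all parameters here are my own placeholder assumptions, not numbers from the comment:

```python
# Placeholder assumptions, purely illustrative.
serial_speedup = 100         # point 1: assumed ~100x serial speed advantage
ai_instances = 1_000_000     # point 2: copies spun up from available compute
human_researchers = 100_000  # assumed size of the relevant human field

ai_researcher_years_per_year = serial_speedup * ai_instances
human_researcher_years_per_year = human_researchers

print(ai_researcher_years_per_year / human_researcher_years_per_year)  # 1000.0

# The human side can only grow by training new researchers, which takes
# decades of serial time per person; the AI side grows roughly as fast as
# compute and software efficiency allow, with near-zero serial time per copy.
```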

I'm trying to identify why the trend has lasted, so that we can predict when the trend will break down.

That was the purpose of my comment.

Sharmake
100% disagree

Consequentialists should be strong longtermists

 

I disagree, mostly due to the "should" wording: believing in consequentialism doesn't obligate you to have any particular discount rate or any particular discount function. These are basically free parameters, so discount rates are independent of consequentialism.

Sharmake
50% agree

Bioweapons are an existential risk


I'll just repeat @weeatquince's comment, since he already covered the issue better than I did:

With current technology probably not an x-risk. With future technology I don’t think we can rule out the possibility of bio-sciences reaching the point where extinction is possible. It is a very rapidly evolving field with huge potential.
