
Harrison Durland

1836 karma · Joined September 2020

Posts: 20 · Comments: 453 · Topic contributions: 10

Comments (sorted by new)

I probably should have been clearer: my true "final" paper didn't actually focus on this aspect of the model. The offense-defense balance was the original motivation for my cyber model, but I eventually became far more interested in using the model to test how large language models could improve agent-based modeling by controlling actors in the simulation. I have a final model writeup which explains some of the modeling choices and discusses the original offense/defense purpose in more detail.

(I could also provide the model code, which is written in Python and, last I checked, runs fine, but I don't expect people would find it that valuable unless they really want to dig into this further, especially given that it might have bugs.)

If offence and defence both get faster, but all the relative speeds stay the same, I don’t see how that in itself favours offence

Funny you should say this: it so happens that I just submitted a final paper last night for an agent-based model which was meant to test exactly this kind of claim about the impacts of improving "technology" (AI) in cybersecurity. Granted, the model was extremely simple and incomplete, but the theoretical results explain how this could be possible.

In short, if you assume a fixed number of vulnerabilities in an attack surface, then while attackers' and defenders' budgets are both very small, many vulnerabilities go unnoticed by everyone. For example, suppose the two sides together can only explore 10% of the attack surface, while vulnerabilities occupy only 1% of it. Even if attacker and defender budgets increase by the same factor (e.g., 10x), that increase raises the likelihood that any given vulnerability is found by either the attacker or the defender at all.
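A rough back-of-the-envelope version of that coverage point (with made-up numbers, not outputs of the model):

```python
# Toy calculation (hypothetical coverage fractions, not taken from the model):
# if the attacker and the defender each independently check a random fraction of
# the attack surface, the chance that a given vulnerability gets found by *anyone*
# rises as both budgets scale, even though neither side gains a relative edge.

def p_found_by_either(attacker_coverage: float, defender_coverage: float) -> float:
    """Probability a single vulnerability is seen by at least one party, assuming
    each party independently covers a uniform random fraction of the surface."""
    return 1 - (1 - attacker_coverage) * (1 - defender_coverage)

for multiplier in (1, 10):
    coverage = 0.05 * multiplier  # each side explores 5% of the surface at baseline
    print(multiplier, round(p_found_by_either(coverage, coverage), 3))
# 1x budgets  -> ~0.098 (most vulnerabilities stay unnoticed by everyone)
# 10x budgets -> ~0.75  (most vulnerabilities now get found by someone)
```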

The following results are admittedly not very reliable (I didn't do any formal verification/validation beyond spot checks), but the point of showing these graphs is not "here are the definitive numbers" so much as an illustrative "here is what the pattern of relationships between attack surface, attacker/defender budgets, and theft rate could look like".

[Graphs omitted: theft rate across attack surface sizes under 1x vs. multiplied attacker/defender budgets]

Notice how, as the attack surface increases, multiplying the attackers' and defenders' budgets causes more convergence. With a hypothetical 1x1 attack surface (grid) for each actor, the budget multiplication should have no effect on loss rates, because all vulnerabilities are found and it's just a matter of who found them first, which is not affected by budget multiplication. However, with a hypothetically infinite grid, multiplying budgets strictly benefits the attacker, because the defenders will ~never check the same squares that the attacker checks.
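For anyone who wants to poke at this intuition without the full model, here is a minimal Monte Carlo sketch (much simpler than the actual model, with hypothetical parameters, meant only to illustrate the pattern rather than reproduce my results):

```python
import random

# Minimal sketch, not the actual course model: one vulnerability is hidden
# somewhere in an attack surface of `cells` squares; attacker and defender each
# check `budget` random squares. The vulnerability is "stolen" if only the
# attacker finds it, or if both find it and the attacker (by coin flip) got
# there first.

def theft_rate(cells: int, budget: int, trials: int = 20_000) -> float:
    budget = min(budget, cells)
    stolen = 0
    for _ in range(trials):
        vuln = random.randrange(cells)
        found_by_attacker = vuln in random.sample(range(cells), budget)
        found_by_defender = vuln in random.sample(range(cells), budget)
        if found_by_attacker and (not found_by_defender or random.random() < 0.5):
            stolen += 1
    return stolen / trials

for budget in (10, 100):            # multiply both budgets by 10
    for cells in (1, 100, 10_000):  # 1x1 grid vs. small vs. large surface
        print(f"surface={cells:>6} budget={budget:>3} theft~{theft_rate(cells, budget):.4f}")
```

Running this, the 1x1 surface should show roughly the same theft rate at both budgets, the small surface should converge toward a coin flip, and the large surface should show the attacker benefiting roughly in proportion to the budget multiplier.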

(Ultimately my model makes many unrealistic assumptions and may have had bugs, but this seemed like a decent intuition seed, not a true "conclusion" which can be carelessly applied elsewhere.)

Thank you so much for articulating a bunch of the points I was going to make!

I would probably just further drive home the last paragraph: it's really obvious that the "number of people a lone maniac can kill in a given amount of time" (in America) has skyrocketed with the development of high fire-rate weapons (let alone knowledge of explosives). It could be true that the O/D balance for states doesn't change (I disagree) while the O/D balance for individuals skyrockets.

I have increasingly become open to incorporating alternative decision theories as I recognize that I cannot be entirely certain in expected value approaches, which means that (per expected value!) I probably should not solely rely on one approach. At the same time, I am still not convinced that there is a clear, good alternative, and I also repeatedly find that the arguments against using EV are not compelling (e.g., due to ignoring more sophisticated ways of applying EV).

Having grappled with the problem of EV-fanaticism for a long time, in part due to the wild norms of competitive policy debate (e.g., here, here, and here), I've written many comments on the forum about this. My expectation is that this comment won't gain enough attention/interest to warrant collecting all of those instances, but my short summary is something like:

  • Fight EV fire with EV fire: Countervailing outcomes, e.g., the risk that doing X has a negative 999999999... effect, are extremely important when dealing with highly speculative estimates. Sure, someone could argue that if you don't give $20 to the random guy wearing a tinfoil hat and holding a remote which he will use to destroy 3^3^3 galaxies, there's at least a 0.000000...00001% chance he's telling the truth; but there's also a decent chance that doing this could have the opposite effect due to some (perhaps hard-to-identify) alternative channel (see the toy calculation after this list).
  • One should probably distinguish between extremely low (e.g., 0.00001%) estimates which result from well-understood or "objective"[1] analyses that you expect cannot be improved by further analysis or information collection (e.g., you can directly see/show the probability written in a computer program, or a series of coin flips with a fair coin), vs. estimates which result from very subjective probability judgments that you expect you will likely adjust downwards with further analysis, but where you just can't immediately rule out some sliver of uncertainty.[2]
    • Often you should recognize that when you get into small probability spaces for "subjective" questions, you are at very high risk of being swayed by random noise or by deliberate bias in argument/information selection. For example, if you've never thought about how nanotech could cause extinction and you listen to someone who gives you a sample of arguments/information in favor of the risks, you likely will not immediately know the counterarguments, and you should update downwards on the expectation that the sample you were exposed to probably exaggerates the underlying evidence.
    • The cognitive/time costs of doing "subjective" analyses likely impose high opportunity costs (going back to the first point);
    • When your analysis is not legible to other people, you risk high reputational costs (again, which goes back to the first point).
  • Based on the above, I agree that in some cases it may be far more efficient, as a heuristic for decision-making under analytical constraints, to simply trim off highly "subjective" risk estimates. However, I make this claim from within EV: I still regard EV as the better general-purpose decision-making algorithm, just one that may not be optimized for application under realistic constraints (e.g., other people not being familiar with your method of thinking, short amounts of time for discussion or research, error-prone brains which do not reliably handle lots of considerations and small numbers).[3]
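To make the first bullet concrete, here is a toy calculation with entirely made-up numbers; the point is just that once you grant the mugger's claim a tiny probability, symmetry usually forces you to grant a comparable probability to a countervailing outcome, and the speculative terms can cancel while the concrete cost remains:

```python
# Toy illustration of "fight EV fire with EV fire" (all numbers hypothetical):
p_claim_true   = 1e-12   # chance the tinfoil-hat guy really controls the galaxies
p_backfire     = 1e-12   # chance that paying him makes things worse through some other channel
astronomical   = 1e30    # stand-in magnitude for the astronomical stakes
cost_of_paying = 20      # dollars handed over

ev_pay = p_claim_true * astronomical - p_backfire * astronomical - cost_of_paying
print(ev_pay)  # -20.0: the speculative terms cancel and the concrete cost dominates
```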
  1. ^

    I dislike using "objective" and "subjective" to make these distinctions, but for simplicity's sake / for lack of a better alternative at the moment, I will use them.

  2. ^
  3. ^

    I advocate for something like this in competitive policy debate, since "fighting EV fire with EV fire" risks "burning the discussion", including its educational value, the reputation of participants, etc. But most deliberations do not have to be made within the artificial constraints of competitive policy debate.

Since I think substantial AI regulation will likely occur by default, I urge effective altruists to focus more on ensuring that the regulation is thoughtful and well-targeted rather than ensuring that regulation happens at all.

I think it would be fairly valuable to see a list of case studies or otherwise create base rates for arguments like “We’re seeing lots of political gesturing and talking, so this suggests real action will happen soon.” I am still worried that the action will get delayed, watered down, and/or diverted to less-existential risks, only for the government to move on to the next crisis. But I agree that the past few weeks should be an update for many of the “government won’t do anything (useful)” pessimists (e.g., Nate Soares).

I would have preferred a TLDR or summary at the top, not the bottom. However, I definitely appreciate your investigation into this, as I have long loathed Eliezer's use of the term ever since I realized he just made it up.

Strange, unless the original comment from Gerald has been edited since I responded, I think I must have misread most of the comment, as I thought it was making a different point (i.e., "could someone explain how misalignment could happen"). I was tired and distracted when I read it, so it wouldn't be surprising. However, the final paragraph in the comment (which I originally thought was reflected in the rest of the comment) still seems out of place and arrogant.

This is a test regarding comment edit history. This comment has been edited post-publication.

This really isn't the right post for most of those issues/questions, and most of what you mentioned are things you should be able to find via searches on the forum, searches via Google, or maybe even just asking ChatGPT to explain it to you (maybe!). TBH, your comment also comes across as quite abrasive and arrogant (especially the last paragraph), without actually appearing to be that insightful/thoughtful. But I'm not going to get into an argument on these issues.

[This comment is no longer endorsed by its author]

I wish! I’ve been recommending this for a while but nobody bites, and usually (always?) without explanation. I often don’t take seriously many of these attempts at “debate series” if they’re not going to address some of the basic failure modes that competitive debate addresses, e.g., recording notes in a legible/explorable way to avoid the problem of arguments getting lost under layers of argument branches.
