
[linkpost to Assessing the impact of quantum cryptanalysis]

I wrote this paper a few months back; it was rejected by a journal for lack of topic fit. I do not think the paper is important enough to spend more time chasing a publication, but others might benefit from it being public, and I would still benefit from feedback for learning purposes; hence this post. Let me know in a comment or through a private message if you know of a publication venue that would be a better fit.

I reproduce the conclusion as a short summary of the paper. The rest of the paper is available here.

Conclusion

Quantum computing will render modern public-key cryptography standards insecure. This risk is well known, and as a result many reputable, well-connected and well-funded organizations are working on developing quantum-secure standards for the future, e.g. the National Institute of Standards and Technology.

Several credible proposals for post-quantum classical cryptography already exist and are being actively researched. Another avenue under consideration is quantum cryptography, but as an alternative it has many shortcomings.

Should the efforts to develop post-quantum public-key cryptography fail, there are some theoretical arguments indicating that we would be able to substitute its functionality with symmetric-key-based standards and a network of private certificates. This alternative would incur some efficiency overhead and security trade-offs.
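As a minimal sketch of what such a symmetric-key substitute looks like in practice, the toy example below authenticates a message with a pre-shared key using only HMAC-SHA-256; the key name, message, and trusted distribution channel are assumptions for illustration, and a real deployment would also need encryption, key rotation, and a distribution infrastructure (the "network of private certificates" above).

```python
import hashlib
import hmac
import os

def make_tag(shared_key: bytes, message: bytes) -> bytes:
    """Authenticate a message with a pre-shared symmetric key."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify_tag(shared_key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(make_tag(shared_key, message), tag)

# Hypothetical pre-shared key, distributed e.g. through a trusted
# intermediary (a "private certificate" network) or exchanged physically.
psk = os.urandom(32)

msg = b"transfer 100 units to account 42"
tag = make_tag(psk, msg)
assert verify_tag(psk, msg, tag)             # accepted
assert not verify_tag(psk, msg + b"!", tag)  # tampering detected
```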

Even if this more speculative approach does not work out, a more rudimentary system in which people exchange keys physically, though inconvenient, could potentially keep some key applications running, such as secure online transactions.

All in all, given the existing attention and limited downside, I would recommend against prioritizing research on mitigating the effects of quantum cryptanalysis as a focus area for public officials and philanthropists, beyond supporting the existing organizations working on post-quantum cryptography and supporting existing cryptanalysis experts so they can conduct further security analysis of current post-quantum cryptography candidates.


Comments



This is a great document! I agree with the conclusions, though there are a couple of factors not mentioned which seem important:

On the positive side, Google has already deployed post-quantum schemes as a test, and I believe the test was successful (https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html). This was explicitly just a test and not intended as a standardization proposal, but it's good to see that it's practical to layer a post-quantum scheme on top of an existing scheme in a deployed system. I do think if we needed to do this quickly it would happen; the example of Google and Apple working together to get contact tracing working seems relevant.
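A minimal sketch of that layering idea, assuming the Python `cryptography` package for the classical X25519 half and a random placeholder standing in for the post-quantum KEM's shared secret; this illustrates the hybrid pattern only, not Google's actual implementation.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: an ordinary X25519 key agreement.
client_key = X25519PrivateKey.generate()
server_key = X25519PrivateKey.generate()
classical_secret = client_key.exchange(server_key.public_key())

# Post-quantum half: placeholder for the shared secret a post-quantum
# KEM would produce; a real deployment would run an actual KEM here.
pq_secret = os.urandom(32)

# Combine both secrets into one session key. The session stays secure
# as long as at least one of the two underlying schemes is unbroken.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-kex-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```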

On the negative side, there may be significant economic costs due to public-key schemes deployed "at rest" which are impossible to change after the fact. This includes any encrypted communication that an adversary has stored across the transition from pre-quantum to post-quantum, and also slow-to-build-up applications like PGP webs of trust, which are hard to swap out quickly. I don't think this changes the overall conclusions, since I'd expect the going-forward cost to be larger, but it's worth mentioning.

Thank you so much for your kind words and juicy feedback!

Google has already deployed post-quantum schemes as a test

I did not know about this, and it actually updates me on how much overhead will be needed for post-quantum crypto (the NIST expert I interviewed gave me the impression that the overhead was large and that specialized hardware would essentially be needed to meet performance expectations, but this seems to suggest the contrary (?))

there may be significant economic costs due to public-key schemes deployed "at rest"

To make sure I understand your point, let me try to paraphrase. You are pointing out that:

1) past communications that are recorded will be rendered insecure by quantum computing

2) there are some transition costs associated with post-quantum crypto, related for example to the cost of rebuilding PGP certificate networks.

If so, I agree that this is a relevant consideration but does not change the bottom line.

Thank you again for reading my paper!

Yep, that’s the right interpretation.

In terms of hardware, I don’t know how Chrome did it, but at least on fully capable hardware (mobile CPUs and above) you can often bitslice to make almost any circuit efficient if it has to be evaluated in parallel. So my prior is that quite general things don’t need new hardware if one is sufficiently motivated, and I would want to see the detailed reasoning before believing you can’t do it with existing machines.
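A minimal bitslicing sketch, using a toy one-bit full adder rather than any real cryptographic circuit: bit j of each word holds instance j's input, so every bitwise operation evaluates all 64 instances at once.

```python
import random

WIDTH = 64  # number of independent circuit instances evaluated at once

def pack(bits):
    """Pack one 0/1 value per instance into a single machine-sized word."""
    word = 0
    for j, b in enumerate(bits):
        word |= b << j
    return word

def full_adder(a, b, carry_in):
    """Each bitwise operation acts on all WIDTH instances simultaneously."""
    total = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return total, carry_out

a_bits = [random.randint(0, 1) for _ in range(WIDTH)]
b_bits = [random.randint(0, 1) for _ in range(WIDTH)]
c_bits = [random.randint(0, 1) for _ in range(WIDTH)]

total, carry = full_adder(pack(a_bits), pack(b_bits), pack(c_bits))

# Cross-check each instance against a direct per-instance evaluation.
for j in range(WIDTH):
    assert ((total >> j) & 1) == (a_bits[j] ^ b_bits[j] ^ c_bits[j])
    assert ((carry >> j) & 1) == ((a_bits[j] & b_bits[j]) | (c_bits[j] & (a_bits[j] ^ b_bits[j])))
```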

Metaculus: Will quantum computing "supremacy" be achieved by 2025? [prediction closed on Jun 1, 2018.]

While I find it plausible that it will happen, I'm not personally convinced that quantum computers will be practically very useful, due to the difficulties in scaling them up.

Note that we believe that quantum supremacy has already been achieved.

As in, Google's quantum computer Sycamore is capable of solving a (toy) problem that we currently believe to be infeasible on a classical computer.

Of course, there is the more interesting question of when we will be able to solve practical problems using quantum computing. Experts' median estimate for a practical attack on modern crypto is ~2035.

Regardless, I believe that outside (and arguably within) quantum cryptanalysis the applications will be fairly limited.

The paper in my post goes into more detail about this.

Regardless, I believe that outside (and arguably within) quantum cryptanalysis the applications will be fairly limited.

I might be confused, but did we agree that the most useful applications of quantum computing would be in chemistry and materials science? I thought so, but the above sentence seems to say otherwise...

I think we broadly agree.

I believe that chemistry and materials science are two areas where quantum computing might be a useful tool, since simulating very simple physical systems is something a quantum computer excels at but is arguably significantly slower on a classical computer.

On the other hand, the people more versed in materials science and chemistry whom I talked to seemed to believe that (1) classical approximations will be good enough to tackle problems in these areas and (2) in silico design is not a huge bottleneck anyway.

So I am open to a quantum computing revolution in chemistry and materials science, but moderately skeptical.


Summarizing my current beliefs about how important quantum computing will be for future applications:

  • Cryptanalysis => very important for solving a handful of problems relevant for modern security, with no plausible alternative
  • Chemistry and materials science => plausibly useful, but not revolutionary
  • AI and optimization => unlikely to be useful; huge constraints to overcome
  • Biology and medicine => not useful; systems too complex to model

Yeah, one strong reason to believe your own judgement over prediction market/prediction engine medians is if you think you have important additional information that the community was not able to update on. In this case, the question closed in mid-2018 and the paper came out in 2019.

Thanks Linch; I actually missed that the prediction had closed!

Yeah, the Metaculus UI is not the most intuitive; I should flag this at some point.
