A cryptography engineer's perspective on quantum computing timelines

(words.filippo.io)

413 points | by thadt 14 hours ago

38 comments

  • adrian_b 12 hours ago
    It should be noted that if there is indeed not much time left until a usable quantum computer becomes available, the priority is deploying FIPS 203 (ML-KEM) to establish the secret session keys used in protocols like TLS or SSH.

    ML-KEM is intended to replace both the traditional (finite-field) and the elliptic-curve variants of Diffie-Hellman for creating a shared secret value.

    When FIPS 203, i.e. ML-KEM, is not used, adversaries may record data transferred over the Internet now and become able to decrypt it some years later.

    On the other hand, there is much less urgency to replace the certificates and digital signature methods used today, because in most cases it would not matter if someone became able to forge them in the future: an attacker cannot go back in time to use a future forgery for authentication.

    The only exception would be digital documents that completely replace traditional paper documents of legal significance, such as digitally signed documents proving ownership of something; forging those in the future could be useful to somebody, so a future-proof signing method would make sense for them.

    OpenSSH, OpenSSL, and many other cryptographic libraries and applications already support FIPS 203 (ML-KEM), so it could easily be deployed, at least for private servers and clients, without also replacing the existing authentication methods (e.g. certificates), where post-quantum signing would add a lot of overhead due to much bigger certificates.

    • FiloSottile 12 hours ago
      That was my position until last year, and pretty much a consensus in the industry.

      What changed is that the new timeline might be so tight that (accounting for specification, rollout, and rotation time) the time to switch authentication has also come.

      ML-KEM deployment is tangentially touched on in the article because it's both uncontroversial and underway, but:

      > This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.

      > For key exchange, the migration to ML-KEM is going well enough but: 1. Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years. [...]

      Your comment is essentially the premise of the other article.

      • adrian_b 11 hours ago
        I agree with you that one must prepare for the transition to post-quantum signatures, so that when it becomes necessary the transition can be done immediately.

        However, that does not mean the switch should actually be made as soon as possible, because it would add unnecessary overhead.

        This could be done by distributing a set of post-quantum certificates in advance, while continuing to allow the use of the existing certificates. When necessary, the classic certificates could be revoked immediately.

        • nextaccountic 1 minute ago
          > when it becomes necessary

          Perhaps it's already necessary, or it will be in the following months. We are hearing only about the public developments, not whatever classified work the US is doing.

          I think the analogy with the Manhattan Project is apt. The US has an enormous interest in decrypting communication streams at scale (see Snowden and the NSA's Utah datacenter), and it's known for storing encrypted communications to decrypt later. Well, maybe later is now.

        • snowwrestler 8 hours ago
          > I agree with you that one must prepare for the transition to post-quantum signatures, so that when it becomes necessary the transition can be done immediately.

          Personally, my reading between the lines on this subject as a non-expert is that we in the public might not know when post-quantum cryptography is necessary until quite a while after it is necessary.

          Prior to the public-key cryptography revolution, the state of the art in cryptography was locked inside state agencies. Since then, public cryptographic research has been ahead of or even with state work. One obvious tell was all the attempts to force privately-operated cryptographic schemes to open doors to the government via e.g. the Clipper chip and other appeals to magical key escrow.

          A whole generation of cryptographers grew up in this world. Quantum computing might change things back. We know what the papers from Google and other companies say. Who knows what is happening inside the NSA or military facilities?

          It seems that with quantum computing we are back to physics, and the government does secret physics projects really well. This paragraph really stood out to me:

          > Scott Aaronson tells us that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.

          • raron 7 hours ago
            > Since then, public cryptographic research has been ahead of or even with state work.

            How can we know that?

            > Who knows what is happening inside the NSA or military facilities?

            Couldn't the NSA have found an issue with ML-KEM and be trying to convince people to use it exclusively (not in a hybrid scheme with ECC)?

            • tptacek 6 hours ago
              Couldn't NSA have not known about an issue with ML-KEM, and thus wanted to prevent its commercial acceptance, which it did simply by approving the algorithm?

              What's the PQC construction you couldn't say either thing about?

            • goalieca 7 hours ago
              Follow NSA Suite B and what the US forces on different levels of classification.
        • btilly 10 hours ago
          Planning now on a fast upgrade later is planning on discovering all of the critical bugs after it is too late to do much about them.

          Things need to be rolled out in advance of need, so that you can get a do-over in case there proves to be a need.

        • FiloSottile 11 hours ago
          How do you do revocation or software updates securely if your current signature algorithm is compromised?
          • ekr____ 11 hours ago
            As a practical matter, revocation on the Web is handled mostly by centrally distributed revocation lists (CRLsets, CRLite, etc. [0]), so all you really need is:

            (1) A PQ-secure way of getting the CRLs to the browser vendors. (2) A PQ-secure update channel.

            Neither of these require broad scale deployment.

            However, the more serious problem is that if you have a setting where most servers do not have PQ certificates, then disabling the non-PQ certificates means that lots of servers can't do secure connections at all. This obviously causes a lot of breakage and, depending on the actual vulnerability of the non-PQ algorithms, might not be good for security either, especially if people fall back to insecure HTTP.

            See: https://educatedguesswork.org/posts/pq-emergency/ and https://www.chromium.org/Home/chromium-security/post-quantum...

            [0] The situation is worse for Apple.

            • FiloSottile 11 hours ago
              Indeed, in an open system like the WebPKI it's fine in theory to only make the central authority PQ, but then you have the ecosystem adoption issue. In a closed system, you don't have the adoption issue, but the benefit to making only the central authority PQ is likely to be a lot smaller, because it might actually be the only authority. In both cases, you need to start moving now and gain little from trying to time the switchover.
              • ekr____ 11 hours ago
                > In both cases, you need to start moving now and gain little from trying to time the switchover.

                There are a number of "you"s here, including:

                - The SDOs specifying the algorithms (IETF mostly)

                - CABF adding the algorithms to the Baseline Requirements so they can be used in the WebPKI

                - The HSM vendors adding support for the algorithms

                - CAs adding PQ roots

                - Browsers accepting them

                - Sites deploying them

                This is a very long supply line and the earlier players do indeed need to make progress. I'm less sure how helpful it is for individual sites to add PQ certificates right now. As long as clients will still accept non-PQ algorithms for those sites, there isn't much security benefit so most of what you are doing is getting some experience for when you really need it. There are obvious performance reasons not to actually have most of your handshakes use PQ certificates until you really have to.

                • FiloSottile 11 hours ago
                  Yeah, that's an audience mismatch, this article is for "us." End users of cryptography, including website operators and passkey users (https://news.ycombinator.com/item?id=47664744) can't do much right now, because "we" still need to finish our side.
                • fireflash38 7 hours ago
                  If your HSM vendor isn't actively working on/have a release date for GA PQ, you should probably get a new vendor.
    • layer8 11 hours ago
      > The only exception would be digital documents that completely replace traditional paper documents of legal significance, such as digitally signed documents proving ownership of something; forging those in the future could be useful to somebody, so a future-proof signing method would make sense for them.

      This very much exists. In particular, the cryptographic timestamps that are supposed to protect against future tampering are themselves currently using RSA or EC.

  • phicoh 11 hours ago
    What surprises me is how non-linear this argument is. For a classical attack on, for example, RSA, it is very easy to factor an 8-bit composite. It is a bit harder to factor a 64-bit composite. For a 256-bit composite you need some tricky math, etc. And people did all of that. People didn't start by speculating that you could factor a 1024-bit composite until one day, out of the blue, somebody did it.

    The weird thing we have right now is that quantum computers are absolutely hopeless at doing anything with RSA and, as far as I know, nobody has even tried EC. And that state of the art has not moved much in the last decade.

    And then suddenly, in a few years there will be a quantum computer that can break all of the classical public key crypto that we have.

    This kind of stuff might happen in a completely new field. But people have been working on quantum computers for quite a while now.

    If this is easy enough that in a few years you can have a quantum computer that can break everything then people should be able to build something in a lab that breaks RSA 256. I'd like to see that before jumping to conclusions on how well this works.

    • FiloSottile 11 hours ago
      See https://bas.westerbaan.name/notes/2026/04/02/factoring.html and https://scottaaronson.blog/?p=9665#comment-2029013 which are linked to in the first section of the article.

      > Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise. As Scott Aaronson said:

      > Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

      To summarize, the hard part of scalable quantum computation is error correction. Without it, you can't factor essentially anything. Once you get any practical error correction, the distance between 32-bit RSA and 2048-bit RSA is small. Similarly, the hard part of the bomb was causing a self-sustaining fission chain reaction; once you can do that, making the bomb bigger is not the hard part.

      This is what the experts know, and why they tell us of the timelines they do. We'd do better not to dismiss them by being smug about our layperson's understanding of their progress curve.

      • vlovich123 1 hour ago
        I’ve worked with Bas. I respect him, but he is definitely a QC maximalist in a way. At the very least he believes that caution suggests the public err on the side of believing we will build them.

        The actual challenge is that we still don't know if we can build QC circuits that factor faster than classical computers, both because the required number of qubits has gone from ridiculously impossible to probably still impossible, and because we still don't know how to build circuits with enough qubits to break classical algorithms at larger sizes or faster than classical computers. If you're paying attention only to the breathless reporting, you'd have a very skewed perception of where we're at.

        It’s also easy to deride your critics as just being contrarian on forums, but that complaint serves to distract from the actual lack of real forward progress towards building a QC. We’ve made progress on all kinds of different things except for actually building a QC that can scale to solve non-trivial problems. It’s the same critique as with fusion energy, with the sole difference being that we actually understand how to build a fusion reactor, just not one that’s commercially viable yet, and fusion energy would be far more beneficial than a QC, at least today.

        There’s also the added challenge that quantum computers currently have only one real application, which is as a weapon to break crypto. Other use cases are generally hand-waved as “possible” but it’s unclear they actually are (i.e. you can’t just take any NP problem and make it faster even if you had a quantum computer; even traveling salesman is not known to be faster, and even if it is, it’s likely still not economical on a QC).

        Speaking of experts, Bas is a cryptography expert with a specialty in QC algorithms, not an expert in building quantum computers. Scott Aaronson is also well respected, but he isn’t building QC machines either; he’s a computer scientist who understands the computational theory, but that doesn’t make him a better prognosticator if the entire field is off on a fool’s errand. It just means he’s better able to parse and explain, in context, the actual news coming from the field.

      • phicoh 10 hours ago
        The thing is, producing the right isotopes of uranium is mostly a linear process. It goes faster as you scale up, of course, but each day an enrichment plant produces a given amount. If you double the number of plants you produce twice as much, etc.

        There is no such equivalent for qubits or error correction. You can't say: we produce this much extra error correction per day, so we will hit the target on such-and-such a date.

        There is also something weird in the graph at https://bas.westerbaan.name/notes/2026/04/02/factoring.html. That graph suggests that even with the best error correction in the graph, it is impossible to factor RSA-4 with fewer than 10^4 qubits, which seems very odd. At the same time, Scott Aaronson wrote: "you actually can now factor 6- or 7-digit numbers with a QC", which per the graph suggests that either the error rate must already be very low or that quantum computers with an insane number of qubits exist.

        Something doesn't add up here.

        • FiloSottile 10 hours ago
          We are stretching the metaphor thin, but surely the progress towards an atomic bomb was not measured only in uranium production, in the same way that the progress towards a QC is not measured only in construction time of the machine.

          At the theory level, there were only theories, then a few breakthroughs, then some linear production time, then a big boom.

          > Something doesn't add up here.

          Please consider it might be your (and my) lack of expertise in the specific sub-field. (I do realize I am saying this on Hacker News.)

          • vlovich123 1 hour ago
            Not only that: a huge challenge was manufacturing enough fuel, and that was the real limiting factor. They were working out hard science and engineering, but more fuel definitely meant a bigger bomb in a very real and quite linear way. It was in many ways the bottleneck for the bombs: it literally dictated how big they made the first bomb, and the US manufactured enough for 3 (1 test, 2 to drop).
        • Strilanc 5 hours ago
          > That graph suggests that even with the best error correction in the graph, it is impossible to factor RSA-4 with fewer than 10^4 qubits, which seems very odd.

          It's because the plot is assuming the use of error correction even for the smallest cases. Error correction has minimum quantity and quality bars that you must clear in order for it to work at all, and most of the cost of breaking RSA4 is just clearing those bars. (You happen to be able to do RSA4 without error correction, as was done in 2001 [0], but it's kind of irrelevant because you need error correction to scale so results without it are on the wrong trendline. That's even more true for the annealing stuff Scott mentioned, which has absolutely no chance of scaling.)

          You say you don't see the uranium piling up. Okay. Consider the historically reported lifetimes of classical bits stored using repetition codes on the UCSB->Google machines [1]. In 2014 the stored bit lived less than a second. In 2015 it lived less than a second. 2016? Less than a second. 2017? 2018? 2019? 2020? 2021? 2022? Yeah, less than a second. And this may not surprise you but yes, in 2023, it also lived less than a second. Then, in 2024... kaboom! It's living for hours [4].

          You don't see the decreasing gate error rates [2]? The increasing capabilities [3]? The ever larger error correcting code demonstrations [4]? The front-loaded costs and exponential returns inherent to fault tolerance? TFA is absolutely correct: the time to start transitioning to PQC is now.

          [0]: https://www.nature.com/articles/414883a

          [1]: https://algassert.com/assets/2025-12-24-qec-foom/plot-half-l... (from https://algassert.com/post/2503 )

          [2]: https://arxiv.org/abs/2510.17286

          [3]: https://www.nature.com/articles/s41586-025-09596-6

          [4]: https://www.nature.com/articles/s41586-024-08449-y

        • adgjlsfhk1 10 hours ago
          You can already factor a 6-digit number with a QC, but not with an algorithm that scales polynomially. The linked graph is for optimized variants of Shor's algorithm.
        • cyberax 4 hours ago
          So today you have 1 gram. No bomb. Tomorrow you have 2 grams. Still no bomb.

          ...

          365 days later, you have 365 grams after spending ungodly amounts of energy to separate isotopes. AND STILL NO BOMB! Not even a small one. These scientists are just some bullshit artists.

          52kg later: BOOM!

      • octoberfranklin 8 hours ago
        > produce at least a small nuclear explosion

        The Manhattan Project scientists actually did this before anybody broke ground at Los Alamos. It was called the Chicago Pile. And if the control rods were removed and the SCRAM disabled, it absolutely would have created a "small nuclear explosion" in the middle of a major university campus.

        Given the level of hype and how long it's been going on, I think it's totally reasonable for the wider world to ask the quantum crypto-breaking people to build a Chicago Pile first.

        https://en.wikipedia.org/wiki/Chicago_Pile-1

        • FiloSottile 8 hours ago
          TIL about the Chicago Pile! (I don't know enough about the physics to tell if it could have indeed exploded.)

          > On 2 December 1942

          https://en.wikipedia.org/wiki/Chicago_Pile-1

          > on July 16, 1945

          https://en.wikipedia.org/wiki/Trinity_(nuclear_test)

          Two and a half years. This is still a good metaphor for "once you can make a small one, the large one is not far at all."

        • rcxdude 6 hours ago
          A meltdown is not a nuclear explosion. It's not even what happens if you fail to make a nuke go off properly.
        • tptacek 8 hours ago
          What? No. No matter what anybody did with the Chicago Pile, it would never have produced a small version of a nuclear detonation.
        • defrost 3 hours ago
          In truth, the Chicago Pile crowd were all about power generation and didn't think it was feasible to make a nuclear bomb...

          (Not impossible, more strictly "beyond reach" economically and in processing terms, operating on overestimates of the effort and approach.)

          They ignored letters from Albert Einstein on the topic, they ignored or otherwise disregarded several letters from the Canadian/British MAUD Committee / Tube Alloys group, and it took a personal visit from an Australian for them to sit up and take note that such a thing was actually within reach, although it'd take some manpower and a few challenges along the way.

          * https://en.wikipedia.org/wiki/MAUD_Committee is one place to start on all that.

    • krastanov 3 hours ago
      > And that state of the art has not moved much in the last decade

      This is far from true. On the experimental side, gate fidelities and physical qubit numbers have increased significantly (a couple of orders of magnitude). On the theory side, error-correction techniques have improved astronomically: the overhead of error correction has dropped by many orders of magnitude, with progress especially feverish over the last 4 years.

    • venusenvy47 11 hours ago
      His article specifically mentions that the threat is with the public key exchange, not the encryption that happens after the key exchange.
    • thhoo5886gjggy 11 hours ago
      IIRC the largest number factored with Shor's algorithm still remains 21
      • brohee 20 minutes ago
        Yeah that's treating D-Wave "breaking" RSA-2048 as the fraud that it is. They didn't factor anything, they computed a square root.

        I'm still dubious about the accelerated timeline, given that quite a bit of what is presented as progress in the field is fraud or borderline fraud when inspected closely (e.g. some of the recent Majorana claims by Microsoft are at best overhyped, at worst fraud).

  • tux3 12 hours ago
    This is a good take, there's really not much to argue about.

    >[...] the availability of HPKE hybrid recipients, which blocked on the CFRG, which took almost two years to select a stable label string for X-Wing (January 2024) with ML-KEM (August 2024), despite making precisely no changes to the designs. The IETF should have an internal post-mortem on this, but I doubt we’ll see one

    My kingdom for a standards body that discusses and resolves process issues.

    • adgjlsfhk1 11 hours ago
      I think the anti-hybrid argument the article makes is clearly wrong. Even if CRQCs existed today, we should still be using hybrid algorithms, because early CRQCs will be slow, expensive, and power hungry for at least a decade. The hybrid algorithms at a minimum make the cost of any attack ~$1M, which is way better than half of the PQC algorithms that made it to the 3rd round of the PQC competition (2 of them can be broken on a laptop).
      • scythmic_waves 10 hours ago
        Is it?

        Your reasoning relies on this being true:

        > [CRQCs] will be slow, expensive, and power hungry for at least a decade

        How could you know that? What if it was 5 years? 1 year? 6 months?

        I predict there will be an insane global pivot once Q-day arrives. No nation wants to invest billions in science fiction. Every nation wants to invest billions in a practical reality of being able to read everyone's secrets.

        • adgjlsfhk1 10 hours ago
          The absolute low end of the cost of a QC is the cost of an MRI machine, ~$100k-400k (the cost of cooling the computer to extremely low temperatures). Sure, we expect QCs to get faster and cheaper over time, but putting 100% faith in the security of the PQC algorithms seems like a bad idea with no upside.
          • FiloSottile 10 hours ago
            We can disagree on the tradeoff, but if you see no upside, you are missing the velocity cost of the specification work, the API design, and the implementation complexity. Plus the annoying but real social cost of all the bikeshedding and bickering.
            • adgjlsfhk1 1 hour ago
              All of those costs are at least as high for non-hybrid. The spec and API are just as easy to design (because we have really good and simple ECC libraries), and the bikeshedding and bickering will be a lot less if people stop trying to force pure PQC algorithms that lots of people see as incredibly risky for incredibly little benefit.
          • phicoh 10 hours ago
            It is the paradox of PQC: from a classical security point of view PQC cannot be trusted (except for hash-based algorithms which are not very practical). So to get something we can trust we need hybrid. However, the premise for introducing PQC in the first place is that quantum computers can break classical public key crypto, so hybrid doesn't provide any benefit over pure PQC.

            Yes, the sensible thing to do is hybrid. But that does assume that either PQC cannot be broken by classical computers or that quantum computers will be rare or expensive enough that they don't break your classical public key crypto.

            • FiloSottile 10 hours ago
              > from a classical security point of view PQC cannot be trusted

              [citation needed]

              https://words.filippo.io/crqc-timeline/#fn:lattices

              • Tyyps 1 hour ago
                Just a small selection of recent attacks on a few post-quantum assumptions:

                Isogeny/SIDH: https://eprint.iacr.org/2022/975

                Lattices: https://eprint.iacr.org/2023/1460

                Classic McEliece: https://eprint.iacr.org/2024/1193

                Saying that you can blindly trust PQ assumptions is a very dangerous take.

              • cyberax 4 hours ago
                It's purely a matter of _potential_ issues. The research on lattice-based crypto is still young compared to EC/RSA. Side channels, hardware bugs, and unexpected research breakthroughs can all happen.

                And there are no downsides to adding regular classical encryption. The resulting secret will be at least as secure as the _most_ secure algorithm.

                The overhead of the additional signatures and keys is also not that large compared to regular ML-KEM secrets.

                • tptacek 2 hours ago
                  No it's not. This is the wrong argument. It's telling how many people trying to make a big stink out of non-hybrid PQC don't even get what the real argument is.
                  • cyberax 59 minutes ago
                    ?

                    I'm not entirely sure what the problem is?

                    • tptacek 53 minutes ago
                      It's definitely not that "The research on lattice-based crypto is still young compared to EC/RSA."
                      • fc417fc802 7 minutes ago
                        Perhaps you would care to enlighten us ignorant plebs rather than taunting us?

                        My understanding (obviously as a non expert) matches what cyberax wrote above. Is it not common wisdom that the pursuit of new and exciting crypto is an exercise filled with landmines? By that logic rushing to switch to the new shiny would appear to be extremely unwise.

                        I appreciate the points made in the article that the PQ algorithms aren't as new as they once were and that if you accept this new imminent deadline then ironing out the specification details for hybrid schemes might present the bigger downside between the two options.

                        I mean TBH I don't really get it. It seems like we (as a society or species or whatever) ought to be able to trivially toss a standard out the door that's just two other standards glued together. Do we really need a combinatoric explosion here? Shouldn't 1 (or maybe 2) concrete algorithm pairings be enough? But if the evidence at this point is to the contrary of our ability to do that then I get it. Sometimes our systems just aren't all that functional and we have to make the best of it.

                      • cyberax 37 minutes ago
                        Uhm...?

                        As far as I know, the currently standardized lattice methods are not known to be vulnerable? And the biggest controversy seemed to be the push for inclusion of non-hybrid methods?

                        I'm not following crypto closely anymore, I stopped following the papers around 2014, right when learning-with-errors started becoming mainstream.

      • Tyyps 1 hour ago
        Indeed, anti-hybrid arguments are very dangerous takes at best. People are putting a tremendous amount of faith in very understudied assumptions, in particular given the complexity of the geometric relations and the structure of current lattice-based schemes.
    • OhMeadhbh 12 hours ago
      I missed you at the most recent CFRG meeting.
  • ggm 6 hours ago
    This is the first well reasoned write up which makes me walk back from my "QC is irrelevant, and RSA is fine" position a bit. Well done! Thank you for putting this into terms a skeptic can relate to and understand. It helped me re-frame my thinking on risks here.
    • tennysont 4 hours ago
      A huge part of that, for me, was this from Scott Aaronson:

      > Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

      That quote, alone, removed a lot of assumptions I had been carrying around.

      • rogerrogerr 3 hours ago
        Can anyone give the next layer of detail here? I understand the implications of this analogy, but looking for the underlying reasons the analogy is apt.
  • kro 10 hours ago
    The argument to skip hybrid keys sounds dangerous to me. These algorithms are not widely deployed and thus not real-world tested at all. If there is a simple flaw, suddenly any cheap crawler pwns you while you were trying to protect against state actors.
  • janalsncm 12 hours ago
    Building out a supercomputer capable of breaking cryptography is exactly the kind of thing I expect governments to be working on now. It is referenced in the article, but the analogy to the Manhattan Project is clear.

    Prior to 1940 it was known that clumping enough fissile material together could produce an explosion. There were engineering questions around how to purify uranium and how to actually construct the weapon etc. But the phenomenon was known.

    I say this because there’s a meme that governments are cooking up exotic technologies behind closed doors, which I personally tend to doubt.

    This is an almost perfect analogy to the MP, though. We know exactly what could happen if we clumped enough qubits together. There are hard engineering challenges in actually doing so, and governments are pretty good at clumping dollars together when they want to.

    • O3marchnative 10 hours ago
      > There were engineering questions around how to purify uranium and how to actually construct the weapon etc. But the phenomenon was known.

      FWIW, constructing a weapon with highly enriched uranium is, relatively, simple. At the time, the choice was made to use a gun-type weapon that shot a projectile of highly enriched uranium into a "target" of highly enriched uranium. The scientists were so sure it would work that the design didn't necessitate a live test. This was "Little Boy", which was eventually dropped on Hiroshima.

      Fat Man utilized plutonium, which required an implosion to compress the fissile material that would set off the chain reaction. This is a much more complex undertaking, but it's much more efficient: you need much less fissile material, and more of that fissile material is able to participate in the chain reaction. This design is what allows for nuclear-tipped missiles. The same principles can be applied to a U-235-based weapon as well.

      The implosion-based design is super interesting to read about. One memorable aspect is that the designers realized that placing a tamper of uranium (U-238) around the fissile material allows for a significant improvement in yield. The chain reaction is exponential, so the few extra nanoseconds that the tamper holds the fissile material together lead to a significant increase in yield.

      https://en.wikipedia.org/wiki/Little_Boy

      https://en.wikipedia.org/wiki/Fat_Man

    • burnerRhodov2 50 minutes ago
      >governments are cooking up exotic technologies behind closed doors which I personally tend to doubt.

      You don't use zero days immediately. You stockpile them for when the time is right. A quantum computer is the ultimate zero day.

    • Cider9986 6 hours ago
      >I say this because there’s a meme that governments are cooking up exotic technologies behind closed doors which I personally tend to doubt.

      Like when the government made XKeyscore[1]?

      [1] https://en.wikipedia.org/wiki/XKeyscore

      • tptacek 6 hours ago
        What's exotic about XKeyScore?
    • bitexploder 11 hours ago
      The Manhattan Project employed a significant fraction of America's workforce. A project of that scale will likely never happen again.

      It was also about far more than the science. It was about industrializing the entire production process and creating industrial capability that simply did not exist before.

      • janalsncm 11 hours ago
        My comment was not limited to the U.S. government.

        And the Manhattan Project cost $30B in today’s money. Compared with some of the numbers Congress has allocated recently, I’d call that a bargain.

        • bitexploder 6 hours ago
          I am skeptical you could do something of that scale for $30B today. That is just the dollar cost adjusted for inflation; as a share of the economy, it would probably be hundreds of billions to a trillion dollars now.
      • bastawhiz 10 hours ago
        Does quantum computing need that though? We don't suddenly need a large, unique supply chain for these computers. We don't need to dig up the qubits and refine them. Testing doesn't blow up the computer.
      • rcxdude 6 hours ago
        The Manhattan project had a huge impact but it was not that big as far as efforts in the war went (they managed to hide the budget allocated to the project from most of congress, for example).
  • aborsy 10 hours ago
    I don’t know why the author is so attached to AES-128. AES-256 adds little additional cost, and protects against store-now-decrypt-later attacks (and situations like: “my opinion suddenly changed in a few months”). The industry standard and general recommendation for quantum-resistant symmetric encryption is to use 256-bit keys, so just follow that. Every time, he comes up with all sorts of arguments that AES-128 is good.

    Age should be using 256-bit file keys, and default to PQ keys in asymmetric mode.

    • FiloSottile 8 hours ago
      > The industry standard and general recommendation for quantum resistant symmetric encryption is using 256 bit keys

      It simply is not. NIST and BSI specifically recommend all of AES-128, AES-192, and AES-256 in their post-quantum guidance. All of my industry peers I have discussed this with agree that AES-128 is fine for post-quantum security. It's a LinkedIn meme at best, and a harmful one at that.

      My opinion changed on the timeline of CRQC. There is no timeline in which CRQC are theorized to become a threat to symmetric encryption.
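      For a sense of scale, a back-of-the-envelope sketch of why Grover's algorithm isn't considered a practical threat to AES-128 (the 1 GHz oracle rate and machine count are illustrative assumptions, not a hardware forecast):

      ```python
      import math

      # Grover's search needs ~sqrt(2^128) = 2^64 sequential iterations to
      # recover an AES-128 key, and it parallelizes badly: k machines only
      # buy a sqrt(k) speedup, unlike a classical brute force.
      iterations = 2 ** 64

      rate_hz = 1e9  # assumed (generous) logical AES-oracle rate per machine
      years = iterations / rate_hz / (365.25 * 24 * 3600)
      print(f"{years:.0f} years on one machine")  # ~585 years

      # Even a million machines only divide the time by sqrt(10^6) = 1000:
      years_parallel = years / math.sqrt(1_000_000)
      print(f"{years_parallel:.2f} years on a million machines")
      ```

      Any realistic per-iteration cost estimate for a fault-tolerant quantum AES oracle is far worse than the 1 GHz assumed here, which only strengthens the conclusion.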

    • cwmma 10 hours ago
      he pretty explicitly states that AES 128 is not in any imminent danger and mandating a switch to 256 would distract from the actual thing he thinks needs to happen.
      • lucb1e 9 hours ago
        So why argue about whether AES-256 is worth it if we can just literally replace those 3 characters and be done with the upgrade? This was the smart move already in 2001, when Grover's algorithm was known and computers were fast enough that we wouldn't notice the difference. At least to me, it seems like less bikeshedding will be done if we abandon AES-128 and don't have to deal with all the people left wondering if that's truly ok

        Then again, something something md5. 'Just replace those bytes with sha256()' is apparently also hard. But it's a lot easier than digging into different scenarios under which md5 might still be fine and accepting that use-case, even if only for new deployments

        • brohee 10 minutes ago
          I'm working on just that in some IoT context, and a lots of chips I have to deal with only have hardware support for AES-128, so it's a little more complicated...
        • tptacek 9 hours ago
          Because you cannot "just literally replace those 3 characters and be done with the upgrade".
          • lucb1e 4 hours ago
            That would depend...

            There's a whole lot of cases where the tokens are temporary in nature with an easy cut-over, either dropping old entries or re-encrypting while people are not at work. We tend to think of big commerce like amazon or google that need 24/7 uptime, but most individual systems are not of that scale

            In most other cases you increment the version number for the new data format and copy-paste the encryption/decryption code for each branch of the if statement, substituting 256 for 128. That's still a trivial change to substitute one algorithm for another

            Only if there exists no upgrade path in the first place, you have a big problem upgrading the rest of your cryptography anyway and here it's worth evaluating per-case whether the situation is considered vulnerable before doing a backwards-incompatible change. Just like how people are (still) dealing with md5
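            A minimal sketch of that version-byte dispatch; the toy XOR keystream is a stand-in so the example runs without a crypto library (in real code each branch would call AES-128-GCM / AES-256-GCM), and the function names and header layout are made up for illustration:

            ```python
            import os

            KEY_SIZES = {1: 16, 2: 32}  # v1: 128-bit keys, v2: 256-bit keys

            def _toy_cipher(key: bytes, data: bytes) -> bytes:
                # Placeholder keystream: NOT encryption, just enough to round-trip.
                stream = (key * (len(data) // len(key) + 1))[: len(data)]
                return bytes(a ^ b for a, b in zip(data, stream))

            def encrypt(keys: dict, plaintext: bytes, version: int = 2) -> bytes:
                # One version byte in front selects the branch on the way back in.
                return bytes([version]) + _toy_cipher(keys[version], plaintext)

            def decrypt(keys: dict, blob: bytes) -> bytes:
                version, body = blob[0], blob[1:]
                if version not in KEY_SIZES:
                    raise ValueError(f"unknown format version {version}")
                return _toy_cipher(keys[version], body)

            keys = {v: os.urandom(n) for v, n in KEY_SIZES.items()}
            old = encrypt(keys, b"legacy record", version=1)    # pre-upgrade data
            new = encrypt(keys, decrypt(keys, old), version=2)  # re-encrypted as v2
            assert decrypt(keys, new) == b"legacy record"
            ```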

            • tptacek 2 hours ago
              The moment you say "lot of cases", multiply the cost by $100,000,000.
      • aborsy 10 hours ago
        How would he know? Did he publish papers on it?

        You can’t just throw out “Grover’s algorithm is difficult to parallelize” etc. It’s not the same as implementation, especially when it gets to quantum computers. It’s very specialized.

  • xoa 11 hours ago
    Yeah, sounds like it's time to take this very seriously. Sobering article to read, practical and to the point on risk posture. One brief paragraph, though, deserves extra emphasis, and I don't see it in the comments here yet:

    >In symmetric encryption, we don’t need to do anything, thankfully

    This is valuable because it offers a non-scalable but very important extra layer that a lot of us can implement in a few important places today, or could have for a while now. A lot of people and organizations here may have some critical systems where they can trade meat-space man-power for security, using pre-shared keys and symmetric encryption instead of the more convenient and scalable normal PKI. For me personally the big one is WireGuard, where as of a few years ago I've been able to switch the vast majority of key site-to-site VPNs to using PSKs. This of course requires out-of-band distribution, i.e. huffing it over to every single site and manually sharing every single profile via direct link in person, vs conveniently deployable profiles. But for certain administrative capabilities, where the magic circle in our case isn't very large, this has been doable, and it gives some leeway there, as any traffic being collected now or in the future will be worthless without actual direct hardware compromise.
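    For concreteness, the WireGuard feature in question is the per-peer `PresharedKey`: a 256-bit symmetric key generated with `wg genpsk` and mixed into the handshake on top of the usual public keys. The values below are placeholders:

    ```ini
    # On either machine: wg genpsk > peer-a.psk
    # then carry peer-a.psk to the other site out of band.

    [Peer]
    PublicKey = <peer's public key>
    PresharedKey = <contents of peer-a.psk, same on both ends>
    AllowedIPs = 10.0.0.2/32
    ```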

    That doesn't diminish the importance of PQE and industry action in the slightest and it can't scale to everything, but you may have software you're using capable of adding a symmetric layer today without any other updates. Might be worth considering as part of low hanging immediate fruit for critical stuff. And maybe in general depending on organization and threat posture might be worth imagining a worst-case scenario world where symmetric and OTP is all we have that's reliable over long time periods and how we'd deal with that. In principle sneakernetting around gigabytes or even terabytes of entropy securely and a hardware and software stack that automatically takes care of the rough edges should be doable but I don't know of any projects that have even started around that idea.

    PQE is obviously the best outcome: we ""just"" switch, albeit with a lot of pain from increased compute and changed protocol assumptions, but we're necessarily going to be leaning on a lot of new math and systems that haven't had the tires kicked nearly as long as all the conventional ones have. I guess it's all feeling real now.

  • kwar13 2 hours ago
    One of the authors is from the Ethereum Foundation. Super interesting paper to read. This goes beyond just cryptocurrency. I wrote about it here a while ago: https://kaveh.page/blog/bitcoin-quantum-threat
  • Tyyps 1 hour ago
    I think people have to be extremely careful with this kind of opinion, in particular given such a push for post-quantum crypto while the current state of the art for quantum factorisation is 15 and 21, and the fact that the current assumptions (for the KEMs in particular) are clearly not as well studied as dlog.

    It's maybe good to remember that SIDH was broken in polynomial time by a classical computer 3 years ago... I'm really concerned by the current rush for PQ solutions and what the real intentions behind it are. On a side note, there might even be a world where a quantum computer powerful enough to break 2048-bit RSA will never exist ('t Hooft, Palmer... recent quantum gravity theories).

    • mikestorrent 55 minutes ago
      The largest number factorised on a quantum computer is 8,219,999 on a D-Wave machine (a quantum annealer, so not capable of running Shor's, but capable of being an actual shipping product you can use, unlike gate model machines).

      https://www.nature.com/articles/s41598-024-53708-7

      > Overall, 8,219,999 = 32,749 × 251 was the highest prime product we were able to factorize within the limits of our QPU resources. To the best of our knowledge, this is the largest number which was ever factorized by means of a quantum annealer; also, this is the largest number which was ever factorized by means of any quantum device without relying on external search or preprocessing procedures run on classical computers.

    • fc417fc802 28 minutes ago
      As long as a hybrid approach is taken what is there to worry about? Whereas not adopting PQC in a timely manner is obviously a gamble.
  • bhaak 5 hours ago
    > They weirdly[1] frame it around cryptocurrencies and mempools and salvaged goods or something [...]

    > [1] The whole paper is a bit goofy: it has a zero-knowledge proof for a quantum circuit that will certainly be rederived and improved upon before the actual hardware to run it on will exist. They seem to believe this is about responsible disclosure, so I assume this is just physicists not being experts in our field in the same way we are not experts in theirs.

    The zero-knowledge proof may come across as something of a gimmick, but two of the authors (Justin Drake and Dan Boneh) have strong ties to cryptocurrency communities, where this sort of thing is not unusual.

    I also don’t think it’s particularly strange to focus on cryptocurrencies. This is one of the few domains where having access to a quantum computer ahead of others could translate directly into financial gain, so the incentive to target cryptocurrencies is quite big.

    Changing the cryptographic infrastructure we rely on daily is difficult, but still easier than, for example, in Bitcoin, where users would need to migrate their coins to a quantum-resistant scheme (whenever such a scheme is implemented). Given the limited transaction throughput, migrating all vulnerable coins would take years, and even then, there would remain all those coins whose keys have been lost.

    Satoshi is likely dead, incapacitated, or has lost or destroyed his keys, and thus will not be able to move his coins to safety. Even if he still has access, the movement of an estimated one million BTC, which are currently priced in by the market as permanently lost, would itself be a disruptive price event, regardless of whether it's done with good or bad intentions.

    If you know which way the price will go (obviously way down in this case), you can always profit from such a price move, even if Satoshi's coins were blacklisted and couldn't be sold directly.

    • TacticalCoder 4 hours ago
      > Given the limited transaction throughput, migrating all vulnerable coins would take years ...

      How? I just googled: about 55 million addresses with bitcoin in them, about 144 blocks per day, about 3000 to 5000 tx per block.

      In something like 100 days all the coins would be moved to other addresses.

      I gotta say it'd be hilarious if to speed up that migration-to-quantum-resistant-addresses process, the Bitcoin community were to finally allow bigger blocks.

      EDIT: I take it that if the network had to have full blocks for 100 days, then "shit would happen". Maybe they should force an orderly move: e.g. only addresses ending with "3a" are eligible to be moved in a block whose hash ends with "3a", etc., to prevent congestion?
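      Plugging the comment's rough figures in (these are the googled numbers above, not chain measurements):

      ```python
      # One migration transaction per funded address, every block full.
      addresses = 55_000_000
      blocks_per_day = 144
      txs_per_block = 4_000  # midpoint of the 3000-5000 range

      days = addresses / (blocks_per_day * txs_per_block)
      print(f"{days:.0f} days")  # ~95 days, consistent with "something like 100"
      ```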

  • codethief 9 hours ago
    > Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f*d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon.

    Slightly off-topic but: Does anyone know what the Signal developers plan on doing there to replace SGX? I mean it's not like outside observers haven't been looking very critically at SGX usage in Signal for years (which the Signal devs have ignored), but this does seem to put additional pressure on them.

    • tptacek 9 hours ago
      Signal uses SGX for features every other mainstream E2E messenger does in serverside plaintext.
      • codethief 6 hours ago
        If by "mainstream E2E messenger" you mean WhatsApp and Facebook Messenger, sure. But I didn't realize those were the benchmark these days.
        • tptacek 6 hours ago
          What's the mainstream messenger you're considering that doesn't maintain serverside contact lists?
          • codethief 4 hours ago
            I never said I was considering any.[0] I'm strictly interested in what Signal is doing to keep (or even improve) its security guarantees.

            On that note, Signal wouldn't even depend on Intel SGX for security nearly as much if Signal PINs weren't user-chosen but instead auto-generated with enough entropy. Yes, contact discovery through phone numbers would still be challenging, but secure value recovery[1] just requires a key with enough entropy.

            [0]: For the record, Threema doesn't store your contact list server-side, unless you explicitly opt in. Similarly, now that Signal supports usernames, my understanding is that one could use the app without uploading one's contact list in plaintext.

            [1]: https://signal.org/blog/secure-value-recovery/

    • rcxdude 6 hours ago
      I'm not sure who particularly cares about the stuff Signal is doing with SGX anyway. It always struck me as a 'because we can' move and if you're paranoid enough to worry about it then you're probably paranoid enough to not trust any manufacturer-based attestation anyway (All SGX does is make Intel the root of trust, and it's not like Signal would be less secure than any other third party if SGX were broken).
      • codethief 5 hours ago
        > I'm not sure who particularly cares about the stuff Signal is doing with SGX anyway.

        Security researchers like Matthew Green seem to care[0], the Signal people surely do, I myself do, too. Isn't that enough to raise that question?

        > if you're paranoid enough to worry about it

        You make it seem like that's an outlandish thought, when in reality there have been tons of reported vulnerabilities for SGX. And now QC represents another risk.

        > it's not like Signal would be less secure than any other third party if SGX were broken

        That's a weird benchmark. Shouldn't Signal rather be measured by whether it lives up to the security promises it makes? Signal's whole value proposition is that it's more secure than "third parties".

        [0]: https://blog.cryptographyengineering.com/category/signal/

  • wuiheerfoj 4 hours ago
    I buy the argument 'we should prepare for Q-Day as crypto agility is hard', but the newest paper doesn’t change the timeline meaningfully.

    Given that TFA accepts that error correction is the bottleneck for progress, that the gap between any and lots of error correction is small, and that we presently have close to zero error correction, nothing has practically changed with the reduced qubit requirements.

    Of course, it’s totally fine to have and announce a change of view on the topic, though I don’t see how the Google paper materially requires it.

  • scorpionfeet 11 hours ago
    This is exactly how customers who do threat modeling see PQC. HN can armchair QB this all they want, the real money is moving fast to migrate.

    The analogy to a small atomic bomb is on point.

  • palata 11 hours ago
    What is the consequence on e.g. Yubikeys (or say the Android Keystore)? Do I understand correctly that those count as "signature algorithms" and are a little less at risk than "full TEEs" because there is no "store now, decrypt later" for authentication?

    E.g. can I use my Yubikey with FIDO2 for SSH together with a PQ encryption, such that I am safe from "store now, decrypt later", but can still use my Yubikey (or Android Keystore, for that matter)?

    • FiloSottile 11 hours ago
      This article is more aimed at those specifying and implementing WebAuthN and SSH, than at those using them.

      They/we need to migrate those protocols to PQ now, so that you all can start migrating to PQ keys in time, including the long tail of users that will not rotate their keys and hardware the moment the new algorithms are supported.

      For example, it might be too late to get anything into Debian for it to be in oldstable when the CRQCs come!

      • palata 9 hours ago
        > This article is more aimed at those specifying and implementing WebAuthN and SSH, than at those using them.

        Sure, I'm just trying to understand the consequences of that. Felt great to finally have secure elements on smartphones and laptops (or Yubikeys), protecting against the OS being compromised (i.e. "you access my OS, but at least you can't steal my keys").

        I was wondering if PQ meant that when it becomes reality, we just get back to a world where if our OS is compromised, then our keys get compromised, too. Or if there is a middle ground in the threat model, e.g. "it's okay to keep using your Yubikey, because an attacker would need physical access to your key, specialised hardware AND access to a quantum computer in order to break it". Versus "you can stop bothering with security keys, because with store-now-decrypt-later, everything you do today with your security keys will eventually get broken by quantum computers anyway".

        • FiloSottile 8 hours ago
          If you are doing authentication with those hardware keys, you will probably be fine, if we do our job fast enough. Apple's Secure Enclave already supports some PQ signatures (although annoyingly not ML-DSA-44 apparently?) and I trust Yubico is working on it.

          If you are doing encryption, then you do have reason to worry, and there aren't great options right now. For example if you are using age you should switch to hybrid software ML-KEM-768 + hardware P-256 keys as soon as they are available (https://github.com/str4d/age-plugin-yubikey/pull/215). This might be a scenario in which hybrids provide some protection, so that an attacker will need to compromise both your OS and have a CRQC. In the meantime, depending on your threat model and the longevity of your secrets (and how easily they can be rotated in 1-2 years), it might make sense to switch to software PQ keys.
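          In the abstract, a hybrid of this kind just feeds both shared secrets into one KDF, so both must be broken to recover the key. A minimal sketch, where the `ss_*` values stand in for real ML-KEM-768 and P-256 ECDH outputs (real combiners such as X-Wing also bind the ciphertexts and public keys into the hash):

          ```python
          import hashlib
          import os

          def combine(ss_pq: bytes, ss_ec: bytes, context: bytes) -> bytes:
              # An attacker must recover BOTH inputs to learn the output:
              # breaking P-256 with a CRQC is not enough while ML-KEM holds,
              # and a lattice break is not enough while P-256 holds.
              return hashlib.sha256(ss_pq + ss_ec + context).digest()

          ss_pq = os.urandom(32)  # stand-in for the ML-KEM-768 shared secret
          ss_ec = os.urandom(32)  # stand-in for the P-256 ECDH shared secret
          key = combine(ss_pq, ss_ec, b"example-hybrid-v1")
          assert len(key) == 32
          ```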

          • palata 8 hours ago
            Thanks a lot, that helps!

            > This might be a scenario in which hybrids provide some protection, so that an attacker will need to compromise both your OS and have a CRQC.

            Did you mean "your OS and have a CRQC" here, or "your Yubikey and have a CRQC"?

            • FiloSottile 7 hours ago
              I mean "your OS and have a CRQC" because they will need to compromise the software PQ key by compromising the OS, and derive the hardware YubiKey private key using the CRQC.
              • palata 7 hours ago
                Oh right, I got it now!
    • amluto 11 hours ago
      Your Yubikey itself is doomed.

      If you are doing a post-quantum key exchange and only authenticating with the Yubikey, then you are safe from after-the-fact attacks. Well, as long as the PQ key exchange holds up, and I am personally not as optimistic about that as I’d like to be.

      • palata 9 hours ago
        > If you are doing a post-quantum key exchange and only authenticating with the Yubikey, then you are safe from after-the-fact attacks.

        Let me rephrase it to see if I understand correctly: so it is fine to keep using my security keys today for authentication (e.g. FIDO2?), but everything else should use PQ algorithm because the actual data transfers can be stored now and decrypted later.

        Meaning that today (and for a few years), my Yubikey still protects me from my key being stolen when my OS is compromised.

        Correct?

        • amluto 4 hours ago
          Sounds right to me.
      • elevation 9 hours ago
        Looking forward to a PQ yubikey rev. I would buy a box of them today so I could start experimenting!

        Another challenge of the transition is how much silicon we have yet to even implement. Smart cards? Mobile acceleration/offloading? We're at the mercy of vendors.

      • ls612 9 hours ago
        Is this also true for other TPM/snitching/DRM chips out there? I.e. will every existing device eventually become jailbreakable in the future, or will we unfortunately not even get that benefit from all this?
        • ameliaquining 7 hours ago
          The timeline here is for when major governments have access to CRQCs. It will be much longer than that (barring an AI singularity or something) before you have access to one.
  • btdmaster 8 hours ago
    > “Doesn’t the NSA lie to break our encryption?” No, the NSA has never intentionally jeopardized US national security with a non-NOBUS backdoor, and there is no way for ML-KEM and ML-DSA to hide a NOBUS backdoor.

    The most concrete issue for me, as highlighted by djb, is that when the NSA insists against hybrids, vendors like telecommunications companies will hand-write poor implementations of ML-KEM, to save memory/CPU time on their constrained hardware, that will have stacks of timing side channels for the NSA to break. Meanwhile X25519 already has standard, deployed implementations that don't have such issues, which the NSA presumably cannot break (without spending millions of dollars per key on a hypothetical quantum attack, a lot more expensive than side channels).
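    To illustrate the class of bug being worried about, the textbook example is an early-exit comparison whose runtime leaks how many leading bytes match (a generic illustration, not KyberSlash itself):

    ```python
    import hmac

    def leaky_compare(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:       # returns earlier the sooner bytes differ, so the
                return False  # runtime leaks the length of the matching prefix
        return True

    def safe_compare(a: bytes, b: bytes) -> bool:
        # Constant-time comparison from the standard library.
        return hmac.compare_digest(a, b)

    assert leaky_compare(b"tag", b"tag") and safe_compare(b"tag", b"tag")
    assert not leaky_compare(b"tag", b"tax") and not safe_compare(b"tag", b"tax")
    ```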

    • Avamander 8 hours ago
      > The most concrete issue for me, as highlighted by djb, is that when the NSA insists against hybrids

      The fact that only NSA does that and they really have no convincing arguments seems like the biggest reason why the wider internet should only roll out hybrids. Then possibly wait decades for everything to mature and then reconsider plain modes of operation.

    • FiloSottile 8 hours ago
      Thus succeeding at making the telecommunications vendors used for Top Secret US national security data less secure, the obvious goal of the US National Security Agency, and the only reason they wouldn't use the better cryptography designed by Dr. Bernstein. /s

      Truly, truly can't understand why anyone finds this line of reasoning plausible. (Before anyone yells Dual_EC_DRBG, that was a NOBUS backdoor, which is an argument against the NSA promoting mathematically broken cryptography, if anything.)

      Timing side channels don't matter to ephemeral ML-KEM key exchanges, by the way. It's really hard to implement ML-KEM wrong. It's way easier to implement ECDH wrong, and remember that in this hypothetical you need to compare to P-256, not X25519, because US regulation compliance is the premise.

      (I also think these days P-256 is fine, but that is a different argument.)

      • cassonmars 4 hours ago
        I genuinely do not understand how someone working in the capacity that you do, for things that matter universally for people, can contend that an organization who is intentionally engaging in NOBUS backdoors can be remotely trusted at all.

        That is insanely irresponsible and genuinely concerning. I don't care if they have a magical ring that defies all laws of physics and assuredly prevents any adversary stealing the backdoor. If an organization is implementing _ANY_ backdoor, they are an adversary from a security perspective and their guidance should be treated as such.

        • FiloSottile 4 hours ago
          The world just doesn’t work in such a binary way. Forming a mental model of an entity’s incentives, goals, capabilities, and dysfunctions will serve you much better than making two buckets for trusted parties and adversaries.
          • cassonmars 4 hours ago
            As you are someone building cryptographic libraries used by people all over the world, which includes those who might be seen as "enemies" by the organization in question, this is not a gradient — it's quite binary in nature.
      • raron 6 hours ago
        > Thus succeeding at making the telecommunications vendors used for Top Secret US national security data less secure, the obvious goal of the US National Security Agency

        NSA still has the secret Suite A systems for their most sensitive information. If they think those are better than the current public algorithms, and their goal is for telecommunications vendors to have better encryption, then why don't they publish them so telcos could use them?

        > Truly, truly can't understand why anyone finds this line of reasoning plausible. (Before anyone yells Dual_EC_DRBG, that was a NOBUS backdoor, which is an argument against the NSA promoting mathematically broken cryptography, if anything.)

        The NSA weakened DES against brute-force attack by reducing the key size (while making it stronger against differential cryptanalysis, though).

        https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA's...

        Also NSA put a broken cipher in the Clipper Chip (beside all the other vulnerabilities).

      • btdmaster 6 hours ago
        > Thus succeeding at making the telecommunications vendors used for Top Secret US national security data less secure, the obvious goal of the US National Security Agency, and the only reason they wouldn't use the better cryptography designed by Dr. Bernstein. /s

        I guess the NSA thinks they're the only one that can target such a side channel, unlike, say, a foreign government, which doesn't have access to the US Internet backbone, doesn't have as good mathematicians or programmers (in NSA opinion), etc.

        > Timing side channels don't matter to ephemeral ML-KEM key exchanges, by the way. It's really hard to implement ML-KEM wrong. It's way easier to implement ECDH wrong, and remember that in this hypothetical you need to compare to P-256, not X25519, because US regulation compliance is the premise.

        Except for KyberSlash (I was surprised when I looked at the bug's code, it's written very optimistically wrt what the compiler would produce...)

        So do you think vendors will write good code within the deadlines between now and... 2029? I wouldn't bet my state secrets on that...

        • FiloSottile 6 hours ago
          > KyberSlash

          That's a timing side-channel, irrelevant to ephemeral key exchanges, and tbh if that's the worst that went wrong in a year and a half, I am very hopeful indeed.

  • upofadown 9 hours ago
    So this is the exciting paper:

    * https://arxiv.org/pdf/2603.28627

    The new thing here seems to be the use of the neutral atom technique. Supposedly we are up to 96 entangled qubits for a second or two based on neutral atoms.

    Shouldn't that be enough capability to factor 15 using Shor's?
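    For what "factor 15 with Shor's" actually entails: only the period finding is quantum; the rest is classical post-processing. A sketch, with the period found by classical brute force standing in for the quantum step:

    ```python
    from math import gcd

    def classical_order(a: int, n: int) -> int:
        # Smallest r with a^r = 1 (mod n); this is the step a QC accelerates.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    n, a = 15, 7
    assert gcd(a, n) == 1            # a must be coprime to n
    r = classical_order(a, n)        # 7^4 = 2401 = 1 (mod 15), so r = 4
    p = gcd(pow(a, r // 2) - 1, n)   # gcd(48, 15) = 3
    q = gcd(pow(a, r // 2) + 1, n)   # gcd(50, 15) = 5
    print(p, q)  # 3 5
    ```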

  • EdNutting 5 hours ago
    The OP should take a look at Secqai: potentially server-class motherboard management processors and beyond, which will implement PQ security and hardware-enforced memory safety (from what I recall):

    https://www.secqai.com/

    • EdNutting 5 hours ago
      This is in response to the comments in the article about TEEs and similar - such as TPMs - not catching up fast enough.

      Also companies like PQShield.

      The hardware (IP) exists to solve this in time, and is being integrated into products gradually.

      No idea how widespread it will become or over what timescale.

  • sans_souse 1 hour ago
    I know this may be outside of scope, but I am very curious as to any thoughts you may have on the potential for a ternary system at the hardware level?
  • Animats 11 hours ago
    We'll know it's been cracked when all the lost Bitcoins start to move.
    • oncallthrow 8 hours ago
      No, it will likely be a state actor who reaches it first, who will never give away such a capability so easily
    • xvector 10 hours ago
      The bitcoins won't move until the technology is commoditized (i.e. well past mainstream usage by governments).

      Having a CRQC and your adversaries not knowing is far more valuable than the few hundred billion you could get from cracking (and tanking) BTC.

    • sunshine-o 11 hours ago
      Yep, I was looking into it and from what I understand:

      - There is a dark outlook on Bitcoin, as the community and devs can't seem to coordinate, especially on what to do with the "Satoshi coins"

      - Ethereum has a hard but clear path (pretty much full rewrite) with a roadmap [0]

      - The highly optimized "fast chains" (Solana & co) are in a lot of trouble too.

      It would be funny if Bitcoin the asset ended up migrating to Ethereum as another ERC-20 token

      - [0] https://pq.ethereum.org/

      • PretzelPirate 10 hours ago
        > pretty much full rewrite

        This is far from my understanding. Changing out this signature scheme is hard work, but doesn't require a rewrite of the VM.

        • sunshine-o 9 hours ago
          Ethereum is way more complex than, say, Bitcoin, and all parts are affected. This is not just about the "signature scheme".

          The fact that the signature size is multiplied by ~10 will greatly affect things like blockspace (which I guess is even more of a problem with Bitcoin!)

          Also, they are, I believe, the only blockchain that puts an emphasis on allowing a large number of validators to run on very modest hardware (in the ballpark of an RPi, N100, or phone).

          My understanding is they will need to pack it with a larger upgrade to solve all those problems, the so called zkVM/leanVM roadmap.

          And then there are the L2 that are an integral part of the ecosystem.

          So this is the greatest upgrade ever made to Ethereum, pretty much a full rewrite, larger than the transition to proof of stake. I remember that before the proof-of-stake migration they were planning to redo the EVM too (with something WASM-based at the time), but they had to abandon that plan. Now it seems there is no choice but to do it.

      • nullc 9 hours ago
        Adding new signature schemes to bitcoin is relatively trivial and has been done previously (today Bitcoin supports both schnorr and ecdsa signatures).

        Existing PQ standards have signatures with the wrong efficiency tradeoffs for use in Bitcoin-- large signatures that are durable under heavy use and support fast signing, while for Bitcoin signature+key size is critical, keys should be close to single use, and signing time is irrelevant.

        To the extent that I've seen any opposition, it's only been related to schemes that were too inefficient, or to proposals to confiscate the assets of people not adopting the proponent's scheme (which immediately raises concerns about backdoors and consent).

        There is active development for PQ signature standards tailored to Bitcoin's needs, e.g. https://delvingbitcoin.org/t/shrimps-2-5-kb-post-quantum-sig... and I think progress looks pretty reasonable.

        Claims that there is no development are, as far as I can tell, just backscatter from a massive fraud scheme that is ongoing (actually, at least two distinct cons with an almost identical script). There are criminal fraudsters out there seeking investments in a scheme to raise money to build a quantum computer and steal Bitcoins. One of them has reportedly raised funds approaching a substantial fraction of a billion dollars from victims. For every one sucker they convince to give them money, they probably create 99 other people panicked about it (since believing it'll work is a prerequisite to handing over your money).
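        (For a rough sense of the size gap described above, here are illustrative public-key and signature sizes in bytes, taken from BIP 340 for Schnorr and the final FIPS 204/205 parameter sets; actual on-chain costs depend on encoding.)

```python
# Approximate sizes in bytes; per-output cost is dominated by pubkey+sig.
schemes = {
    "Schnorr (secp256k1, BIP 340)": {"pubkey": 32,   "sig": 64},
    "ML-DSA-44 (FIPS 204)":         {"pubkey": 1312, "sig": 2420},
    "SLH-DSA-128s (FIPS 205)":      {"pubkey": 32,   "sig": 7856},
}
for name, s in schemes.items():
    total = s["pubkey"] + s["sig"]
    print(f"{name}: {total} bytes ({total // 96}x Schnorr)")
```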

        • olalonde 2 hours ago
          > proposals to confiscate the assets of people not adopting the proponent's scheme (which immediately raises concerns about backdoors and consent)

          They're going to lose those assets regardless, either to the first hacker with a QC or via a protocol-level burn. The latter is arguably better for the network's long-term health, as it reduces circulating supply rather than subsidizing an attacker.

          I can understand disagreeing about timelines but is there a flaw in the logic that once the underlying crypto is broken, "consent" is a moot point?

        • ArguMisrepre 2 hours ago
          [dead]
  • kro 10 hours ago
    I wonder, what is the impact of this on widely deployed smartcards like credit cards / eID passports?

    Aren't they relying on asymmetric signing as well?

    • lucb1e 9 hours ago
      Yes. They will need to switch, so that hardware needs to be swapped out.
  • krunck 11 hours ago
    This would also be a good time for certain governments to knowingly push broken PQ KE standards while there is a panicked rush to get PQ tech in place.
    • FiloSottile 11 hours ago
      Remember that the entities most likely to heed those governments' recommendations are those providing services to said government and its military.

      I feel like the story of the NSA pushing a (definitely misguided, and obviously later exploited by adversaries) NOBUS backdoor has percolated poorly into the collective consciousness, with the NOBUS part missed entirely.

      See https://keymaterial.net/2025/11/27/ml-kem-mythbusting/ for whether the current standards can hide NOBUS backdoors. It talks about ML-KEM, but all recent standards I read look like this.

      • adgjlsfhk1 10 hours ago
        IMO the idea that the NSA only uses NOBUS backdoors is obviously false (see, for example, DES's 56-bit key size). The NSA is perfectly capable of publicly calling for an insecure algorithm and then having secret documentation saying not to use it for anything important.
        • FiloSottile 10 hours ago
          DES is the algorithm that was secretly modified by the NSA to protect it against differential cryptanalysis. Capping a key size is hardly a "backdoor."

          Also, that was the time of export ciphers and Suite A vs Suite B, which were very explicit about there being different algorithms for US NatSec vs. everything else. This time there's only CNSA 2.0, which is pure ML-KEM and ML-DSA.

          So no, there is no history of the NSA pushing non-NOBUS backdoors into NatSec algorithms.

        • bawolff 10 hours ago
          > see for example DES's 56 bit key size

          In fairness, that was from 1975. I don't particularly trust the NSA, but i dont think things they did half a century ago is a great way to extrapolate their current interests.

          • raron 6 hours ago
            AFAIK they did a lot of illegal things in the Snowden-era, too.
    • some_furry 11 hours ago
      Which governments are you thinking of?
  • pdhborges 13 hours ago
    What do you recommend as reading material for someone who was in college a while ago (before AE modes got popular) to get up to speed with the new PQ developments?
    • FiloSottile 13 hours ago
      If you want something book-shaped, the 2nd edition of Serious Cryptography is updated to when the NIST standards were near-final drafts, and has a nice chapter on post-quantum cryptography.

      If you want something that includes details on how they were deployed, I'm afraid that's all very recent and I don't have good references.

  • griffzhowl 10 hours ago
    noob question: can't we just use longer classical keys, at least as a stop gap?
    • adgjlsfhk1 9 hours ago
      They're a pretty bad stopgap: https://bas.westerbaan.name/notes/2026/04/02/factoring.html. Going to RSA-32000 only buys you ~a year once QCs can factor RSA-2048. In order to get a standard that would resist quantum attacks for a realistic length of time, we would need MB- to GB-sized keys at least (see https://eprint.iacr.org/2017/351.pdf for a hilarious post-quantum RSA attempt that used terabyte-size keys)
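      (A back-of-envelope sketch of that scaling argument, assuming, very roughly, that Shor's needs on the order of 2n logical qubits and a circuit of size ~n^3 for an n-bit modulus; this is illustration, not a real resource estimate.)

```python
# Once a machine can factor RSA-2048, larger moduli cost only
# polynomially more: ~2n logical qubits and ~n^3 work (rough model).
baseline = 2048
for n in [2048, 8192, 32768]:
    qubits = 2 * n
    work = (n / baseline) ** 3  # runtime relative to RSA-2048
    print(f"RSA-{n}: ~{qubits} logical qubits, ~{work:.0f}x the runtime")
```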
    • oncallthrow 8 hours ago
      No, and even if we could, it would require a migration approaching the same difficulty as a migration to PQ, at which point why not just migrate to PQ?
  • OhMeadhbh 12 hours ago
    In rebuttal, Peter Gutmann seems to think the progress towards quantum computing devices which can break commonly used public key crypto systems is not moving especially quickly: https://eprint.iacr.org/2025/1237
    • schmichael 12 hours ago
      That's not a rebuttal. The post references the paper and a rebuttal to it from an expert in the field.
      • john_strinlai 10 hours ago
        >and a rebuttal to it from an expert in the field.

        While I agree with Filippo, the way you worded this makes me think that you may not be aware that Gutmann is also an expert in the field. So, if you are giving Filippo weight because he is an expert, it is worth giving some amount to Gutmann as well.

        • schmichael 10 hours ago
          I apologize if I flippantly dismissed the fact that experts disagree. That was not my intention. I was trying to point out that OP does address the referenced counter-point post specifically.
          • john_strinlai 10 hours ago
            >Sorry if I flippantly dismissed the fact that experts disagree!

            I don't really get your reply / insincere apology.

            If you are going to bother mentioning Filippo's expertise in the first place, it's just weird to frame it the way you did. That is how someone would typically dismiss some random blogger with an appeal to authority. But if both people are authorities, it doesn't make sense.

            If you already knew, then my comment can serve as context for future readers who don't, and who might otherwise dismiss Gutmann as a non-expert being rebutted by an expert.

      • OhMeadhbh 12 hours ago
        Damn. It's like I insulted Vault.

        Also, I went over Filippo's post again and still can't see where it references the Gutmann / Neuhaus paper. Are we talking about the same post?

        • tkhattra 11 hours ago
          From Filippo's post: "Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums."
          • commandersaki 10 hours ago
            Is that even a rebuttal? Seems like just a dismissal without any substance. I expect in 10 years the predictions will be wrong, kind of like Y2K all over again.
          • OhMeadhbh 9 hours ago
            If only we had a technology where an author could specify a unique identifier and name of another author's paper. Something that could cite a different paper and link to it.
        • xvector 11 hours ago
          From the abstract:

          > This paper presents implementations that match and, where possible, exceed current quantum factorisation records using a VIC-20 8-bit home computer from 1981, an abacus, and a dog.

          From the link:

          > Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise[1]. As Scott Aaronson said[2]:

          > > Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

          [1]: https://bas.westerbaan.name/notes/2026/04/02/factoring.html

          [2]: https://scottaaronson.blog/?p=9665#comment-2029013

  • amluto 11 hours ago
    I was in this field a while back, and I always found it baffling that anyone ever believed in the earlier large estimates for the size of a quantum computer needed to run Shor's algorithm. For a working quantum computer, Shor's algorithm is about as difficult as modular exponentiation or elliptic curve scalar multiplication: if it can compute or verify signatures or encrypt or decrypt, then it can compute discrete logs. To break keys of a few hundred bits, you need a few hundred qubits plus not all that much overhead. And the error correction keeps improving all the time.

    Also...

    > Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f**d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon.

    This part is embarrassing. We’ve had hash-based signatures that are plenty good for this for years and inspire more confidence for long-term security than the lattice schemes. Sure, the private keys are bigger. So what?

    We will also need some clean way to upgrade WebAuthn keys, and WebAuthn key management currently massively sucks.
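    (To make the hash-based option concrete: a toy Lamport one-time signature, the simplest hash-based scheme, is a few dozen lines. Real attestation roots would use LMS/XMSS or SLH-DSA rather than this sketch, but it shows why "the private keys are bigger" is the main cost: ~16 KiB keys and ~8 KiB signatures, with security resting only on the hash function.)

```python
import hashlib
import secrets

def H(b):
    return hashlib.sha256(b).digest()

def lamport_keygen():
    # Two rows of 256 random 32-byte preimages; the public key is
    # their hashes. WARNING: a key pair must sign only ONE message.
    sk = [[secrets.token_bytes(32) for _ in range(256)] for _ in range(2)]
    pk = [[H(x) for x in row] for row in sk]
    return sk, pk

def _bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, msg):
    # Reveal one preimage per message-digest bit.
    return [sk[b][i] for i, b in enumerate(_bits(msg))]

def lamport_verify(pk, msg, sig):
    return all(H(sig[i]) == pk[b][i] for i, b in enumerate(_bits(msg)))

sk, pk = lamport_keygen()
sig = lamport_sign(sk, b"attest this firmware")
print(lamport_verify(pk, b"attest this firmware", sig))  # True
```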

    • hujun 9 hours ago
      > Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f*d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon.

      Compared to SGX, a more critically impacted component is the TPM chip: secure/measured boot depends on the TPM, and then there's the cost of replacing all the servers and OSes ...

      • amluto 7 hours ago
        A lot of TPMs are “fTPM”s, which are implemented in something resembling software. It’s an open question whether the hardware in question has usable roots of trust, but a lot of TPM applications don’t actually require endorsement. And some servers have plug-in TPMs.

        Of course, many critical components on a motherboard and CPU verify their firmware using non-post-quantum keys, which is another issue.

  • vasco 36 minutes ago
    > I simply don’t see how a non-expert can look at what the experts are saying, and decide “I know better, there is in fact < 1% chance.” Remember that you are betting with your users’ lives.

    The problem is that the experts don't tell the literal truth; they say whatever game-theoretic version of the world they came up with will make people do what they think people should do. If experts just said the literal truth it'd be different, and then when they walked it back it would be understandable.

    But when it later becomes clear that the experts told outright lies because they thought it'd induce the right behavior, that goes out the window.

  • nodesocket 8 hours ago
    The first and most obvious target will be Bitcoin. Its market cap today is $1.4T. That's a gigantic reward for any state actor or entity with the resources and budget to break it.

    Does this mean Bitcoin is going to $0? Absolutely not; it's just going to take the community organizing and putting in the gigantic effort to make the changes. Frankly, I'm not personally clear on whether that means all existing cold wallets need to be flashed/replaced, all existing Bitcoin miner software needs to be updated, or all existing Bitcoin node software needs to be updated.

    • goalieca 6 hours ago
      Bitcoin value will plummet if people lose trust in it.
  • atlasagentsuite 6 hours ago
    [flagged]
  • burnerRhodov2 1 hour ago
    TL;DR:

    The real problem is building a system that can survive noise, errors, and decoherence. Once you solve that, scaling it up is non-trivial but has a very exponential path.

  • bjourne 11 hours ago
    > Traveling back from an excellent AtmosphereConf 2026, I saw my first aurora, from the north-facing window of a Boeing 747.

    Given the author's "safety first" stance on PQC, it seems a bit incongruous to continue flying to conferences...

  • commandersaki 10 hours ago
    RemindMe! 3 years "impending doom"
  • vonneumannstan 13 hours ago
    This seems like something uniquely suited to the startup ecosystem, i.e. offering PQ Encryption Migration as a Service. PQ algorithms exist, and now there's a large lift required to get them into the tech, with substantial possible value.
    • hlieberman 12 hours ago
      … really? This is simultaneously so far down in the plumbing and extremely resistant to measuring the impact of, I can’t imagine anyone building a company off of this that’s not already deep in the weeds (lookin’ at you, WolfSSL).

      The idea that a startup would be competitive in the VC “the only thing that matters are the feels” environment seems crazy to me.

      • OhMeadhbh 12 hours ago
        Yeah... I spent the 90s working for RSADSI and Certicom implementing algorithms. Crypto is a vitamin, not an aspirin. Hardly anyone is capable of properly assessing risk in general, much less the technical world of information risk management. Telling someone they should pay you money to reduce the impact of something that may or may not happen in the future is not a sales win.
  • Sparkyte 12 hours ago
    There is always a price to encryption. The cost goes up the more you have to cater to different and older encryption schemes while supporting the latest.
  • OsrsNeedsf2P 12 hours ago
    Why do we "need to ship"? 1,000-qubit quantum computers are still decades away at this point
    • OhMeadhbh 12 hours ago
      So... In 2013 I was working for Mozilla adding TLS 1.1 and 1.2 support into Firefox. It turns out that some of the extensions common in 1.1, in some instances, caused PDUs to grow beyond 16k (or maybe it was 32k; can't remember). This caused middleboxes to barf. Sure, they shouldn't barf, but they did. We discovered the problem (or rather, one of our users discovered the problem) by increasing the key size on server and client certs to push PDU sizes over the limit.

      At the very least, you want to start using hybrid legacy/PQC algorithms so engineers at Cisco will know not to limit key sizes in PDUs to 128 bytes.
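      (Rough size arithmetic on why those limits bite, using the X25519 share size from RFC 7748 and the ML-KEM-768 sizes from FIPS 203; the hybrid group is what browsers deploy as X25519MLKEM768.)

```python
# Key-share sizes on the wire, in bytes.
X25519_SHARE = 32           # RFC 7748
MLKEM768_ENCAPS_KEY = 1184  # FIPS 203: client's share in X25519MLKEM768
MLKEM768_CIPHERTEXT = 1088  # FIPS 203: server's share

client_share = X25519_SHARE + MLKEM768_ENCAPS_KEY
server_share = X25519_SHARE + MLKEM768_CIPHERTEXT
# Well beyond any "key shares are a few dozen bytes" assumption
# baked into old middleboxes.
print(client_share, server_share)  # 1216 1120
```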

      • ekr____ 11 hours ago
        A few points here: There is already very wide use of PQ algorithms in the Web context [0], which is the most problematic one because clients need to be able to connect to any site and there's no real coordination between sites and clients. So we're exercising the middleboxes already.

        The incident you're thinking of doesn't sound familiar. None of the extensions in 1.1 were really that big, though of course certs can get that big if you work hard enough. Are you perhaps thinking instead of the 256-511 byte ClientHello issue addressed in [1]?

        [0] https://blog.cloudflare.com/pq-2025/

        [1] https://datatracker.ietf.org/doc/html/rfc7685

        • OhMeadhbh 8 hours ago
          Oh hey Eric. I think I was wrong saying it was 1.1. It was a middlebox that ignored max fragment negotiation, which I think was introduced in 1.2. IIRC, the middlebox claimed to support it for 1.2 connections, but silently failed by blackholing the connection. They eventually crafted a fix, but it was an annoying year waiting for network operators to upgrade the firmware on their routers.
      • kevvok 2 hours ago
        This issue is already being tracked by a shiny website: https://tldr.fail/
  • munrocket 12 hours ago
    Yes, this is why I invested in QRL crypto. With the latest updates and no T1 exchange listing, it looks like a good opportunity to grow.