OCSP Service Has Reached End of Life

(letsencrypt.org)

194 points | by pfexec 16 hours ago

6 comments

  • lol768 14 hours ago
    The ship has very much sailed now with ballot SC63, and this is the result, but I still don't think CRLs are remotely a perfect solution (nor do I think OCSP was unfixable). You run into so many problems with the size of them, the updates not propagating immediately, etc. It's just an ugly solution to the problem, and you then have to introduce further hacks (Bloom filters) atop it all to make the whole mess work. I'm glad that Mozilla have done lots of work in this area with CRLite, but it does all feel like a bodge.
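
    A toy sketch of the Bloom filter point, in Python (nothing like CRLite's actual format): the filter can answer "definitely not revoked" with certainty, but only "possibly revoked" otherwise, and those false positives are exactly why CRLite has to stack a cascade of filters on top.

        import hashlib

        class Bloom:
            """Toy Bloom filter: m bits, k hash functions."""
            def __init__(self, m: int, k: int):
                self.m, self.k = m, k
                self.bits = bytearray((m + 7) // 8)

            def _positions(self, item: bytes):
                for i in range(self.k):
                    h = hashlib.sha256(bytes([i]) + item).digest()
                    yield int.from_bytes(h[:8], "big") % self.m

            def add(self, item: bytes):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, item: bytes) -> bool:
                # All k bits set: "possibly revoked" (may be a false positive).
                # Any bit clear: "definitely not revoked" (never wrong).
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(item))

        revoked = Bloom(m=1024, k=3)
        revoked.add(b"serial:deadbeef")
        assert b"serial:deadbeef" in revoked  # revoked serials always hit
        # An unrevoked serial may still test True here; that false positive is
        # what the next layer of a filter cascade exists to cancel out.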

    The advantages of OCSP were that you got a real-time understanding of the status of a certificate and you had no need to download large CRLs, which become stale very quickly. If you set security.OCSP.require in the browser appropriately then you didn't have any risk of the browser failing open, either. I did that in the browser I was daily-driving for years and can count on one hand the number of times I ran into OCSP responder outages.

    The privacy concerns could have been solved through adoption of Must-Staple, and you could then operate the OCSP responders purely for web servers and folks doing research.
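
    (Checking whether a given cert actually asks for Must-Staple is easy, for what it's worth: it's the RFC 7633 TLS Feature extension. A sketch with the Python cryptography package:)

        from cryptography import x509

        def has_must_staple(pem: bytes) -> bool:
            cert = x509.load_pem_x509_certificate(pem)
            try:
                ext = cert.extensions.get_extension_for_class(x509.TLSFeature)
            except x509.ExtensionNotFound:
                return False
            # status_request is the Must-Staple feature from RFC 7633.
            return x509.TLSFeatureType.status_request in ext.value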

    And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?

    • ekr____ 13 hours ago
      The problem with requiring OCSP stapling is that it's not practically enforceable without breakage.

      The underlying dynamic of any change to the Web ecosystem is that it has to be incrementally deployable, in the sense that when element A changes it doesn't experience breakage with the existing ecosystem. At present, approximately no Web servers do OCSP stapling, so any browser which requires it will just not work. In the past, when browsers have wanted to make changes like this, they have had to give years of warning, and then they can only actually make the change once nearly the entire ecosystem has switched, so you have minimal breakage. This is a huge effort and only worth doing when you have a real problem.
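
      (You can probe the "approximately no servers" claim yourself: pyOpenSSL exposes the stapled response in a callback. A rough sketch, with example.com as a stand-in host:)

          import socket
          from OpenSSL import SSL

          def staple_cb(conn, ocsp_bytes, data):
              # Zero bytes means the server stapled no OCSP response at all.
              print("stapled OCSP response:", len(ocsp_bytes), "bytes")
              return True  # returning False would abort the handshake

          ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
          ctx.set_ocsp_client_callback(staple_cb)
          conn = SSL.Connection(ctx, socket.create_connection(("example.com", 443)))
          conn.set_tlsext_host_name(b"example.com")
          conn.request_ocsp()  # ask for the certificate_status extension
          conn.set_connect_state()
          conn.do_handshake()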

      As a reference point, it took something like 7 years to disable SHA-1 in browsers [0], and that was an easier problem because (1) CAs were already transitioning (2) it didn't require any change to the servers, unlike OCSP stapling which requires them to regularly fetch OCSP responses [1] and (3) there was a clear security reason to make the change. By contrast, with Firefox's introduction of CRLite, all the major browsers now have some central revocation system, which works today as opposed to years from now and doesn't require any change to the servers.

      [0] https://security.googleblog.com/2014/09/gradually-sunsetting... [1] As an aside it's not clear that OCSP stapling is better than short-lived certs.

      • lol768 12 hours ago
        I think you are correct. There were similar issues with Firefox rolling out SameSite=Lax by default, and I think those plans are now indefinitely on hold as a result of the breakage it caused. It's a hard problem to solve.

        > As an aside it's not clear that OCSP stapling is better than short-lived certs.

        I agree this should be the end goal, really.

        • catlifeonmars 37 minutes ago
          Oh wow. I thought SameSite=Lax by default was a done deal. Shows how closely I’ve been following things these past few years.
    • woodruffw 12 hours ago
      > Why is that somehow okay, but OCSP not?

      I think the argument isn’t that it’s okay, but that one bad thing doesn’t mean we should do two bad things. Just because my DNS provider can see my domain requests doesn’t mean I also want arbitrary CAs on the Internet to see them.

      • dogma1138 10 hours ago
        I never understood why they didn’t try to push OCSP into DNS.

        You have to trust the DNS server more than the server you are reaching out to anyway, as the DNS server can direct you anywhere and already sees everything you are trying to access.

        • cortesoft 6 hours ago
          TLS is to protect you from malicious actors somewhere along your connection path. DNS can't help you.

          Just imagine you succeeded in inventing a perfectly secure DNS server. Great, we know this IP address we just got back is the correct one for the server.

          Ok, then I go to make a connection to that IP address, but someone on hop 3 of my connection is malicious, and instead of connecting me to the IP, just sends back a response pretending to be from that IP. How would I discover this? TLS would protect me from this, perfectly secure DNS won't.

        • blackcatsec 39 minutes ago
          Sounds like something DANE could be used for.
        • cyphar 8 hours ago
          Because one of the main things TLS is intended to defend against is malicious / MITM'd DNS servers? If DNS was trustworthy then the entirety of TLS PKI would be entirely redundant...
          • crote 19 minutes ago
            Does it, though?

            In practice, TLS certificates are given out to domain owners, and domain ownership is usually proven by being able to set a DNS record. This means compromise of the authoritative DNS server implies compromise of TLS.

            Malicious relaying servers and MitM of the client are already addressed by DNSSEC, so it's not adding anything there either.

            If we got rid of CAs and stored our TLS public keys in DNS instead, we would lose relatively little security. The main drawback I can think of is the loss of certificate issuance logs.
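
            (That's essentially DANE: a TLSA 3 1 1 record publishes a SHA-256 hash of the certificate's SubjectPublicKeyInfo in DNS, signed with DNSSEC. The client-side comparison is tiny; a sketch with the Python cryptography package, names hypothetical:)

                import hashlib
                from cryptography import x509
                from cryptography.hazmat.primitives.serialization import (
                    Encoding, PublicFormat)

                def matches_tlsa_3_1_1(cert_pem: bytes, tlsa_hex: str) -> bool:
                    # Usage 3 (DANE-EE), selector 1 (SPKI), matching type 1
                    # (SHA-256): hash the server's key itself, no CA involved.
                    cert = x509.load_pem_x509_certificate(cert_pem)
                    spki = cert.public_key().public_bytes(
                        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
                    return hashlib.sha256(spki).hexdigest() == tlsa_hex.lower()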

          • catlifeonmars 29 minutes ago
            > If DNS was trustworthy then the entirety of TLS PKI would be entirely redundant

            I’m not sure I understand the logic here. To me TLS PKI and DNS are somewhat orthogonal.

          • jve 1 hour ago
            > then the entirety of TLS PKI would be entirely redundant...

            Don't think I agree with this. TLS is important against MITM scenarios: integrity, privacy. You don't need DNS to be abused for that, just a man in the middle, whether that's an open wifi network, your ISP, or someone tapped into your network some other way.

        • woodruffw 10 hours ago
          How would that work in the current reality of the DNS? The current reality is that it’s unauthenticated and indeterminately forwarded/cached, neither of which screams success for timely, authentic OCSP responses.
          • dogma1138 10 hours ago
            Similarly to how OCSP stapling was supposed to work.
            • woodruffw 9 hours ago
              “Supposed to” being operative, I think!
    • dadrian 9 hours ago
      OCSP stapling, when done correctly with fallback issuance, is just a worse solution than short-lived certificates. OCSP lifetimes are 10 days. I wrote about this some here [1].

      [1]: https://dadrian.io/blog/posts/revocation-aint-no-thang/

    • PunchyHamster 12 hours ago
      It's funny that putting some random records in DNS is enough "ownership" to get a cert issued, but we can't use the same method for publishing revocations.
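
      (For reference, those "random records" are ACME's dns-01 challenge: per RFC 8555 §8.4, the TXT value at _acme-challenge.<domain> is just a hash of the challenge token and your account key thumbprint. A sketch:)

          import base64
          import hashlib

          def dns01_txt_value(token: str, account_thumbprint: str) -> str:
              # RFC 8555 section 8.4: TXT record = base64url(SHA-256(key
              # authorization)), where key authorization is
              # "<token>.<account key thumbprint>".
              key_auth = f"{token}.{account_thumbprint}".encode()
              digest = hashlib.sha256(key_auth).digest()
              return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()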
      • ocdtrekkie 12 hours ago
        The entire existence of CAs is a pointless and mystical venture to ensure centralized control of the Internet that, now that issuance is entirely domain-validated, provides absolutely no security benefit over DNS. If your domain registrar/name server provider is compromised, CAs are already a lost cause.
        • ekr____ 9 hours ago
          This isn't correct, because your domain name server may be insecure even while the one used by the CA is secure. Moreover, CT helps detect misissuance but does not detect incorrect responses by your resolver.
          • ocdtrekkie 4 hours ago
            If someone can log into your domain registrar account or your web host, they can issue themselves a complete valid certificate. It won't matter if the CA resolver is secure, because the attacker can successfully validate domain control.
            • ekr____ 48 minutes ago
              Yes, that's correct. The purpose of the WebPKI and TLS is not to protect against this form of attack but rather to protect against compromise of the network between the client and the server.
        • tptacek 11 hours ago
          The DNS is more centralized than the WebPKI.
          • teddyh 5 hours ago
            DNS isn’t centralized; it’s federated. I mean, just because there’s an ISO and a UN does not mean there is a single world government.

            (Repost: <https://news.ycombinator.com/item?id=38695674>)

          • otabdeveloper4 4 hours ago
            No. You can host your own DNS. It's easy and practically free.
            • peanut-walrus 1 hour ago
              Your TLD registry operator still technically remains fully in control of your records. I am actually surprised more of them have not abused their power so far.
              • crote 6 minutes ago
                Most TLD operators are non-profit foundations set up by nerds in the early days of the internet, well before the lawyers, politicians, and MBAs could get their hands on it.

                If you want to see what happens otherwise, just look at the gTLD landscape. Still, genuine power abuse is relatively rare, because to a large extent they are selling trust. If you start randomly taking down domains, nobody will ever risk registering a domain with you again.

          • ocdtrekkie 10 hours ago
            Three browser companies on the west coast of the US effectively control all decisionmaking for WebPKI. The entire membership of the CA/B is what, a few dozen? Mostly companies which have no reason to exist except serving math equations for rent.

            How many companies now run TLDs? Yeah, .com is centralized, but between ccTLDs, new TLDs, etc., tons. And domain registrars and web hosts which provide DNS services? Thousands. And importantly, hosting companies and DNS providers are trivially easy to change between.

            The idea that Apple or Google can unilaterally decide what the baseline requirements should be needs to be understood as an existential threat to the Internet.

            And again, every single requirement CAs implement is irrelevant if someone can log into your web host. The whole thing is an emperor-has-no-clothes situation.

            • tptacek 10 hours ago
              Incoherent. Browser vendors exert control by dint of controlling the browsers themselves, and are in the picture regardless of the trust system used for TLS. The question is, which is more centralized: the current WebPKI, which you say is completely dependent on the DNS anyway but involves more companies, or the DNS itself, which axiomatically involves fewer companies?

              I always love when people bring the ccTLDs into these discussions, as if Google could leave .COM when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail.

              • teddyh 5 hours ago
                > when .COM's utterly unaccountable ownership manipulates the DNS to intercept Google Mail.

                Why is this more likely to happen than a rogue CA issuing a false certificate?

                Also, Google has chosen to trust .com instead of using one of their eleven TLDs that they own for their own exclusive use, or any of the additional 22 TLDs that they also operate.

                • akerl_ 55 minutes ago
                  When a rogue CA issues a bad cert, they get delisted from all major browsers and are effectively destroyed.

                  That isn’t possible with .com

    • gerdesj 11 hours ago
      "And let's not pretend users aren't already sending all the hostnames they are visiting to their selected DNS server. Why is that somehow okay, but OCSP not?"

      Running your own DNS server is rather easier than messing with OCSP. You do at least have a choice, even if it is bloody complicated.

      SSL certs (and I refuse to call them TLS) will soon have a required maximum lifetime of forty-something days. OCSP and the rest become moot.

      • dogma1138 10 hours ago
        You are still reaching out to the authoritative servers for that domain, so someone other than the destination knows what you are looking for.

        The 47-day lifetime isn’t coming until 2029, and it might get pushed back.

        Also, 47 days is still too long if a certificate is compromised.

        • the8472 2 hours ago
          The authoritative servers for a domain are likely to be operated by the same entity as the domain itself.
        • cyberax 7 hours ago
          You can request 6-day certificates from Let's Encrypt. There's a clear path towards 24-hour certificates, which would be pretty much equivalent to the current status quo with OCSP stapling.
  • sugarpimpdorsey 12 hours ago
    This will not impact Chrome in any meaningful way because - in typical Google fashion - they invented their own bullshit called CRLSets, which doesn't perform OCSP or CRL checks in any way but instead periodically downloads a pruned blacklist from Google, which Chrome then uses to screen certificates.

    Most people don't realize this.

    It's quite insane given that Chrome will by default not check CRLs *at all* for internal, enterprise CAs.

  • EPWN3D 8 hours ago
    Good. OCSP sucks. It's a fail-open design, and the fact that it exists means that a lot of security people have developed an auto-response for certificate lifetime problems, even in domains where OCSP is totally infeasible, like secure boot.

    I can patiently explain why a ROM cannot query a fucking remote service for a certificate's validity, but it's a lot easier to just say "Look OCSP sucks, and Let's Encrypt stopped supporting it", especially to the types of people I argue with about these things.

  • jart 9 hours ago
    Good riddance. That was a power grab if I ever saw one. They should take it out of browsers too.
  • zahlman 12 hours ago
    Does this mean I should turn "security.OCSP.require" back off in Firefox?
  • GauntletWizard 14 hours ago
    OCSP has always represented a terrible design. If clients require it, then it becomes just an override on the notAfter date included in the certificate, one that requires online access to the cert server. If it is not required, then it is useless, because blocking OCSP responses is well within the capabilities of any man-in-the-middle attack, and it makes the responders themselves DDoS targets.

    The alternative to the privacy nightmare is OCSP stapling, which has the first problem once again: it adds complexity to the protocol just to add an override of the notAfter attribute, when the notAfter attribute could be updated just as easily with the original protocol, by reissuing the certificate. It was a Band-Aid on the highly manual process of certificate issuance that once dominated the space.
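
    To see how literally OCSP is "just another notAfter", here's a rough sketch of a plain OCSP query with the Python cryptography package (the responder URL would come from the cert's Authority Information Access extension; error handling omitted):

        from urllib.request import Request, urlopen
        from cryptography import x509
        from cryptography.x509 import ocsp
        from cryptography.hazmat.primitives import hashes, serialization

        def query_ocsp(cert: x509.Certificate, issuer: x509.Certificate, url: str):
            req = (ocsp.OCSPRequestBuilder()
                   .add_certificate(cert, issuer, hashes.SHA1())
                   .build())
            http = Request(url, data=req.public_bytes(serialization.Encoding.DER),
                           headers={"Content-Type": "application/ocsp-request"})
            resp = ocsp.load_der_ocsp_response(urlopen(http).read())
            # this_update/next_update form a signed validity window: the same
            # role notBefore/notAfter already play in the certificate itself.
            return resp.certificate_status, resp.this_update, resp.next_update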

    Good riddance to ocsp, I for one will not miss it.

    • tgsovlerkhgsel 12 hours ago
      Shortening the certificate lifespan to e.g. 24h would have a number of downsides:

      Certificate volume in Certificate Transparency would increase a lot, adding load to the logs and making it even harder to follow CT.

      Issues with domain validation would turn into an outage after 24h rather than at cert expiry, though that could also be a benefit in some cases (invalidating old certs quickly if a domain changes owner or is recovered after a compromise/hijack).

      OCSP is simpler and has fewer dependencies than issuance (no need to do multi-perspective domain validation and the interaction with CT), so keeping it highly available should be easier than keeping issuance highly available.

      With stapling (which would have been required for privacy) often poorly implemented and rarely deployed and browsers not requiring OCSP, this was a sensible decision.

      • tptacek 12 hours ago
        Well, OCSP is dead, so the real argument is over how low certificate lifetimes will go, not whether we might still make a go of OCSP.
      • charcircuit 12 hours ago
        >would increase a lot

        You can delete old logs or come up with a way to download the same thing with less disk space. Even if the current architecture does not scale, we can always change it.

        >even harder to follow CT.

        It should be no harder to follow than before.

        • tgsovlerkhgsel 11 hours ago
          Following CT (without relying on a third party service) right now is a scale problem, and increasing scale by at least another order of magnitude will make it worse.

          I was trying to process CT logs locally. I gave up when I realized that I'd be looking at over a week even if I optimized my software to the point that it could process the data at 1 Gbps (and the logs were providing the data at that rate), and that was a while ago.

          With the current issuance rate, it's barely feasible to locally scan the CT logs with a lot of patience if you have a 1 Gbps line.

          https://letsencrypt.org/2025/08/14/rfc-6962-logs-eol states "The current storage size of a CT log shard is between 7 and 10 terabytes". So that's a day at 1 Gbps for one single log shard of one operator, ignoring overhead.
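
          (The arithmetic: 10 TB is about 8 × 10^13 bits, so at 1 Gbps that's ~80,000 seconds, roughly 22 hours. And the RFC 6962 read API gives you no way to skip any of it; a scan is just sequential get-entries calls, roughly:)

              import json
              from urllib.request import urlopen

              # log_url stands in for any RFC 6962 log shard base URL.
              def tree_size(log_url: str) -> int:
                  with urlopen(f"{log_url}/ct/v1/get-sth") as r:
                      return json.load(r)["tree_size"]

              def get_entries(log_url: str, start: int, end: int):
                  # Logs cap the batch size, so a full scan means
                  # tree_size / batch sequential round trips.
                  with urlopen(f"{log_url}/ct/v1/get-entries?start={start}&end={end}") as r:
                      return json.load(r)["entries"]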

          • integralid 2 hours ago
            > even if I optimized my software to the point that it could process the data at 1 Gbps

            Are you sure you did the math correctly? We're scanning CT at my work, and we do have scale problems, but the bottleneck is database inserts. From your link, it looks like a shard is 10 TB, and that's for a year of data.

            Still insane amount and a scale problem, of course

        • lokar 12 hours ago
          You could extend the format to account for the repetition of otherwise identical short-TTL certs.
    • jeroenhd 13 hours ago
      OCSP stapling was a good solution in the age of certificates that were valid for 10 years (which was the case for basic HTTPS certificates back in 2011 when OCSP stapling was introduced). In the age of 90 day certificates (to be reduced to a maximum of 47 days in a few years), it's not quite as necessary any more, but I don't think OCSP stapling is that problematic a solution.

      Certificates in air-gapped networks are problematic, but that problem can be solved with dedicated CRL-only certificate roots that suffer all of the downsides of CRLs for cases where OCSP stapling isn't available.

      Nobody will miss OCSP now that it's dead, but assuming you used stapling I think it was a decent solution to a difficult problem that plagued the web for more than a decade and a half.

      • tremon 13 hours ago
        But that 47-day lifetime is enforced by the certificate authority, not by the browser, right? So a bad actor can still issue a multi-year certificate for itself, and in the absence of side-channel verification the browser is none the wiser. Or will browsers be instructed to reject long-lived certificates under specific conditions?
        • sugarpimpdorsey 12 hours ago
          Wrong. Enforcement is done by the browser. Yes, a CA's certificate policy may govern the maximum lifetime of the certificates it will issue. But should an error occur and a long-lived cert be issued (even maliciously), the browser will reject it.

          The browser-CA cartels stay relatively in sync.

          You can verify this for yourself by creating and trusting a local CA and trying to issue a 5-year certificate. It won't work. You'll have a valid cert, but it won't be trusted by the browser unless the lifetime is below their arbitrary limit. Yet that certificate would continue to be valid for non-browser purposes.
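
          (The lifetime check itself is trivial to script with the Python cryptography package; the file name is hypothetical, and 398 days is the current Baseline Requirements cap for publicly trusted leaf certs:)

              from cryptography import x509

              pem = open("local-ca-issued.pem", "rb").read()
              cert = x509.load_pem_x509_certificate(pem)
              lifetime = cert.not_valid_after_utc - cert.not_valid_before_utc
              # Publicly trusted leaves are capped at 398 days; a 5-year cert
              # blows well past that, whatever the CA was willing to sign.
              print(lifetime.days, "days")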

          • ameliaquining 11 hours ago
            I just did this with a 20-year certificate and it worked fine in Chrome and Firefox. That said, my understanding is that the browsers exempt custom roots from these kinds of policies, which are only meant to constrain the behavior of publicly trusted CAs.
            • sugarpimpdorsey 8 hours ago
              Safari enforces a hard limit of just over two years.
        • arccy 13 hours ago
          the browsers will verify, and every cert will be checked against transparency logs. you won't be able to hide a long lived cert for very long.
        • avianlyric 13 hours ago
          > So a bad actor can still issue a multi-year certificate for itself, and in the absence of side-channel verification the browser is none the wiser.

          How would a bad actor do that without a certificate authority being involved?

    • layer8 13 hours ago
      > the notAfter attribute could be updated just as easily with the original protocol, by reissuing the certificate.

      That's not a viable solution if the server you want to verify is compromised. The point of CRLs and OCSP is precisely that you ask the authority one level up, without the entity you want to verify being able to interfere.

      In non-TLS uses of X.509 certificates, OCSP is still very much a thing, by the way, as there is no real alternative for longer-lived certificates.

      • arccy 13 hours ago
        actually that's pretty close to where we're going with ever shorter certificate lifetimes...
      • GauntletWizard 11 hours ago
        In this scenario, where OCSP is required and stapled: the CA can simply refuse to reissue if the host is compromised. It does not matter whether it is refusing to issue an OCSP response or a new short-lived cert.