Plenty of other responses with good use cases, but I didn't see NTS (Network Time Security) mentioned.
If you want to use NTS but can't get an IP cert, then you are left requiring DNS before you can get a trusted time. If DNS is down, you can't get the time. A common issue with DNSSEC is having the wrong time, causing validation failures. If you have DNSSEC enforced and have the wrong time, but NTS depends on DNS, you are out of luck with no way to recover. Having an IP in your cert allows trusted time without the DNS requirement, which can then fix your broken DNSSEC enforcement.
Essentially: keep some minimum value for the time. Then do a single HTTPS request, initially ignoring validation of the certificate's dates, but use the Date header to validate it afterwards against the minimum/maximum. This has the advantage that it's still an HTTPS request, so it can't be MITMed, and depending on the implementation it can validate the time quite well (even if the device has lost power, it can have saved a recent timestamp on disk, so with regular use of the device an old certificate won't be valid, keeping the main useful property of certificates: validity periods).
I don't believe it does this, but you could do it without DNS, as 8.8.8.8 etc. already have IP address certificates:
It would need a custom tool though, as curl only has --insecure, not a way to skip just the notBefore / notAfter validation of the cert.
(This is not the only thing to use this technique; OpenBSD's ntpd has a way to constrain time based on HTTP headers: https://man.openbsd.org/ntpd.conf#CONSTRAINTS -- the default ntpd.conf ships with Quad9 configured via IP address.)
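A rough Go sketch of the approach described above (the endpoint and the minimum-time sanity bound are assumptions; 9.9.9.9 is used only because, as noted, the OpenBSD default constraint points at Quad9 by IP): capture the presented chain without date checks, read the Date header, then re-verify the chain as of that reported time.

```go
// Sketch: bootstrap a plausible wall-clock time from an HTTPS Date header,
// then confirm the presented chain was valid *at that time*.
// The endpoint and the minimum-time bound are illustrative assumptions.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"time"
)

func main() {
	var chain []*x509.Certificate

	tr := &http.Transport{
		TLSClientConfig: &tls.Config{
			// Skip built-in verification (it would fail if our clock is wildly
			// wrong); just capture the presented chain for later re-checking.
			InsecureSkipVerify: true,
			VerifyPeerCertificate: func(raw [][]byte, _ [][]*x509.Certificate) error {
				chain = chain[:0]
				for _, der := range raw {
					c, err := x509.ParseCertificate(der)
					if err != nil {
						return err
					}
					chain = append(chain, c)
				}
				return nil
			},
		},
	}
	resp, err := (&http.Client{Transport: tr, Timeout: 10 * time.Second}).Get("https://9.9.9.9/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	remote, err := http.ParseTime(resp.Header.Get("Date"))
	if err != nil {
		panic(err)
	}
	// Sanity bound, e.g. the last timestamp persisted to disk (assumed value here).
	if minTime := time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC); remote.Before(minTime) {
		panic("reported time is implausibly old")
	}

	// Re-verify the chain against the system roots, but at the reported time,
	// and check that the leaf actually carries a matching iPAddress SAN.
	opts := x509.VerifyOptions{CurrentTime: remote, Intermediates: x509.NewCertPool()}
	for _, ic := range chain[1:] {
		opts.Intermediates.AddCert(ic)
	}
	if _, err := chain[0].Verify(opts); err != nil {
		panic(err)
	}
	if err := chain[0].VerifyHostname("9.9.9.9"); err != nil {
		panic(err)
	}
	fmt.Println("bootstrapped time:", remote.UTC())
}
```

Whether this is robust enough to seed NTS or DNSSEC validation depends on how tight your minimum/maximum bounds are; the sketch only shows the mechanism.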
Oh, this is a good point! Looking at my DNSSEC domain (hosted by Cloudflare) on https://dnssec-debugger.verisignlabs.com, the Inception Time and Expiration Time seem to be valid for... 3.5 days? This isn't something I look at much, but I assume that is up to the implementation. The new short-lived cert is valid for 6 days. So, from a very rough look, I expect an X.509 certificate is going to be less time-sensitive than DNSSEC, but only by a few days. Also, very likely to depend on implementation details. This is a great point.
This seems possible to avoid as an issue without needing IP certs, by having the configuration supply both an IP and a hostname, with the hostname used for the TLS validation.
Yes, that is absolutely possible, but it doesn't mean that will be the default. I commented recently [0] about Ubuntu's decision to have only NTS enabled (via domain) by default on 25.10. That raises the question of how system time can be set if the initial time is outside of the cert's validity window. I didn't look, but perhaps Chrony would still use the local network's published NTP servers.
Sometimes you want to have valid certs while your DNS is undergoing a major redesign. For instance, to keep your dashboards available, or to be triple sure no old automation will fail due to DNS issues.
In other cases DNS is just not needed at all. You might prefer simplicity and independence from DNS propagation, so you can have your, say, Cockpit exposed instantly on a test env.
Identify a service directly by its crypto key. When you configure something else to connect to it, treat the IP address as a hint, not the primary identifier for what it's talking to. Standard idiom.
... and before you tell me that that's infeasible because you'd have to modify software, go do a survey of all the code out there, and see how much of it supports IP address certificates. If you're moving around the parts of some big complex system, it's pretty much guaranteed that many of those parts are going to choke if you just blindly go and stick IP addresses in https:// URLs.
And if you're fixing the software anyway, then it's not sane to "fix" it to attach identity to something you're going to want to change all the time, like an IP address. Especially if they're global addresses (which are the only ones Let's Encrypt or any other public CA is ever going to certify) in the IPv4 space (which is the only one any "enterprise" ever seems willing to use).
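For reference, a rough Go sketch of the "identify the service by its key" idiom mentioned above (the pinned hash and the address are made-up placeholders, not anyone's actual deployment): dial by IP, but accept the peer only if its public key matches one you already know, rather than trusting whatever a CA says about the address.

```go
// Sketch: identify the service by its key; treat the IP as a routing hint.
// pinnedSPKI and the address below are hypothetical placeholders.
package main

import (
	"crypto/sha256"
	"crypto/tls"
	"crypto/x509"
	"encoding/hex"
	"fmt"
)

// SHA-256 of the expected SubjectPublicKeyInfo (placeholder value).
const pinnedSPKI = "0000000000000000000000000000000000000000000000000000000000000000"

func main() {
	conf := &tls.Config{
		InsecureSkipVerify: true, // we do our own identity check below
		VerifyPeerCertificate: func(raw [][]byte, _ [][]*x509.Certificate) error {
			leaf, err := x509.ParseCertificate(raw[0])
			if err != nil {
				return err
			}
			sum := sha256.Sum256(leaf.RawSubjectPublicKeyInfo)
			if hex.EncodeToString(sum[:]) != pinnedSPKI {
				return fmt.Errorf("public key does not match pinned key")
			}
			return nil
		},
	}
	// The IP address is only a hint for finding the service; identity comes from the pin.
	conn, err := tls.Dial("tcp", "192.0.2.10:443", conf)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to the pinned service")
}
```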
The BSD networking stack treats an IP address as a valid hostname for hostname resolution. As such, every phone, tablet, and computer able to do TLS by hostname can do it by IP. Try it out! Self-sign an IP certificate and try it on your local net. If you put it in the trust store, it'll validate just fine. The only barrier to adoption was CAs refusing to issue IP certificates at large.
Not quite. DNS hostnames and IP addresses are encoded differently in X.509 certs: one is the dNSName option of the GeneralName choice type in the subjectAltName extension[1], the other is the iPAddress option. (And before you ask, tagging a stringified IP address quad as a dNSName is misissuance per the CA/Browser Forum Baseline Requirements[2] and liable to get your CA kicked out of certificate stores. Ambiguous encodings are dangerous.) So some explicit support from the TLS library is indeed required. But I'm indeed not aware of many apps having problems with IP address certs.
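As an illustration of that iPAddress SAN, here is a minimal Go sketch that self-signs a certificate for an example address (192.0.2.10, a documentation address); TLS stacks with IP SAN support match a literal-IP connection against the IPAddresses entry, not the CN.

```go
// Sketch: self-sign a certificate carrying an iPAddress SAN (example address).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "192.0.2.10"}, // mostly ignored; the SAN below is what matters
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(90 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.0.2.10")}, // encoded as an iPAddress GeneralName, not a dNSName
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```

Import the result into a local trust store and, as the comment above says, a connection to https://192.0.2.10/ will validate, assuming the client supports IP SANs.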
Um, the BSD networking stack I'm familiar with doesn't include TLS or X.509 validation at all. The question isn't what you get from gethostbyname. It's what you get when you hand that to your X.509 validator.
> go do a survey of all the code out there, and see how much of it supports IP address certificates.
I've been doing that for years on-prem (~60% old "enterprise/legacy", ~40% modern stuff) and never seen anything that doesn't support it. YMMV, but if all I work with supports it, I won't complain in vain.
> those parts are going to choke if you just blindly go and stick IP addresses in https:// URLs.
I did many times, seems that legacy heavens were always kind to me in this regard.
> something you're going to want to change all the time, like an IP address
That's a personal assumption as well. If your architectures change IPs all the time, OK. The ones I worked with didn't. They always had plenty of components with IPs that didn't change in a decade or two. Even my two previous local ISPs gave me a "dynamic public IP" and kept it for many years. For some companies, changing the IP of their main firewalls/load balancers or VPN servers is unthinkable.
Even on my last project on a public cloud, the first thing I did was to make sure public IPs wouldn't be dynamic (would survive recreation of services) so I wouldn't have to deal with the consequences of my corporate client's endpoints and proxies flushing DNS caches randomly. (Don't ask me why, but even huge companies still use proxies on a large scale. Good luck figuring out when such a proxy invalidates your DNS record.)
> in the IPv4 space
IPv6 is here. Your printer and light bulb will want a cert as well.
It might be interesting for "opportunistic" DNS-over-TLS towards authoritative DNS servers, which might listen on the DoT port with a cert containing a SAN that matches the server's public IP. (You can do this now with authoritative server hostnames, but there can be many varied names for one public IP, and this ties things together more clearly and directly.)
It might also be useful for hiding the SNI in HTTPS requests. With the current status of ESNI/ECH you need some kind of proxy domain, but for small servers that only host a few sites, every domain may be identifiable (as opposed to, say, a generic Cloudflare certificate or a generic Azure certificate).
One use-case is connecting to a DoT (DNS-over-TLS) server directly rather than using a hostname. If you make a TLS connection to an IP address via OpenSSL, it will verify the IP SAN and fail if it's not there.
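For instance, Go's TLS stack performs the same check; a sketch of a direct DoT connection by address (1.1.1.1 on port 853 is used as the example public resolver):

```go
// Sketch: connect to a public DNS-over-TLS resolver by its IP address.
// The handshake succeeds only because the served certificate carries a
// matching iPAddress SAN; without one, verification would fail.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "1.1.1.1:853", &tls.Config{})
	if err != nil {
		panic(err) // e.g. the certificate is not valid for this IP
	}
	defer conn.Close()
	fmt.Println("verified TLS connection to 1.1.1.1 (IP SAN matched)")
}
```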
What about internal IPv4 addresses? Can we have browsers ignore 192.168.x.x, 172.16.x.x and 10.x.x.x if we can't get certs for those or can we get a public wildcard for internal networks?
I don’t believe that makes sense in the context of certificates generated by a public CA. Unlike domain names, there’s not one owner of 10.10.10.10, there are millions of “owners”…
But what problem is it that you want to solve?
For local development, one can use a tool such as mkcert. For shared internal resources (e.g. within a company), it’s probably easier to use a TLS cert tied to a domain instead of using naked IP addresses.
Every time I open a browser I need to click two buttons to get past the certificate error. Sure, I could configure a real domain, do split DNS, and get a certificate, but these cameras require manually uploading a certificate. I would need to do this every three months for every camera, and eventually even more frequently.
If you can somehow prove that your device can manage to have a private key that will never be extractable then you should already be able to do that with any regular CA.
The problem with certificates for internal addresses is that every single time someone tries to pull it off, it doesn't take long for someone to buy one of those devices, extract the private key, and then post about it online, requiring the key to be revoked immediately.
There is a solution to that, of course. If you trust your device, import its certificate manually so you can visit the page without errors, or if you have a lot of devices, set up a certificate authority to distribute these certificates. There are open source ACME servers that'll let you issue certificates the exact same way you would with Let's Encrypt, except now you can keep everything local.
Point your DNS at an existing public server, get a cert, copy it to the internal server, then point your DNS at the 192.168.x.x address and copy the cert and key over.
The only problem is that some routers blackhole DNS responses pointing to local addresses, so you need to test it.
With automated certs having shorter and shorter expiration, this becomes a tedious waste of time just so one can access one's cameras without having to click past the browser warnings.
Nice, another exploit for TLS. The previous ones were all about generating valid certs for a domain you don't own. This one will be for generating a cert for an IP you don't own. The blackhats will be hooting and hollering on telegram :)
No, but maybe yes:
It would be impossible, and undesirable, to issue certificates for local addresses. There's no way to verify local addresses because, inherently, they're local and not globally routable.
However, if a router manufacturer were so inclined, they _could_ have the device request a certificate for its public IPv4 address, given that it's not behind CG-NAT. IPv6 should be relatively easy since (unless you're at a cursed ISP) v6 addresses are generally globally routable.
Even behind CG-NAT, you could probably get away with DNS here. If you provide your customers with customeraccount.manufacturerrouters.com, you can then use DNS validation to get a valid certificate for *.customeraccount.manufacturerrouters.com. Put a record in there that points to the local router IP (e.g. settings.customeraccount.manufacturerrouters.com) and you can get HTTPS logins on your local network, even with local IP addresses if the CA/Browser Forum still allows that.
It's not exactly user friendly, but it'll work.
Personally, I have a private CA that I use. My home router has a domain name pointing towards it and has been loaded up with my private certificate. I get the certificate error once a year when the thing expires but in the mean time I can access my router securely.
No, and it shouldn't. You can just run a proxy with a real domain and a real cert, and then use DNS rewrites to point that domain to a local host.
For example, you can use nginx manager if you want a UI, and AdGuard for DNS. Set your router to use AdGuard as the exclusive DNS. Add a rewrite rule for your domain to point to the proxy. Register the domain and get a real cert. Problem solved.
No, they won't issue a certificate for a private IP address because you don't have exclusive control over it (i.e., the same IP address would point to a different machine on someone else's network).
No, on the contrary. You can't get a valid certificate for non-global IP, but you can already get a certificate for a domain name and point it to 192.168.0.1.
Cloudflare DNS (probably others as well) allows you to enter private IPs for subdomains, so you don't have to run your own DNS. There's no AXFR enabled, so no issues with privacy unless you have someone really determined to dictionary-attack your subdomains.
Then I realised that when my internet was down, 192-168-1-1.foo.com wouldn't resolve. And when my internet is down is exactly when I want to access my router's admin page.
I decided simply using unencrypted HTTP is a much better choice.
You don't even need to; mDNS has been enabled by default on most devices for ages now. You'll have to look up what name your manufacturer chose (if you use Windows, you can usually hit the network explorer tab and it'll be right in there; I don't know about other OSes). It'll even work if IPv4 is broken (if you ran out of DHCP leases or whatever) because it almost always natively runs on IPv6 too.
I could start running my own DNS server, and start manually curating all the important entries in it, sure.
Or I could just use HTTP, or a self-signed certificate. If an attacker intercepts traffic on twenty feet of ethernet cable in my home's walls, I've probably got bigger problems than protecting my router admin password.
Have you found much trouble with clients that can't cope without a CN? Is this one of those situations where anything that can't cope is also hopeless for other reasons (e.g. can't speak TLS 1.2, doesn't understand IPv6, that sort of thing), so you can tell people you're not their biggest problem?
I guess a bunch of "roll your own X.509 validation" logic will have that bug, but to exploit it you need a misbehaving CA to issue you such a cert (i.e., low likelihood).
Having DNS available wouldn't be any more "proof". The person applying gets to choose which form of proof will be provided, so adding more options can only ever make it easier to "prove" things.
I don't happen to know if it's actively in use, or whether any of the technical implementation details were formally standardized, but one obvious thing goes like this:
1. Write a CAA (Certificate Authority Authorization) DNS record for your names
2. In the CAA record, say that you forbid anybody except your chosen CA to issue. Competent CAs will obey this instruction (obeying it is mandated by the root trust stores, there are bugs of course but on the whole compliance is very good and is separate from their implementation of specific validation methods)
3. Further, indicate, either in this record or by agreement with your chosen CA, that they must use DNS proof of control. This might be something very nerdy like indicating a specific OID for the method in the CAA record or it might be a Memorandum of Understanding somebody signed and then they went out for a nice lunch.
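For illustration, a restricting record might look like `example.com. CAA 0 issue "letsencrypt.org"`, optionally with the RFC 8657 validationmethods parameter pinning the challenge type (roughly the "nerdy" option mentioned above). A minimal lookup sketch, using the third-party github.com/miekg/dns package (domain and resolver are just examples):

```go
// Sketch: look up a domain's CAA policy (which CAs may issue, and optionally how).
// Uses the third-party github.com/miekg/dns package; domain and resolver are examples.
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("example.com"), dns.TypeCAA)
	resp, err := dns.Exchange(m, "9.9.9.9:53")
	if err != nil {
		panic(err)
	}
	for _, rr := range resp.Answer {
		if caa, ok := rr.(*dns.CAA); ok {
			// e.g. tag "issue", value `letsencrypt.org; validationmethods=dns-01`
			fmt.Printf("flag=%d tag=%s value=%q\n", caa.Flag, caa.Tag, caa.Value)
		}
	}
}
```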
The announcement is https://letsencrypt.org/2025/01/16/6-day-and-ip-certs/. I don't think it's more complicated than: there exist services that for one reason or another don't have a domain name and are instead accessible by a public static IP address, and they need TLS certificates for security, and other CAs offer this, so Let's Encrypt should too. Is there any specific reason why they shouldn't?
Hmm. Absolutely no explanation of why there's a need. Given only that announcement, I'd have to assume that the reason is "because we can".
So the first reason not to do it is that you never want to change software without a good reason. And none of the use cases anybody's named here so far hold water. They're all either skill issues already well addressed by existing systems, or fundamental misunderstandings that don't actually work.
Changing basic assumptions about naming is an extra bad idea with oak leaf clusters, because it pretty much always opens up security holes. I can't point to the specific software where somebody's made a load-bearing security assumption about IP address certificates not being available (more likely a pair of assumptions "Users will know about this" and "This can't happen/I forgot about this")... but I'll find out about it when it breaks.
Furthermore, if IP certificates get into wide use (and Let's Encrypt is definitely big enough to drive that), then basically every single validator has to have a code path for IP SANs. Saying "you don't have to use it" is just as much nonsense as saying "you don't have to use IP". Every X.509 library ends up with a code path for IP SANs, and it effectively can't even be profiled out. Every library is that much bigger and that much more complicated and needs that much more maintenance and testing. It's a big externalized cost. It would be better to change the RFCs to deprecate IP SANs; they never should have been standardized to begin with.
It also encourages a whole bunch of bad practices that make networks brittle and unmaintainable. You should almost never see an IP address outside of a DNS zone file (or some other name resolution protocol). You can argue that people shouldn't do stupid things like hardwiring IP addresses even if they're given the tools... but that's no consolation to the third parties downstream of those stupid decisions.
... and it doesn't even work for all IP addresses, because IP addresses aren't global names. So don't forget to special-case the locally administered space in every single piece of code that touches an X.509 certificate.
TLS certificates for IP addresses are already a thing that exists. You can, for instance, go to https://1.1.1.1 in your browser right now (it used to actually serve the HTML from there but now it's a redirect). If that doesn't work in a given TLS client, this will be treated as a bug in that client, and rightly so. The genie is out of the bottle; nobody is going to remove support for things that work today just because it'd be slightly cleaner. So TLS clients are already paying the maintainability costs of supporting IP address certificates; this isn't a new change.
I'm not sure why private IP addresses would need to be treated differently other than by the software that issues certs for publicly trusted CAs (which is highly specialized and can handle the few extra lines of code, it's not a big cost for the whole ecosystem). Private CAs can and do issue certs for private IP addresses.
> TLS certificates for IP addresses are already a thing that exists.
Still not wide use. It's when it gets into wide use that you end up having to include it in everything.
For now, it's a parlor trick, and it's a parlor trick that shouldn't work.
> nobody is going to remove support for things that work today just because it'd be slightly cleaner.
They work, but shouldn't, and aren't actually used except by crazy people.
> If that doesn't work in a given TLS client, this will be treated as a bug in that client, and rightly so.
I've tried to use TLS on microcontrollers that barely had the memory to parse X.509 at all. Including stuff just because you can doesn't make that better.
... and I'm not going to go check the relevant RFCs, but I very much doubt that IP SANs are listed as a MUST. If I'm wrong, well, that's still a bug in the RFCs.
> Also, how would DoH or DoT work without this?
Hardwired keys for your trusted resolvers. Given that the whole CA infrastructure long ago gave up on doing any really robust verification of who was asking for a cert, making your DNS dependent on X.509 is a bad idea anyway. But if you really want to do it even though it's a bad idea, you can also bootstrap via the local DNS resolver and then connect to your DoH/DoT server using a domain name.
DoH, of course, is a horrible idea in itself, but that's another can of worms.
I could see the opposite argument: with domain names, who knows; someone could steal yours or hack the registrar, the registrar could be evil, DNS servers could be untrusted and/or evil or MITMed... By connecting to an IP you're engineering out entire classes of weaknesses in the scheme.
I'm sorry, but how is "Require validation of DNSSEC (when present) for CAA and DCV Lookups" related to issuing X.509 certs that include IP address SANs? I don't see any connection, and I didn't spot anything about it on a quick skim of the comments.
Anything from people who are afraid of increasingly onerous DNS requirements, to breakage because they can't fix their parent domain's DNSSEC misconfiguration. It seems like an interesting timing coincidence to me, so I wonder if there's some (ir)rational explanation. (Implementing a new SAN that must inherently have the gap you are finally addressing is not a bit funny to you?)
All that regex does is split an IPv6 address into groups of 4 digits, join them with ":", and collapse any sequence of ":0000:" to "::". I don't see anything problematic with it.
Which is an error. Any IP like 2001:0000:0000::1 is going to be incorrect. It willingly produces errors. Whoever wrote this didn't even spend a few seconds thinking about the structure of IPv6 addresses.
> I don't see anything problematic with it.
Other than it being completely wrong and requiring a regex to be compiled for an amount of work that's certainly less than the compilation itself.
It only operates on a 32-digit IPv6 address, so it won't already be abbreviated. My phrasing was inexact. It replaces only the first sequence of any number of ":0000:" groups with "::".
Which has one too many parts and doesn't parse as an IPv6 address. But as mentioned, this is just presentation code. I don't want to waste time if this isn't actually a bug, but maybe someone on the LetsEncrypt trial could actually make a cert to see if IP addresses formatted like that are a problem in reality...
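For what it's worth, a sketch of how to sidestep that whole class of bug: insert the colons mechanically, then let the standard library emit the canonical RFC 5952 form instead of collapsing zero groups by hand (the helper name is made up for illustration).

```go
// Sketch: format a 32-hex-digit IPv6 value (as it might appear in a SAN dump)
// canonically, letting net/netip handle zero-group compression correctly.
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

func formatIPv6(hex32 string) (string, error) {
	if len(hex32) != 32 {
		return "", fmt.Errorf("expected 32 hex digits, got %d", len(hex32))
	}
	var groups []string
	for i := 0; i < 32; i += 4 {
		groups = append(groups, hex32[i:i+4])
	}
	addr, err := netip.ParseAddr(strings.Join(groups, ":"))
	if err != nil {
		return "", err
	}
	return addr.String(), nil // canonical RFC 5952 form
}

func main() {
	// 2001:0000:...:0001, the case discussed above; prints "2001::1".
	s, err := formatIPv6("20010000000000000000000000000001")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```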
> Other than it being completely wrong and requiring a regex to be compiled for an amount of work that's certainly less than the compilation itself.
It's not. And the sequence you describe is not even parsed, because colons are not part of the IPv6 extension of the SAN. Please educate yourself before spilling such drivel.
Unless you see a glaring issue I don't: I think you are getting the causality wrong there. You "Yikes" because of your discomfort and lack of practice with regexes.
> You "Yikes" because of your discomfort and lack of practice with regexes.
That's exceptionally presumptuous, to the point of being snotty.
> I think you are getting the causality wrong there.
Where did I imply causality? This was simply an occasion to look at the code. This is bad code. I would not pass this. What's your _justification_ for using a regex here?
> > The certificate code referenced here shows why
So what's the implication here, then?
> This is bad code.
Without justifying further I think we're on equal footing on the snottiness here (:
What's bad? Why not use a regex here? It's not like they're using it to parse user-controlled HTML. Simple string transformations like this are a great use case where manual character iteration easily becomes inefficient and messy. And you may introduce bugs in the process (unicode length bugs are common).
Do you also avoid grep and sed without the -F flag in shell?
Usually you can just import the self-signed leaf cert as a CA in your OS trust store and the problem goes away (assuming it has an IP SAN). Slightly tedious, but you can issue the certs with long validity.
Let me rephrase that: How is the CA supposed to know they didn't handshake with an attacker? All they have is the IP, there's no identity to check like with DNS.
I expect SAN in this case means Subject Alternative Name, not Storage Area Network.
Sigh... I wish people would use their words before trotting out possibly-ambiguous (or obscure) acronyms. It would help avoid confusion among readers who don't live and breathe the topic on the writer's mind.
There’s only one, and not really obscure, interpretation of this acronym in a technical forum post announcement from a TLS certificate authority, the context was sufficient.
I think certificates for IP addresses can be useful.
However, if Let's Encrypt were to support S/MIME certificates, it would have a far greater impact. For a few years now, we have had an almost comical situation with email encryption: finally, most of the important mail user agents (aka mail clients) support S/MIME encryption out of the box. But you need a certificate from a CA to have a smooth user experience, just like with the web. However, all CAs that offer free, trustworthy¹ S/MIME certificates with a duration of a year or more² have disappeared. The result: no private entities are using email encryption.
(PGP remains unused outside of geek circles because it is too awkward to use.)
Let's encrypt our emails!
¹ A certificate isn't trustworthy if the CA generates the secret key for you.
² With S/MIME you need to keep your old certificates around to decrypt old mails, so having a new one frequently is not practical
> ² With S/MIME you need to keep your old certificates around to decrypt old mails, so having a new one frequently is not practical
You don't need to change your decryption key - the new certificate can use the same decryption keys as the old one (certbot even has a flag: --reuse-key). Whether this is a good idea or not is a separate question.
I think the biggest benefit would be ACME-like automatic certificate issuance. Currently getting a new certificate is just too much friction.
The other thing I would hope for is wildcard certificates. I stopped using S/MIME because I usually create a new email address (based on the same domain) for each vendor that I deal with. I would find it useful to be able to get a single certificate covering all email with that domain. Obviously that does mean that anyone else using an email address from that domain would have to share the certificate, but for private use that can be acceptable - I don't worry about my wife reading my currently unencrypted email!
PGP has WKD[1] (Web Key Directory) now if you want to use the TLS web of trust for email. TLS certificates are much easier to get than S/MIME certificates. Having a third party do the identity management can be good, but many people do not work for a company where that makes sense. ... and if you do work for such a company, it is better to do the identity management within the company.
I am currently appalled at how little of a wakeup call Signalgate 1.0 was and is[2]. Signalgate was, yet again, a failure of identity management in end to end encryption. You know, the exact thing that S/MIME certificates (or WKD) could help solve in a government environment if the resulting system was actually usable.
[1] https://datatracker.ietf.org/doc/draft-koch-openpgp-webkey-s...
[2] https://articles.59.ca/doku.php?id=em:sg (my article)
Hell no. Email encryption should be left to rot.
https://www.latacora.com/blog/2020/02/19/stop-using-encrypte...
Oof, I don’t like this article much at all.
The first two major points they pose against email can be summed up as ‘people don’t use security unless it is by default, and because it wasn’t built-in to email we shouldn’t try.’ To which I respond with: perfect is the enemy of progress. Clearly, email is sticky (many other things have tried to replace it), and it has grown to do more than just send plaintext messages. People use it for document transfer, agreements, as a way to send commands over the internet, etc. Email encryption and authentication is simply an attempt to add some cryptographic tooling to a tool we already use for so many things. Thus, these points feel vacuous to me.
The last two points are less to do with email and more to do with encryption in general, and it is probably the most defeatist implication of the fact that there is no ‘permanent encryption.’ It is an argument against encryption as a whole, and paints the picture for me that the author would find other reasons to dislike email encryption because they already dislike encryption. These last two points are an extension of wanting an ideal solution and refusing to settle for anything less.
A beautiful vision, but not practically viable. The average user isn't ready to handle private keys -- many can barely be trusted with their email passwords.
This means you either need centrally issued certificates for each domain, or face situations where legitimate users fail to obtain certificates, while cyber criminals send S/MIME-signed emails on the users' behalf.
Once a few generations of users have been trained to use passkeys then we can consider letting users handle key pairs.
Maybe with a local passkey on the device?
I know little about S/MIME encryption. But why do we need to decrypt old emails with the same protocol? In my head, I imagine certs would be for transport, and your server or host should handle encryption at rest, no? So short-lived transport certs, and whatever storage encryption you want. What am I missing here?
S/MIME is about the mail (content) itself, not the transport. For the transport part there are things like (START)TLS and MTA-STS. With S/MIME you include your certificate in the mail and can either sign the mail (with your private key, so others can verify it using the public key from your certificate) or encrypt the mail (with the receiver's public key, so only the receiver can decrypt it using their private key). Certificate trust is determined normally via the CA chain and trusted CAs.
How would CA verification work in this case?
This is totally off-topic. With all respect, if you want to discuss email encryption I would suggest that you write a blog post or start a separate thread.
So all the addressing bodies (e.g., ISPs and cloud providers) are on board then, right? They sometimes cycle through IPs with great velocity. Faster than 6 days, at least.
Lots of sport here, unless perhaps they cool off IPs before reallocating, or perhaps query and revoke any certs before reusing the IP?
If the addressing bodies are not on board, then it's the user's responsibility to validate the Host header and reject unwanted IP-address-based connections until any legacy certs are gone, or to revoke any legacy certs. Or just wait to use your shiny new IP?
I wonder how many IP certs you could get for how much money with the different cloud providers.
>So all the addressing bodies (e.g., ISPs and cloud providers) are on board then, right? They sometimes cycle through IPs with great velocity. Faster than 6 days, at least.
>Lots of sport here, unless perhaps they cool off IPs before reallocating, or perhaps query and revoke any certs before reusing the IP?
I don't see how this is any different from custom/vanity domains you can get from cloud providers. For instance, on Azure you can assign a DNS name to your VMs in the form of myapp.westus.cloudapp.azure.com, and CAs will happily issue certificates for it[1]. There's no cool-off for those domains either, so theoretically someone else can snatch the domain from you if your VM gets decommissioned.
[1] https://crt.sh/?identity=westus.cloudapp.azure.com&exclude=e...
There are in fact weird cool-off times for these cloud resources. I'm less familiar with AWS, but I know that in Azure, once you delete/release one of these subdomains, it remains tied to your organization/tenant for 60 or 90 days.
You can reclaim it during that time, but any other tenant/organization will get an error that the name is in use. You can ping support to help you there if you show them you own both organizations. I was doing a migration of some resources across organizations and ran into that issue.
This is why Azure now uses a unique hash appended to the hostname by default (this can be changed if desired). You can't attack a dangling CNAME subdomain record if it points to a hash-appended hostname, and Azure allows you to control the uniqueness globally, per tenant, or per region, so you can have common names in that tenant/region if you wish.
The difference is the directionality.
The attack here isn't "snatch a domain from somebody who previously got a cert for it", it's "get an IP, get a cert issued, then release it and let somebody interesting pick it up".
In practice, this seems relatively low risk. You'd need to get a certificate for the IP, release it, have somebody else pick it up, have that person actually be doing something that is public-facing and also worth MITMing, then hijack either BGP or DNS to MITM traffic towards that IP. And you have ~6 days to do it. Plus if you can hijack your target's DNS or IPs... you can just skip the shenanigans and get a valid fresh cert for that target.
AWS sets CAA records for their domains and thus you can’t issue certs for them
> So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right?
My guess is that it's going to be approached the other way around. After all, it's not the ISPs' job to issue IP addresses in conformance with TLS; it's a TLS provider's job to "validate identity" — i.e. to issue TLS certificates in conformance with how the rest of the ecosystem attaches identity to varyingly-ephemeral resources (IPs, FQDNs, etc.)
The post doesn't say how they're going to approach this one way or the other, but my intuition is that LetsEncrypt is going to have/maintain some gradually-growing whitelist for long-lived IP-issuer ASNs — and then will only issue certs for IPs that exist under those ASNs; and invalidate IP certs if their IP is ever sold to another ASN not on the list. This list would likely be a public database akin to Mozilla's Public Suffix List, that LetsEncrypt would expect to share (and possibly co-maintain) with other ACME issuers that want to do IP issuance.
I've not seen any indication at all in LetsEncrypt's announcements that this is the case. Can you say more about how you're deriving that intuition?
You can renew your HTTPS certificate for 90 days the day before your domain expires. Your CA can't see if the credit card attached to your auto renewal has hit its limit or not.
I don't think the people using IP certificates will be the same people that abandon their IP address after a week. The most useful thing I can think of is either some very weird legacy software, or Encrypted Client Hello/Encrypted SNI support without needing a shared IP like with Cloudflare. The former won't drop IPs on a whim, the latter wouldn't succeed in setting up a connection to the real domain.
Even 1 day is enough for my use case, where I just want some tests to be done on a HTTPS url. All in all a great move.
Do you need a public IP for these tests? Otherwise that's already quite easy with mkcert and a lot of dev tooling.
> I wonder how many IP certs you could get for how much money with the different cloud providers.
I wonder if they'll offer wildcard certs at some point.
If you're talking about Let's Encrypt, they started offering wildcards in 2018.
GP may have been talking about "wildcard IP certificates"
Or, even if they weren't, now I'm curious about that possibility too.
Doesn't seem like that's possible.
RFC5280 (https://datatracker.ietf.org/doc/html/rfc5280) defines these fields. Section 4.2.1.6 defines SANs, and specifies:
> When the subjectAltName extension contains an iPAddress, the address MUST be stored in the octet string in "network byte order", as specified in [RFC791]. The least significant bit (LSB) of each octet is the LSB of the corresponding byte in the network address. For IP version 4, as specified in [RFC791], the octet string MUST contain exactly four octets. For IP version 6, as specified in [RFC2460], the octet string MUST contain exactly sixteen octets.
That's 32 bits for IPv4 and 128 bits for IPv6. You can't store anything other than a single unicast IP address of either format with that much space.
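A quick way to see that constraint against a live certificate (1.1.1.1 is used purely as an example of a publicly served IP cert, as mentioned elsewhere in the thread): each parsed iPAddress SAN is a single literal address of 4 or 16 octets, with no room for anything wildcard- or CIDR-like.

```go
// Sketch: fetch the certificate served at 1.1.1.1:443 and show that its
// iPAddress SANs are plain 4- or 16-octet addresses, one per entry.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "1.1.1.1:443", &tls.Config{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	leaf := conn.ConnectionState().PeerCertificates[0]
	for _, ip := range leaf.IPAddresses {
		n := 16
		if ip.To4() != nil {
			n = 4 // IPv4 entries are exactly four octets in the SAN
		}
		fmt.Printf("%-40s (%d octets in the SAN)\n", ip, n)
	}
	fmt.Println("DNS SANs:", leaf.DNSNames)
}
```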
IIRC, AWS Application Load Balancers (HTTP/L7) will cycle through IPs as fast as every 30 minutes (based on me tracking them ~5 years ago). I think they set a 10-minute TTL on their DNS records.
Even less: a 60-second TTL.
> So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right? They sometimes cycle through IP's with great velocity. Faster than 6 days at least.
... or put multiple customers on the same IP address at the same time. But presumably they wouldn't be willing to complete the dance necessary to actually get certs for addresses they were using that way.
Just in case, though, it's probably a good idea to audit basically all software everywhere to make sure that it will not use IP-based SANs to validate connections, even if somebody does, say, embed a raw IP address in a URL.
This stuff is just insanity.
There was a prior concern in the history of Let's Encrypt about hosting providers that have multiple customers on the same server. In fact, phenomena related to that led to the deprecation of one challenge method and the modification of another one, because it's important that one customer not be able to pass CA challenges on behalf of another customer just because the two are hosted on the same server.
But there was no conclusion that customers on the same server can't get certificates just because they're on the same server, or that whoever legitimately controls the default server for an IP address can't get them.
This would be a problem if clients would somehow treat https://example.com/ and https://96.7.128.175/ as the same identifier or security context just because example.com resolves to 96.7.128.175, but I'm not aware of clients that ever do so. Are you?
If clients don't confuse these things in some automated security sense, I don't see how IP address certs are worse than (or different from) certs for different customers who are hosted on the same IP address.
The way in which they are worse is that IP addresses are often unstable and shuffled around, since generally the end user of the address is not its owner. It would be similar to getting a cert for myapp.github.io: technically perfectly valid, but GitHub can at any moment steal the name from you, since they are the owner, not you.
That's a significant distinction. In partial mitigation of this, Let's Encrypt will issue IP address certificates valid for only 6 days (not 90 days or 365 days or any other period).
> They'll only be available under the shortlived profile (which has a 6-day validity period)
So in the case of multiple users behind a NAT, the cert for 96.7.128.175 would identify whichever party has control over the 443 port on that address?
Yes (if the TLS-ALPN-01 challenge method was used). The CA/B Forum Baseline Requirements currently permit proof of control using any of four specified ports
> Authorized Ports: One of the following ports: 80 (http), 443 (https), 25 (smtp), 22 (ssh).
Let's Encrypt uses only port 80 and 443.
This is the same for certificates for domain names and for IP addresses.
Also, HTTPS certs are for data in transit, so I see no reason why one certificate can't be used for all the websites on the same server.
Perhaps I didn't make myself clear. I don't think that IP certs will end up getting issued for shared servers, and definitely not in a way where tenants can impersonate one another. Not often enough to worry about, anyway.
The point was that it affects the utility of the idea.
... and don't get me started on those "challenge methods". Shudder. You'll have me ranting about how all of X.509 really just needs to be taken out and shot. See, I'm already doing it. Time for my medication...
It's bizarre that the CA/Browser Forum, with their draconian policy pronouncements, is OK with this.
I can see how this would work on a technical level but what's the intended use case?
Just ESNI/ECH is a big deal.
I recall that one of the main arguments against encrypted Server Name Indication (ESNI) was that it would only be effective for giant HTTPS proxy services like Cloudflare, and that the idea of IP certs was floated as a solution but dismissed as a pipe dream. With IP address certificates, now every server can participate in ESNI, not just the giants. If it becomes common enough for clients to assume that all web servers have an IP cert and attempt to use ESNI on every connection, it could be a boon for privacy across the internet.
So is this the flow?
1. Want to connect to https://www.secret.com.
2. Resolve using DNS, get 1.2.3.4
3. Connect to 1.2.3.4, validate cert
4. Send ESNI, get separate cert for www.secret.com, validate that
... and the threat you're mitigating is presumably that you don't want to disclose the name "www.secret.com" unless you're convinced you're talking to the legitimate 1.2.3.4, so that some adversary can't spoof the IP traffic to and from 1.2.3.4, and thereby learn who's making connections to www.secret.com. Is that correct?
But the DNS resolution step is still unprotected. So, two broad cases:
1. Your adversary can subvert DNS. In this case IP certificates add no value, because they can just point you to 5.6.7.8, and you'll happily disclose "www.secret.com" to them. And you have no other path to communicate any information about what keys to trust.
2. Your adversary cannot subvert DNS. But if you can rely on DNS, then you can use it as a channel for key information; you include a key to encrypt the ESNI for "www.secret.com" in a DNS record. Even if the adversary can spoof the actual IP traffic to and from 1.2.3.4, it won't do any good because it won't have the private key corresponding to that ESNI key in the DNS. And those keys are already standardized.
So what adversary is stopped by IP certificates who isn't already stopped by the ESNI key records in the DNS?
Sure, I agree, the next increment in privacy comes with using DoT/DoH (in fact some browsers require this to use ESNI at all). Probably throw in DNSSEC next. Having IP certs is just one more (small) step in that direction.
> you include a key to encrypt the ESNI for "www.secret.com" in a DNS record
I've never heard of this, is this a thing that exists today? (edited to remove unnecessary comment)
> I've never heard of this, is this a thing that exists today?
Are you arguing against one small step in a series of improvements by using a nonexistent hypothetical as evidence that the small step is unnecessary?
see: https://en.wikipedia.org/wiki/Server_Name_Indication#Encrypt...
Thanks.
> Another Internet Draft incorporates a parameter for transmitting the ECH public keys via HTTPS and SVCB DNS record types, shortening the handshake process.[24][25]
[25]: Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings | https://datatracker.ietf.org/doc/draft-ietf-tls-svcb-ech/
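For what it's worth, here is a minimal sketch of looking that record up with dnspython (assumptions: dnspython >= 2.1 for HTTPS-record support, and www.secret.com is just the placeholder name from this thread). If the zone publishes ECH keys, the printed record text includes an ech=... parameter:

    import dns.resolver  # dnspython >= 2.1 for the HTTPS (type 65) record type

    # Query the HTTPS record; an "ech" SvcParam, if present, carries the public
    # key material a client uses to encrypt the ClientHello (including the SNI).
    answers = dns.resolver.resolve("www.secret.com", "HTTPS")
    for rr in answers:
        print(rr.to_text())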
DNSSEC is an integrity control, not a privacy control.
gp proposes a scenario where an integrity breach is lifted to a privacy breach, insisting on a strict distinction doesn't seem useful in this context.
I think it’s a fair aside. One doesn’t just “throw in a little DNSSEC” in a security discussion without extreme care.
The point is not showing the watching adversary any DNS names at all. You do DoH, you do the IP cert, you enter TLS before naming any names. The name www.secret.com is never visible in plaintext.
Only helpful if the IP itself is not incriminating or revealing, that is, it's an IP from a large pool like Cloudflare, GCP, AWS, etc.
To my mind, it's much more interesting to verify that an address like 1.1.1.1 or 8.8.8.8 is what it purports to be, but running UDP DNS over TLS is still likely not practical, and DoH already exists, so I don't see how helpful it is here.
Presumably you're encrypting DNS.
> If it becomes common enough for clients to assume that all web servers have an IP cert
That's never going to be a safe assumption; private and/or dynamically assigned IP addresses are always going to be a thing.
Plenty of other responses with good use cases, but I didn’t see NTS mentioned.
If you want to use NTS but can't get an IP cert, then you are left requiring DNS before you can get a trusted time. If DNS is down, then you can't get the time. A common issue with DNSSEC is having the wrong time, causing validation failures. If you have DNSSEC enforced and have the wrong time, but NTS depends on DNS, then you are out of luck with no way to recover. Having the IP as part of your cert allows trusted time without the DNS requirement, which can then fix your broken DNSSEC enforcement.
How are you going to validate an X.509 certificate if you don't know the time?
ChromeOS has a quite interesting design to do this: https://www.chromium.org/chromium-os/chromiumos-design-docs/...
Essentially: keep some minimum value for the time. Then do a single HTTPS request, ignoring validation of the certificate's dates to start with, but use the Date header to validate them later against the minimum / maximum. This has the advantage that it's still an HTTPS request, so it can't be MITM'd, and depending on the implementation it can validate the time quite well (even if the device has run out of power, it can have saved a recent timestamp on disk, so with regular use of the device an old certificate won't be valid, keeping the main useful property of certificates having validity periods).
I don't believe it does this, but you could do this without DNS, as 8.8.8.8 etc. already have IP address certificates.
It would need a custom tool though, as curl only has --insecure, not a way to skip just the notBefore / notAfter validation of the cert. (This is not the only thing to use this technique: OpenBSD's ntpd has a way to constrain time based on HTTP headers: https://man.openbsd.org/ntpd.conf#CONSTRAINTS -- the default ntpd.conf ships with Quad9 configured via IP address.)
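For reference, a sketch along the lines of OpenBSD's default configuration (directive names per the ntpd.conf man page linked above; the Quad9 address avoids any DNS dependency):

    # /etc/ntpd.conf sketch: NTP servers plus HTTPS "constraints" that bound the
    # clock using the Date header of a TLS-authenticated response.
    servers pool.ntp.org
    constraint from "9.9.9.9"            # Quad9 by IP, reachable without DNS
    constraints from "www.google.com"    # a named constraint source as well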
Oh this is a good point! Looking at my DNSSEC domain (hosted by Cloudflare) on https://dnssec-debugger.verisignlabs.com - the Inception Time and Expiration Time seem to be valid for... 3.5 days? This isn't something I look at much, but I assume that is up to the implementation. The new short-lived cert is valid for 6 days. So, from a very rough look, I expect an X.509 certificate is going to be less time-sensitive than DNSSEC - but only by a few days. Also, very likely to depend on implementation details. This is a great point.
Practically, though, you rely on hardware time until you get network time.
Assuming your device gets an IP via DHCP, there is a solution that does not involve hard-coding IPs into software.
DHCP option 42 (defined in RFC 2132) can be used to specify multiple NTP server IPv4 addresses.
(There’s also DHCP option 4, but that’s used to specify the IP for the older RFC 868 time protocol.)
DHCPv6 has option 31 for SNTP (via deprecated RFC 4075), and option 56 for NTP (via RFC 5908).
So, that would probably be the best option: Get an NTP address from DHCP or DHCPv6, use that to set your clock, and then do whatever you need!
(Yes, it does require that you trust your DHCP source, and its NTP reference.)
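As a sketch of the server side of that (an assumption: a dnsmasq-based DHCP server; addresses are placeholders), option 42 can be handed out like this:

    # dnsmasq.conf sketch: advertise an NTP server (DHCP option 42) alongside leases,
    # so clients can obtain time from an IP before any DNS or certificate checks.
    dhcp-range=192.168.1.50,192.168.1.150,12h
    dhcp-option=option:ntp-server,192.168.1.1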
This seems avoidable without needing IP certs: have the configuration supply both an IP and a hostname, with the hostname used for the TLS validation.
Yes, that is absolutely possible, but that doesn't mean it will be the default. I commented recently [0] about Ubuntu's decision to have only NTS enabled (via domain) by default on 25.10. That raises the question of how system time can be set if the initial time is outside of the cert's validity window. I didn't look, but perhaps Chrony would still use the local network's published NTP servers.
[0]: https://news.ycombinator.com/context?id=44318784
Sometimes you want to have valid certs while your dns is undergoing major redesign. For instance to keep your dashboards available, or to be triple sure no old automation will fail due to dns issues.
In other cases dns is just not needed at all. You might prefer simplicity, independence from dns propagation, so you will have your, say, Cockpit exposed instantly on a test env.
Only our imagination limits us here.
So go to keys-are-names.
There's no reason AT ALL to bring IP addresses into the mix.
Consider Wireguard: it works at IP level, but gives you identity by crypto key. You can live without proper DNS in a small internal network.
(This obviously lives well without the IP certs under discussion.)
> So go to keys-are-names.
Elaborate, please.
> There's no reason AT ALL to bring IP addresses into the mix.
Not sure what scenario you are talking about, but IPs are kind of hard to avoid. DNS is trivial to avoid - you can simply not set it up.
"bringing IPs into the mix" is literally the only possible option.
https://yggdrasil-network.github.io/
It's a mesh routing network where your identity is your public key and your IPv6 address is derived from a hash of your public key.
Works perfectly
>> So go to keys-are-names.
> Elaborate, please.
Identify a service directly by its crypto key. When you configure something else to connect to it, treat the IP address as a hint, not the primary identifier for what it's talking to. Standard idiom.
... and before you tell me that that's infeasible because you'd have to modify software, go do a survey of all the code out there, and see how much of it supports IP address certificates. If you're moving around the parts of some big complex system, it's pretty much guaranteed that many of those parts are going to choke if you just blindly go and stick IP addresses in https:// URLs.
And if you're fixing the software anyway, then it's not sane to "fix" it to attach identity to something you're going to want to change all the time, like an IP address. Especially if they're global addresses (which are the only ones Let's Encrypt or any other public CA is ever going to certify) in the IPv4 space (which is the only one any "enterprise" ever seems willing to use).
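To make the keys-are-names idiom concrete, here is a minimal Python sketch. The address and fingerprint are placeholders, and pinning the whole certificate rather than the bare key is a simplification: the IP is only a routing hint, and identity comes from a fingerprint distributed out of band.

    import hashlib
    import socket
    import ssl

    SERVICE_ADDR = ("203.0.113.7", 443)  # routing hint only, free to change
    PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # no CA or name check; the pin is the identity

    with socket.create_connection(SERVICE_ADDR) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
                raise ssl.SSLError("peer does not match pinned fingerprint")
            print("peer verified by pinned key, not by IP address or DNS name")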
The BSD networking stack treats an IP addr as a valid hostname for hostname resolution. As such, every phone, tablet, and computer able to do TLS by hostname can do it by IP. Try it out! Self-sign an IP certificate and try it on your local net. If you put it in the trust store, it’ll validate just fine. The only barrier to adoption was CAs refusing to issue IP certificates at large.
Not quite. DNS hostnames and IP addresses are encoded differently in X.509 certs: one is the dNSName option of the GeneralName choice type in the subjectAltName extension[1], the other is the iPAddress option. (And before you ask, tagging a stringified IP address quad as a dNSName is misissuance per the CA/Browser Forum Baseline Requirements[2] and liable to get your CA kicked from certificate stores. Ambiguous encodings are dangerous.) So some explicit support from the TLS library is indeed required. But I’m indeed not aware of many apps having problems with IP address certs.
[1] https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1....
[2] https://cabforum.org/working-groups/server/baseline-requirem...
Um, the BSD networking stack I'm familiar with doesn't include TLS or X.509 validation at all. The question isn't what you get from gethostbyname. It's what you get when you hand that to your X.509 validator.
> go do a survey of all the code out there, and see how much of it supports IP address certificates.
I've been doing that for years on-prem (~60% old "enterprise/legacy", ~40% modern stuff) and never seen anything that doesn't support it. YMMV, but if all I work with supports it, I won't complain in vain.
> those parts are going to choke if you just blindly go and stick IP addresses in https:// URLs.
I did many times, seems that legacy heavens were always kind to me in this regard.
> something you're going to want to change all the time, like an IP address
That's a personal assumption as well. If your architectures change IPs all the time, OK. The ones I worked with didn't. They always had plenty of components with IPs that didn't change in a decade or two. Even my two previous local ISPs gave me a "dynamic public IP" and kept it for many years. For some companies, changing the IP of their main firewalls/load balancers or VPN servers is unthinkable.
Even on my last project on Public Cloud, the first thing I did was to make sure public IPs won't be dynamic (will survive recreation of services) so I don't have to deal with consequences of my corporate client endpoints and proxies flushing DNS caches randomly. (don't ask me why, but even huge companies still use proxies on a large scale. Good luck with figuring out when such proxy invalidates your DNS record).
> in the IPv4 space
IPv6 is here. Your printer and light bulb will want a cert as well.
It might be interesting for "opportunistic" DoTLS towards authdns servers, which might listen on the DoTLS port with a cert containing a SAN that matches the public IP of the authdns server. (You can do this now with authdns server hostnames, but there could be many varied names for one public authdns IP, and this kinda ties things together more-clearly and directly).
It might also be useful to hide the SNI in HTTPS requests. With the current status of ESNI/ECH you need some kind of proxy domain, but for small servers that only host a few sites, every domain may be identifiable (as opposed to, say, a generic Cloudflare certificate or a generic Azure certificate).
I'm guessing mostly hobbyists and one-off use cases where people don't care to associate a hostname to a project.
One use-case is connecting to a DoT (DNS-over-TLS) server directly rather than using a hostname. If you make a TLS connection to an IP address via OpenSSL, it will verify the IP SAN and fail if it's not there.
Not common, but there is the use case of vanity IPs. The cert for https://1.1.1.1 is signed for the IP as well as the domain name one.one.one.one
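You can see both kinds of SAN in that certificate with a few lines of Python (a sketch; Python 3.7+ is assumed, since that is when IP-address hostname checking landed in the ssl module):

    import socket
    import ssl

    # Open a normally verified TLS connection to the IP and list the certificate's
    # subjectAltName entries: "DNS" entries such as one.one.one.one next to
    # "IP Address" entries such as 1.1.1.1.
    ctx = ssl.create_default_context()
    with socket.create_connection(("1.1.1.1", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="1.1.1.1") as tls:
            for kind, value in tls.getpeercert().get("subjectAltName", ()):
                print(kind, value)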
Might be nice for local/development environment work. Test HTTPS without needing to set up `my-dev-env.staging.service.com` or whatever.
But these are public certificates. Most personal computers are behind NAT, i.e. they lack a public IP address.
Yes definitely.
It helps break free of ICANN's domain name system. This enables competitors to support HTTPS without needing self-signed certs.
Static ip for self hosting at home
You can point a name at your home IP just as easily as any other IP.
The validity is just 6 days, so I'd assume it's not for long lived use cases? Or am I misunderstanding something
they're only for publicly accessible IP addresses, so they'd work the same as regular letsencrypt certs - get a new one when the old one expires.
Certificate validity has no bearing on availability. It just keeps revocation lists short.
Pretty common to have appliances without DNS entries in infra is my guess, I could def make use of this at work.
You're not going to be able to get a cert for any address that's not both (a) global, and (b) actually reachable from the Internet.
The intended use case is to forbid plain http so that you can't communicate with the computer in the next room without 3rd party permission.
What about internal IPv4 addresses? Can we have browsers ignore 192.168.x.x, 172.16.x.x and 10.x.x.x if we can't get certs for those or can we get a public wildcard for internal networks?
I don’t believe that makes sense in the context of certificates generated by a public CA. Unlike domain names, there’s not one owner of 10.10.10.10, there are millions of “owners”…
But what problem is it that you want to solve?
For local development, one can use a tool such as mkcert. For shared internal resources (e.g. within a company), it’s probably easier to use a TLS cert tied to a domain instead of using naked IP addresses.
My cameras are internally accessibly via https.
Every time I open a browser I need to click two buttons to get past the certificate error. Sure, I could configure a real domain, do split DNS, and get a certificate, but these cameras require manually uploading a certificate. I would need to do this every three months for every camera, and eventually even more frequently.
There are numerous tutorials on running your own private certificate authority (CA):
* https://smallstep.com/blog/private-acme-server/ ; https://smallstep.com/blog/build-a-tiny-ca-with-raspberry-pi...
* https://openvpn.net/community-resources/setting-up-your-own-...
* https://www.digitalocean.com/community/tutorials/how-to-set-...
Import the CA's root cert on your browsing devices and anything it issues will be trusted.
Good idea. In Firefox, one could have a separate profile for this, so that the CA cert is not imported in one’s general profile.
Another option could be to put the cameras behind a reverse proxy (e.g. Nginx or Envoy) and terminate TLS there.
and IP certs would help how? you'd have to upload every 6 days.
just run your own CA with ~infinite lifespan.
If you can somehow prove that your device can manage to have a private key that will never be extractable then you should already be able to do that with any regular CA.
The problem with certificates for internal addresses is that every single time someone tries to pull it off, it doesn't take long for someone to buy one of those devices, extract the private key, and then post about it online, requiring the key to be revoked immediately.
There is a solution to that, of course. If you trust your device, import its certificate manually so you can visit the page without errors, or if you have a lot of devices, set up a certificate authority to distribute these certificates. There are open source ACME servers that'll let you publish certificates the exact same way you'd do with Let's Encrypt, except now you can keep everything local.
Browsers shouldn't ignore private networks. Your private network might just be your local router; someone else's might span the globe.
Point your DNS to an existing public server, get a cert, copy it to the internal server, point your DNS to the 192.168.... address, and copy the cert and key over.
The only problem is that some routers blackhole DNS responses pointing to local addresses, so you need to test it.
With automated certs having shorter and shorter expiration this becomes a tedious waste of time just so one can access ones cameras without having to click past the browser warnings.
Nice, another exploit for TLS. The previous ones were all about generating valid certs for a domain you don't own. This one will be for generating a cert for an IP you don't own. The blackhats will be hooting and hollering on telegram :)
IP certificates have been a thing for many years now. The only change is that Let's Encrypt now also provides them.
Does it help getting encrypted https (without self signed cert error) on my local router ? 192.168.0.1 being an example login page.
No, but maybe yes: it would be impossible, and undesirable, to issue certificates for local addresses. There's no way to verify local addresses because, inherently, they're local and not globally routable.
However, if a router manufacturer was so inclined, they _could_ have the device request a certificate for their public IPv4 address, given that it's not behind CG-NAT. v6 should be relatively easy since (unless you're at a cursed ISP) all v6 is generally globally routable.
Even behind CGNAT, you could probably get away with DNS here. If you provide your customers with customeraccount.manufacturerrouters.com, you can then use DNS validation to get a valid certificate for *.customeraccount.manufacturerrouters.com. Put a record in there that points to the local router IP (i.e. settings.customeraccount.manufacturerrouters.com) and you can get HTTPS logins on your local network, even with local IP addresses if the CA/B Forum still allows that.
It's not exactly user friendly, but it'll work.
Personally, I have a private CA that I use. My home router has a domain name pointing towards it and has been loaded up with my private certificate. I get the certificate error once a year when the thing expires but in the mean time I can access my router securely.
No and it shouldn’t. You can just run a proxy with a real domain and a real cert and then use dns rewrites to point that domain to a local host
For example you can use nginx manager if you want a ui and adguard for dns. Set your router to use adguard as the exclusive dns. Add a rewrite rule for your domain to point to the proxy. Register the domain and get a real cert. problem solved
All of my local services use https
No, they won't issue a certificate for a private IP address because you don't have exclusive control over it (i.e., the same IP address would point to a different machine on someone else's network).
No, on the contrary. You can't get a valid certificate for non-global IP, but you can already get a certificate for a domain name and point it to 192.168.0.1.
You have to possess the IP.
no but you can do something closely related:
- get a domain name (foo.com) and get certificates for *.foo.com
- run a DNS resolver that maps a.b.c.d.foo.com (or a-b-c-d.foo.com) to the corresponding private IP a.b.c.d
- install the foo.com certificate on that private IP's device
then you can connect to devices in your local network via IP by using https://192-18-1-1.foo.com
Since you need to install the certificate in step 3 above, this works better with long-lived certificates, of course, but automation helps there.
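A sketch of the mapping such a resolver performs (pure illustration; foo.com is the placeholder domain from the steps above):

    import ipaddress

    def ip_for_hostname(hostname: str, base_domain: str = "foo.com") -> str:
        """Turn a label like '192-18-1-1.foo.com' back into the IP it encodes."""
        if not hostname.endswith("." + base_domain):
            raise ValueError("hostname is not under the expected base domain")
        label = hostname[: -(len(base_domain) + 1)]
        return str(ipaddress.ip_address(label.replace("-", ".")))

    print(ip_for_hostname("192-18-1-1.foo.com"))  # -> 192.18.1.1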
Cloudflare DNS (probably others as well) allows you to enter private IPs for subdomains, so you don't have to run your own DNS. There's no AXFR enabled, so no issues with privacy unless you have someone really determined to dictionary-attack your subdomains.
I considered doing that for a project once.
Then I realised that when my internet was down, 192-18-1-1.foo.com wouldn't resolve. And when my internet is down is exactly when I want to access my router's admin page.
I decided simply using unencrypted HTTP is a much better choice.
> Then I realised that when my internet was down, 192-18-1-1.foo.com wouldn't resolve.
Just add a local DNS entry on your local DNS server (likely your router).
You don't even need to; mDNS has been enabled by default by most devices for ages now. You'll have to look up what name your manufacturer chose (if you use Windows, you can usually hit the network explorer tab and it'll be right in there; don't know about other OSes). It'll even work if IPv4 is broken (if you ran out of DHCP leases or whatever) because it almost always natively runs on IPv6 too.
I could start running my own DNS server, and start manually curating all the important entries in it, sure.
Or I could just use HTTP, or a self-signed certificate. If an attacker intercepts traffic on twenty feet of ethernet cable in my home's walls, I've probably got bigger problems than protecting my router admin password.
Shame that IP address certificates can’t be done via the DNS challenge, but I completely understand why.
Interesting, there is no subject in the example cert shown.
Is this because the certificate was requested for the IP, and other DNS entries were part of the SAN?
We (Let's Encrypt) are getting rid of subject common names and moving to just using subject alternative names.
This change has been made in short-lived (6 day) certificate profiles. It has not been made for the "classic" profile (90 day).
Have you found much trouble with clients that can't cope without CN? Is this one of those situations where anything that can't cope is also hopeless for other reasons (e.g. can't speak TLS 1.2, doesn't understand IPv6, that sort of thing) and so you can tell people you're not their biggest problem?
It’d surely be something like that. CN has been deprecated and SAN support has been required for 25 years at this point[0].
[0] https://datatracker.ietf.org/doc/html/rfc2818#section-3.1
Time for me to dust off CVE-2010-3170 again? :-)
I guess a bunch of "roll your own X.509 validation"-logic will have that bug, but to exploit it you need a misbehaving CA to issue you such a cert (i.e. low likelihood)
This seems to be for public IP addresses, not private RFC1918 ipv4 range addresses.
The only challenges possible are HTTP and TLS-ALPN, not DNS, so the "proof" that you own the IP is that LetsEncrypt can contact it?
Yes, which is the same way control of a domain name is typically checked; DNS is only used in a minority of cases as it can't be as turnkey.
Having DNS available wouldn't be any more "proof". The person applying gets to choose which form of proof will be provided, so adding more options can only ever make it easier to "prove" things.
I don't happen to know if it's actively in use, or whether any of the technical implementation details were formally standardized, but one obvious thing goes like this:
1. Write a CAA (Certificate Authority Authorization) DNS record for your names
2. In the CAA record, say that you forbid anybody except your chosen CA to issue. Competent CAs will obey this instruction (obeying it is mandated by the root trust stores, there are bugs of course but on the whole compliance is very good and is separate from their implementation of specific validation methods)
3. Further, indicate, either in this record or by agreement with your chosen CA, that they must use DNS proof of control. This might be something very nerdy like indicating a specific OID for the method in the CAA record, or it might be a Memorandum of Understanding somebody signed and then they went out for a nice lunch. (See the sketch below.)
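For what it's worth, the method-binding part has since been standardized as RFC 8657, which defines CAA parameters such as validationmethods. A zone-file sketch covering all three steps (names are placeholders):

    ; Allow only Let's Encrypt to issue for example.com, and only via the DNS-01 method.
    example.com.  IN  CAA  0  issue  "letsencrypt.org; validationmethods=dns-01"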
Interesting. I wonder if XMPP federation would work with such a certificate.
Are there public XMPP servers using just the IP for the host? Never heard of this, I could see this being the case internally.
Is this for ipv4 and ipv6?
It will work for both.
So does anybody have a pointer to the official justification for this insanity?
The announcement is https://letsencrypt.org/2025/01/16/6-day-and-ip-certs/. I don't think it's more complicated than: there exist services that for one reason or another don't have a domain name and are instead accessible by a public static IP address, and they need TLS certificates for security, and other CAs offer this, so Let's Encrypt should too. Is there any specific reason why they shouldn't?
Hmm. Absolutely no explanation of why there's a need. Given only that announcement, I'd have to assume that the reason is "because we can".
So the first reason not to do it is that you never want to change software without a good reason. And none of the use cases anybody's named here so far hold water. They're all either skill issues already well addressed by existing systems, or fundamental misunderstandings that don't actually work.
Changing basic assumptions about naming is an extra bad idea with oak leaf clusters, because it pretty much always opens up security holes. I can't point to the specific software where somebody's made a load-bearing security assumption about IP address certificates not being available (more likely a pair of assumptions "Users will know about this" and "This can't happen/I forgot about this")... but I'll find out about it when it breaks.
Furthermore, if IP certificates get into wide use (and Let's Encrypt is definitely big enough to drive that), then basically every single validator has to have a code path for IP SANs. Saying "you don't have to use it" is just as much nonsense as saying "you don't have to use IP". Every X.509 library ends up with a code path for IP SANs, and it effectively can't even be profiled out. Every library is that much bigger and that much more complicated and needs that much more maintenance and testing. It's a big externalized cost. It would be better to change the RFCs to deprecate IP SANs; they never should have been standardized to begin with.
It also encourages a whole bunch of bad practices that make networks brittle and unmaintainable. You should almost never see an IP address outside of a DNS zone file (or some other name resolution protocol). You can argue that people shouldn't do stupid things like hardwiring IP addresses even if they're given the tools... but that's no consolation to the third parties downstream of those stupid decisions.
... and it doesn't even work for all IP addresses, because IP addresses aren't global names. So don't forget to special-case the locally administered space in every single piece of code that touches an X.509 certificate.
TLS certificates for IP addresses are already a thing that exists. You can, for instance, go to https://1.1.1.1 in your browser right now (it used to actually serve the HTML from there but now it's a redirect). If that doesn't work in a given TLS client, this will be treated as a bug in that client, and rightly so. The genie is out of the bottle; nobody is going to remove support for things that work today just because it'd be slightly cleaner. So TLS clients are already paying the maintainability costs of supporting IP address certificates; this isn't a new change.
I'm not sure why private IP addresses would need to be treated differently other than by the software that issues certs for publicly trusted CAs (which is highly specialized and can handle the few extra lines of code, it's not a big cost for the whole ecosystem). Private CAs can and do issue certs for private IP addresses.
Also, how would DoH or DoT work without this?
> TLS certificates for IP addresses are already a thing that exists.
Still not wide use. It's when it gets into wide use that you end up having to include it in everything.
For now, it's a parlor trick, and it's a parlor trick that shouldn't work.
> nobody is going to remove support for things that work today just because it'd be slightly cleaner.
They work, but they shouldn't, and they aren't actually used except by crazy people.
> If that doesn't work in a given TLS client, this will be treated as a bug in that client, and rightly so.
I've tried to use TLS on microcontrollers that barely had the memory to parse X.509 at all. Including stuff just because you can doesn't make that better.
... and I'm not going to go check the relevant RFCs, but I very much doubt that IP SANs are listed as a MUST. If I'm wrong, well, that's still a bug in the RFCs.
> Also, how would DoH or DoT work without this?
Hardwired keys for your trusted resolvers. Given that the whole CA infrastructure long ago gave up on doing any really robust verification of who was asking for a cert, making your DNS dependent on X.509 is a bad idea anyway. But if you really want to do it even though it's a bad idea, you can also bootstrap via the local DNS resolver and then connect to your DoH/DoT server using a domain name.
DoH, of course, is a horrible idea in itself, but that's another can of worms.
It seems to me they could just as easily issue subdomains and certs for said IPs and make the whole thing infinitely safer.
I could see the opposite argument: with domain names, who knows, someone could steal one or hack the registrar, the registrar could be evil, DNS servers could be untrusted and/or evil or MITM'd... By connecting to an IP you're engineering out entire classes of weaknesses in the scheme.
Sure, someone could steal google.com I guess
https://github.com/cabforum/servercert/pull/579/commits
</s>
I'm sorry, but how is "Require validation of DNSSEC (when present) for CAA and DCV Lookups" related to issuing X.509 certs that include IP address SANs? I don't see any connection, and I didn't spot anything about it on a quick skim of the comments.
Anything from people who are afraid of increasingly onerous DNS requirements to breakage because they can't fix their parent domain's DNSSEC misconfiguration. It seems like an interesting timing coincidence to me, so I wonder if there's some (ir)rational explanation. (Implementing a new SAN that must inherently have the gap you are finally addressing is not a bit funny to you?)
I've personally never felt comfortable using regexes to solve production problems. The certificate code referenced here shows why:
https://github.com/mozilla-firefox/firefox/blob/d5979c2a5c2e...
Yikes.
I think that's not doing anything security-critical, it's just formatting an IPv6 address for display in the certificate-viewer UI.
All that regex does is split an IPv6 address into groups of 4 digits, join them with ":", and collapse any sequence of ":0000:" to "::". I don't see anything problematic with it.
> and collapses any sequence of ":0000:" to "::"
Which is an error. Any ip like 2001:0000:0000::1 is going to be incorrect. It willingly produces errors. Whoever wrote this didn't even spend a few seconds thinking about the structure of IPv6 addresses.
> I don't see anything problematic with it.
Other than it being completely wrong and requiring a regex to be compiled for an amount of work that's certainly less than the compilation itself.
It only operates on a 32-digit IPv6 address, so it won't already be abbreviated. My phrasing was inexact: it replaces only the first sequence of any number of ":0000:" groups with "::".
> Any ip like 2001:0000:0000::1 is going to be incorrect.
This is neither a possible input nor a possible output of that code.
That example doesn't work, but an IPv6 address like: 3fff:0020::
Would be in the IP SAN as 3fff0020000000000000000000000000, which this code expands:
Which has one too many parts and doesn't parse as an IPv6 address. But like mentioned this is just presentation code. I don't want to waste time if this isn't actually a bug, but maybe someone on the LetsEncrypt trial could actually make a cert to see if IP addresses formatted like that are a problem in reality...
That one does look like a bug. I stand corrected.
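For comparison, a minimal sketch using Python's standard ipaddress module, which produces the canonical compressed form directly from the SAN's raw bytes (using the 3fff:0020:: example above):

    import ipaddress

    # The iPAddress SAN carries the raw 16 bytes of the address; the standard
    # library renders the canonical compressed textual form.
    raw = bytes.fromhex("3fff0020000000000000000000000000")
    print(ipaddress.IPv6Address(raw))  # -> 3fff:20::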
> Any ip like 2001:0000:0000::1 is going to be incorrect
How so?
> Other than it being completely wrong and requiring a regex to be compiled for an amount of work that's certainly less than the compilation itself.
It's not. And the sequence you describe is not even parsed, because colons are not part of the IPv6 extension of the SAN. Please educate yourself before spilling such drivel.
Unless you see a glaring issue I don't: I think you are getting the causality wrong there. You "Yikes" because of your discomfort and lack of practice with regexes.
> You "Yikes" because of your discomfort and lack of practice with regexes.
That's exceptionally presumptuous, to the point of being snotty.
> I think you are getting the causality wrong there.
Where did I imply causality? This was simply an occasion to look at the code. This is bad code. I would not pass this. What's your _justification_ for using a regex here?
> Where did I imply causality?
> > The certificate code referenced here shows why
So what's the implication here, then?
> This is bad code.
Without justifying further I think we're on equal footing on the snottiness here (:
What's bad? Why not use a regex here? It's not like they're using it to parse user-controlled HTML. Simple string transformations like this are a great use case where manual character iteration easily becomes inefficient and messy. And you may introduce bugs in the process (unicode length bugs are common).
Do you also avoid grep and sed without the -F flag in shell?
This is incredibly dumb. The three-way handshake and initial key exchange is your certificate.
That would be fine if browsers didn't throw up giant warning signs when using self-signed certificates.
Usually you can just import the leaf self signed cert as a CA in your OS trust store and the problem goes away (assuming it has an IP SAN). Slightly tedious but you can issue the certs with long validity
That sounds like a defect in the browser design.
Or maybe it's because you actually want an identity to verify (which an IP address is not.)
And this protects you from a hostile network how?
How does the certificate? If you already have to do the TLS handshake it doesn't change anything.
A verified certificate lets you know you didn't handshake with an attacker in the middle.
Let me rephrase that: How is the CA supposed to know they didn't handshake with an attacker? All they have is the IP, there's no identity to check like with DNS.
Boy that doesn't sound good.
I expect SAN in this case means Subject Alternative Name, not Storage Area Network.
Sigh... I wish people would use their words before trotting out possibly-ambiguous (or obscure) acronyms. It would help avoid confusion among readers who don't live and breathe the topic on the writer's mind.
There’s only one, and not really obscure, interpretation of this acronym in a technical forum post announcement from a TLS certificate authority; the context was sufficient.
If you don't know how to interpret "SAN" in a blog post from a TLS certificate issuer, I don't think you're the target audience for this post.
Lots of people on HN are not the target audience for any given post, yet are still interested.
(And my point applies to all writing and speaking, not just this post.)
If it were a blog post or announcement, we’d surely have included more context; this was a forum post really intended for limited distribution.
You just used HN without expanding that acronym! :)
OK, but how hard is a link to Wikipedia?
please expand the abbreviation "OK"
It's standard academic writing practice to introduce the full acronym on first usage in any given text.
Way more people should be familiar with the concept since it's very useful and ensures clear communications.