I'm really pleased to see this initiative, and I've used the private beta with letsencrypt-nosudo[0] to issue a certificate. But after successfully getting a certificate, my site failed the SSL Labs test[1] with an 'unknown CA' error, even though I used the newer one that should have been trusted. It was probably down to user error and the additional complexity of denying sudo privileges for the setup script, but it took much more work than I expected. Then I noticed the certs will only ever be good for 90 days, so I'll have to do this multiple times a year!
Bottom line for me is that with DV certs so inexpensive and simple to get, I'd rather pay a few dollars a year for normal commercial certificates. I can do things like use CloudFront at a custom subdomain with HTTPS without needing to point DNS somewhere every few months to get a cert reissued.
I simply can't justify the extra work involved to get 'free' certificates, and I'm happy to continue buying regular DV certs. Maybe these are temporary limitations and if so I will definitely try again in the future.
I think you really have to understand that, at its heart, Let's Encrypt is not about free certs as much as it is about automatic certs. If you just want a cert, definitely use an established provider. But a year from now, LE will be making this a "set and forget" thing, which is how it should be. LE is NOT a painless way to get certs for legacy infrastructure. I found this out by using it for an Elastic Beanstalk hosted site. I just wrote about it at https://go-to-hellman.blogspot.com/2015/11/using-lets-encryp...
You nailed it. It's important that our certs be free because we can't automate a billing interaction. If we had to charge then sysadmins couldn't just type a command and be on their way. Automated renewal could fail because billing info was out of date. This stuff has to just work, reliably, if we're going to expect the entire Web to use TLS.
On the plus side, Amazon could choose to automate IAM SSL storage and renewal through Let's Encrypt so it would be fully automatic. Might take a bit until they do that though...
I actually love the idea of 90-day (or shorter) certificates! Once you automate the process of replacing your certificate (which Let's Encrypt will greatly help with), it won't matter how short the period is. Also, if a key gets compromised, it'll be valid for a shorter time. Give https://letsencrypt.org/2015/11/09/why-90-days.html a read! If you want to get more in-depth about certificate revocation, http://news.netcraft.com/archives/2013/05/13/how-certificate... is also a great/depressing read.
I would assume not, given that https://letsencrypt.org/2015/11/09/why-90-days.html cites "According to Firefox Telemetry, 29% of TLS transactions use ninety-day certificates. That’s more than any other lifetime."
Hrm. Good point. At some point I remember some SEO guru or other claiming that long-duration domain name registrations were good for SEO, but looking at it now, it seems that was a case of correlation !== causation: MS, Goog, et al. register their domain names for decades at a time, and also tend to have high search rankings.
As for the rest, chalk it up to my over-active imagination, compounded with that bad knowledge. SSL is an SEO boost (according to a random Googling), and if domain name expiration were a factor, it made sense to me that SSL expiry would factor in too.
If someone compromised the key they also compromised the system used to automatically generate more keys, so a short expiration is not as helpful as it looks.
It's even worse than that:
A smart attacker will copy the method used to generate keys, and leave the server. Then they can keep generating keys and you will probably never notice.
I feel that automation is a mistake; something security-sensitive like this should be on a completely different machine.
Generating the cert involves proving that you own the domain. An attacker can't copy that away (unless they've stolen the domain entirely from you, in which case the SSL keys are not your primary issue).
I'm not in the beta, and thus haven't been able to play with it yet. But, I don't believe there'd be anything prohibiting you from generating the certs on a separate machine. In fact, I'd imagine that's what you'd want to do (if you have more than one server) rather than generating a separate certificate for every web server or load balancer.
That's correct. I've recently deployed LE on a side project. The official letsencrypt client supports a mode called webroot, where you basically tell the client where your webroot is located on your file system, and it will place a file in .well-known/acme-challenge/<random> to confirm ownership. I use nginx in front of multiple domains with different backends (which do not all have the platform equivalent of a webroot), so I simply added a location directive for .well-known to all vhosts pointing to the same directory which I then pass to letsencrypt.
It would be trivial to move this to a separate machine by using a reverse proxy for .well-known instead of serving it directly from your load balancer's filesystem. The rest is just scp'ing your certificates and keys to your load balancers (or using your configuration management software of choice to achieve the same). With ACME being an open protocol it's quite likely that someone will end up writing a client specifically for this use-case, making it even easier.
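As a rough sketch of that shared-webroot idea (the paths here are assumptions, and a temporary directory stands in for a real webroot):

```shell
# Simulate the single challenge directory that every vhost's
# /.well-known/acme-challenge/ location would point at.
WEBROOT=$(mktemp -d)                      # stand-in for e.g. /var/www/letsencrypt
CHALLENGE_DIR="$WEBROOT/.well-known/acme-challenge"
mkdir -p "$CHALLENGE_DIR"

# The ACME client drops a token file here; the CA then fetches it
# over plain HTTP to confirm you control the domain.
echo "sample-challenge-response" > "$CHALLENGE_DIR/sample-token"
cat "$CHALLENGE_DIR/sample-token"

# Each nginx vhost would only need a directive along the lines of:
#   location /.well-known/acme-challenge/ { root /var/www/letsencrypt; }
```

Because every vhost maps that one path to the same directory, one client run can answer challenges for all of your domains.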
Howdy, I wrote letsencrypt-nosudo. Sorry that the experience was so painful! Mind filing a detailed issue on the repo so I can fix it and make the user experience better?
Oh I think it's simple enough and thanks for the tool, but I'm comparing it to the workflow I currently use which has fewer steps (generate CSR, paste in form on issuer's website, click link to validate domain). Headed out now but I'll try to provide some better feedback later.
Yeah, unfortunately, the ACME protocol requires registering accounts and making requests using public key signatures, so it's not as user friendly as email confirmations (despite being way more secure and automatable).
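For the curious, the flavor of what the protocol requires can be sketched with plain openssl. This is not the actual ACME message format (which wraps everything in JWS), just an illustration of signing requests with an account key:

```shell
DIR=$(mktemp -d)

# Generate an account key, as an ACME client does on registration.
openssl genrsa -out "$DIR/account.key" 2048 2>/dev/null

# Every request to the CA is signed with that key...
printf 'example-acme-request' > "$DIR/request"
openssl dgst -sha256 -sign "$DIR/account.key" \
    -out "$DIR/request.sig" "$DIR/request"

# ...and the CA verifies the signature against the registered public key.
openssl rsa -in "$DIR/account.key" -pubout -out "$DIR/account.pub" 2>/dev/null
openssl dgst -sha256 -verify "$DIR/account.pub" \
    -signature "$DIR/request.sig" "$DIR/request"
```

Unlike an emailed confirmation link, this is something a script can do unattended, which is the whole point.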
I am actually incredibly happy with the process. nginx requires these concatenated certificates and I always get the order / file formats / trailing spaces wrong. With LE I got it up & running in < 3 minutes, including manually updating the config file.
There are definitely some rough edges in the tooling, and smoothing them out will make this less painful. You likely used it against the development endpoints, which only issue certificates signed by an untrusted CA ("happy hacker fake CA" or something like that). However, the point[1] of the short certificate lifetime is to incentivize automating renewal. I'm highly hopeful that in a short while, having an HTTPS certificate will be a matter of apt-get install.
Possibly user error still, but I use HTTPS across all my personal sites and they all rate well on third-party tests so I'm not a total noob (I hope...).
Note that you also have to do this step for other CAs, it's just that browsers often cache intermediates that they've seen before so sites may mysteriously work for most or almost all users but fail for others if the intermediates are missing. This configuration problem is extremely common and often confusing for people to diagnose. It might be more obvious with Let's Encrypt's intermediate because not as many browsers have that cached yet, but the configuration issue is technically the same regardless of what CA you're using. Sending chain files is mandatory if you want to be compatible with all user-agents that accept the root that you're chained to.
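A minimal sketch of what "sending the chain" means in practice (the dummy file contents are placeholders; the cert.pem/chain.pem/fullchain.pem naming mirrors what the Let's Encrypt client writes, but treat the paths as assumptions):

```shell
DIR=$(mktemp -d)
printf 'LEAF CERTIFICATE\n'         > "$DIR/cert.pem"   # your certificate
printf 'INTERMEDIATE CERTIFICATE\n' > "$DIR/chain.pem"  # the CA's intermediate

# The server should send the leaf first, then the intermediate(s):
cat "$DIR/cert.pem" "$DIR/chain.pem" > "$DIR/fullchain.pem"
cat "$DIR/fullchain.pem"

# To check what a live server actually sends (requires network access):
#   openssl s_client -connect example.com:443 -servername example.com -showcerts
```

If you point your web server at cert.pem alone instead of the bundle, you get exactly the "works in my browser, unknown CA for everyone else" symptom described above.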
Given the scope of what they are trying to accomplish and the fact that I've never worked on a project which was delivered on time and with all features complete, I'm pretty happy that the delay isn't longer! :)
I am still unable to run the tool to get certificates on Windows. I know there is some development in progress, but it is still far from finished. Windows runs around 30% of web servers. Please don't neglect it.
I'd be interested in whether my script[0] works on Windows, aside from the obvious fact most Windows servers don't have Ruby.
You could also just put a gets on line 43 to pause it before the verify, and copy the generated verify files onto said Windows server from any Linux box anywhere.
I am beginning to wonder how much effect Let's Encrypt will really have on wide TLS deployment. A very large portion of the web is stuck at shared hosting services, such as Go Daddy, Lunarpages, et al. These services generally charge for TLS hosting, and due to the 90-day issuance on Let's Encrypt certificates it seems somewhat infeasible to use their certificates on shared hosts which offer very limited (if any) shell access.
That's a good point. Domain registrars like GoDaddy generally don't make much on domains alone and focus on selling additional services like hosting packages (and SSL), which is probably where most of their money comes from.
However, most browser vendors are already making plans to phase out HTTP without TLS by only providing new features/APIs to HTTPS sites (and eventually by displaying http:// as insecure in the UI).
I think in the end this will force shared hosting providers to include domain-validated certificates (from e.g. letsencrypt) in their base packages for free. They would then probably push OV and EV certs to make up for any revenue loss.
Side question: does anyone actually enjoy running a VPS? Besides managing the sites on it, you have to maintain the VPS itself, keep it up to date, and it's prone to security bugs and flaws. Am I missing something here? I remember setting up multiple VPSes on Linode / DO, and it was always a painful process of installing the OS, installing the whole stack, configuring everything, setting up users / roles, firewalls, etc.
On top of that, whereas on a shared host I click a button to host a second domain, with a VPS I have to SSH in and manually edit server config files.
But everyone always recommends running a VPS so I can't help but imagine I've either missed some magical tool that makes running a VPS a snap or it's just not a realistic solution for most people.
I don't think it's a question of "enjoy" so much as "need". If you're just doing some static hosting, perhaps with PHP, sure, go with shared hosting. The second you need to run your own Java server, Go server, etc., you're in VPS territory.
On a basic level, shell scripts to do the basic config work in a pinch. But if you can learn the basics of Ansible, setting up a new VM can take just a few minutes and not be painful at all. It's quicker for me to add a new VM and apply a few Ansible roles I've written for a new client site/app than to log into some shared hosting provider and click through their UI to do the same.
I much prefer using NearlyFreeSpeech. I like being able to outsource the server management to them. Since my sites are static and largely cached by Cloudflare, I think I've paid them less than $10 for the last year's worth of service.
I know DO for instance has ['projects'](https://www.digitalocean.com/community/projects) – "apps, wrappers, and integrations created by our developer community using the DigitalOcean API" – so you don't necessarily have to set up everything yourself.
But if you're configuring multiple nodes that are similar or the same, you should definitely be using images. Set up one node, create an image of it, and then create new nodes from the image.
Let's Encrypt is brilliant and the web needs it but the tooling isn't quite there.
For example, with haproxy you need the entire chain and private key together in one file, which I currently have to assemble manually. As the API is open it's doable - I may even write something myself.
I can't wait until there's something - written by me or somebody else once the API is complete - that you can stick in a cron job, and that does the concatenation and reloads haproxy/nginx/whatever. Until then the whole thing is beta.
It's not even the monetary aspect - I'd happily pay for certs - but LE is so far along the way to making certs a devops thing as opposed to a finance/ops thing that it needs to be encouraged. Donation incoming...
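Presumably that cron job would boil down to something like the following (the paths and the reload command are assumptions; dummy files stand in for the real PEMs):

```shell
LIVE=$(mktemp -d)   # stand-in for e.g. /etc/letsencrypt/live/example.com
printf 'CERTIFICATE CHAIN\n' > "$LIVE/fullchain.pem"
printf 'PRIVATE KEY\n'       > "$LIVE/privkey.pem"

# haproxy wants the chain and the key together in a single PEM file:
cat "$LIVE/fullchain.pem" "$LIVE/privkey.pem" > "$LIVE/haproxy.pem"
cat "$LIVE/haproxy.pem"

# A cron entry could then renew, concatenate, and reload, e.g.:
#   0 3 * * 1  /usr/local/bin/renew-cert && service haproxy reload
```

The concatenation order (chain first, key last) matters for haproxy, so a script doing this should be careful not to swap the two.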
I don't think cost is what's keeping people on shared hosting versus VPS. It's that there is a whole wide world of people just doing a little static hosting. They don't want, need, or know how to use a VPS.
In fact, I think most people on shared hosting are not hosting static sites, but rather sites based on one or more of the hundreds of PHP application frameworks out there (WordPress, PHPBB, Joomla, Drupal, Magento, MediaWiki, etc, etc.)
Back when I used to do frequent freelancing for the type of client who used shared hosting, I don't think I met a single one who hosted static sites, even though for many of them static would have been more appropriate and far more secure and performant.
There are reasonable offerings for between 10 and 20 USD/year from companies that are even likely to be around long enough for you to use the full year (there are cheaper ones still, from the other sort of company), some with enough CPU + memory + disk space to be useful for more than static hosting too. Look at places like lowendtalk.com for such offers when they turn up.
Many hosting providers are choosing to integrate LE into their products directly so even if you don't get shell access you will still be able to get a cert. As far as I know both Plesk and cPanel are also working on official plugins for their software.
I would imagine that shared hosting providers will just request the certificate for you in the future. Remember: This is about domain validated certs, so there's zero difference whether you request the certificate or whether your hoster does it for you.
Is there finally a way to renew the certificate without taking down the web server listening on :443? This was the major thing missing from being able to deploy it in production.
Caddy (currently in beta) will issue and renew SSL certificates automatically with no downtime (on Linux; Windows has very brief downtime during restarts).
I can confirm that Caddy + Let's Encrypt is the most seamless and awesome way to run TLS. I did this last night for one of my LE beta whitelisted domains and it took MAYBE 4 minutes. Caddy did all the work. Kudos to the Caddy team for such a great admin experience.
Hopefully some kind of OpenBSD support comes along; it's pretty much letsencrypt-nosudo until then. The standard letsencrypt program doesn't appear to work on Debian 7 due to its outdated openssl and whatnot.
The beta was good timing for me. I installed an LE cert on Ubuntu running Apache and it's working fine. The instructions were a bit unclear about whether the "auto" option works yet for that setup (it doesn't). Also, I had an issue with permissions on the cert directory - I use a group for my server permissions (instead of running as root), so I had to grant that group access to the cert directory. But the process is still better than what I've experienced implementing a Comodo cert.
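A sketch of that permissions fix (the group name and the /etc/letsencrypt path are assumptions; it's demoed on a temp dir with the current user's group so it doesn't need root):

```shell
# The real path would be /etc/letsencrypt/live, and "www-data" (or whatever
# group your server runs as) would replace $(id -gn) below.
ETC=$(mktemp -d)
mkdir -p "$ETC/letsencrypt/live"

chgrp -R "$(id -gn)" "$ETC/letsencrypt"
chmod -R g+rX "$ETC/letsencrypt"   # group can read files and traverse dirs

stat -c '%a' "$ETC/letsencrypt/live"
```

The capital X in g+rX adds execute (traverse) permission only on directories, so you don't accidentally mark the key files executable.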
What is the value of checking Google's Safe Browsing API before issuing a certificate when the browser can/should use the same Safe Browsing API to block the phishing website? Move the policy to the user agent.
If you look at Chrome's changes to its HTTPS indicators, sites with these auto-issued certs get the lock icon, so users will interpret them as "secure". It seems easy to create fraud sites and give them a legitimate look.
[0] https://github.com/diafygi/letsencrypt-nosudo
[1] https://www.ssllabs.com/ssltest/index.html