farblog

by Malcolm Rowe

HTTPS

Inspired by Google’s recent decision to boost the ranking of HTTPS sites, and because it’s something I’ve been meaning to do for a while (and also because it’s generally the right thing to do), I’ve just moved this blog to serve via HTTPS.

I pretty much just walked through this set of instructions from Eric Mill, using the SSL configuration from Mozilla’s OpSec team (seriously, don’t try to do this bit yourself: the folks at Mozilla know what they’re doing). All told, it only took a couple of hours.
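
For what it’s worth, the server-side changes amount to only a handful of directives. Here’s a minimal sketch of the kind of thing the guide walks you through, assuming nginx (which is what Eric’s instructions use); the hostname, certificate paths, and cipher list below are placeholders rather than my actual configuration:

    server {
        listen 443 ssl;
        server_name example.com;                  # placeholder hostname

        # The certificate file is the site certificate concatenated with
        # the CA's intermediate certificate.
        ssl_certificate     /etc/nginx/ssl/example.com.chained.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # Take the protocol and cipher settings verbatim from Mozilla's
        # server-side TLS guidance rather than inventing your own
        # (which protocols I ended up enabling is discussed below).
        ssl_ciphers '...';                        # Mozilla's recommended list
        ssl_prefer_server_ciphers on;

        ssl_session_cache shared:SSL:10m;         # reuse sessions across requests

        # ...normal site configuration follows...
    }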

Like Eric, I also got my free certificate from StartSSL; they seem reasonable enough at the moment, and I can always change later if I feel like it.

Other than needing to switch to a protocol-relative URL for Google Web Fonts, the site worked first time (though it helps that it’s fairly simple: all the odd stuff got left behind when I split the serving of this blog to a Google Compute Engine instance).
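
(For anyone unfamiliar with the term: a protocol-relative URL just drops the scheme, so the font stylesheet is referenced as //fonts.googleapis.com/… and picks up http: or https: from the page that includes it.)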

However, unlike Tim, I didn’t keep the HTTP version of the site around: all http:// URLs now result in a 301 to the HTTPS equivalent [1]. I haven’t yet enabled HSTS to pin the site to HTTPS, but I’ll probably do so in a week or so, once I’ve checked to see whether any problems turn up.
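
The redirect side of things is equally small. A sketch of the plain-HTTP server block (again assuming nginx, with placeholder names, and including the robots.txt exception from footnote 1) might look something like this:

    server {
        listen 80;
        server_name example.com;                  # placeholder hostname

        # robots.txt is served directly rather than redirected (see footnote 1).
        location = /robots.txt {
            root /var/www/example.com;            # placeholder document root
        }

        # Everything else gets a permanent (301) redirect to the HTTPS site.
        location / {
            return 301 https://$host$request_uri;
        }
    }

    # HSTS, once I enable it, is one more header in the HTTPS server block,
    # for example (one year):
    #     add_header Strict-Transport-Security "max-age=31536000";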

I’m also not especially concerned about backward compatibility with old clients (I used the Non-Backward Compatible Ciphersuite list, for example). I was originally planning to enable only TLS 1.2, but it turns out that I do still care about some older clients (no, not Windows XP): Googlebot and pre-KitKat versions of Android (presumably the Android browser rather than Chrome-on-Android), both of which only support TLS 1.0 [2]. In the end, I only ended up disabling SSL2 and SSL3.
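
In nginx terms (again, a sketch rather than my exact configuration), that decision comes down to a single ssl_protocols line:

    # The original plan: TLS 1.2 only.
    #ssl_protocols TLSv1.2;

    # What I actually enabled: everything except SSL2 and SSL3, so that
    # TLS 1.0-only clients like Googlebot can still connect.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;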

Once I’d tested the site, the only things left to do were to register the HTTPS URL in Google Webmaster Tools and to update a few incoming redirects to avoid long redirect chains.

I also found the following sites useful:

In summary: for many sites, enabling HTTPS is pretty trivial. If you’re making a new site, consider making it HTTPS-only.


  1. Except for robots.txt, which is served directly over HTTP. I’m not sure whether that’s actually important, but it seemed like robots might not want to follow redirects to fetch robots.txt, even if they would for the other content. 

  2. In addition, the version of curl I have on my desktop only supports TLS 1.1, so I would have at least wanted to enable that.