HTTPS provides more than just privacy (certsimple.com)
211 points by nailer on Jan 26, 2016 | hide | past | favorite | 83 comments


Especially point 4 («Not having your content modified by carriers») is getting more and more important these days. Whenever I hear people saying that they don't need https as they are providing a strictly read-only experience I always wonder what they will say as their ads get replaced with other ads by carriers and other "transparent" proxies in between.

And if your site is only moderately complex and using some of the more advanced HTTP features (pipelining, websockets or plain long polling), you might be better off using SSL too, because by now the likelihood of any of this working unhindered by various "security" and "traffic optimization" tools is approaching zero very quickly.

On the other hand, I wonder how long it's going to take before carriers and providers in general mandate the installation of a root cert "for your security" in order to be able to continue messing with the traffic to insert ads or "increase security".


To reiterate your point. I've used free/complimentary wifi that overlays ads (specifically: megabus wifi). It is awful and even made some sites difficult to use. My solution at the time was to use a VPN, but it would have been nice if I didn't have to resort to that.


I have that problem with Comcast and their data cap notifications being injected into HTTP web sites.

Yesterday I called them and agreed to pay their $35 extortion fee to get "unlimited" data along with being removed from their proxy system. :/


wouldn't it be cheaper to get a $5/month vpn/vps?


That doesn't get you out of Comcast's cap system.


>> the likelihood of any of this working unhindered by various "security" and "traffic optimization" tools is approaching zero very quickly.

Except that anti-virus software is now man-in-the-middling HTTPS, and it is considered "normal". All the security of HTTPS being thrown away for the sake of scanning all transfers.


If it's happening locally (likely) and if it's handling certificate validation correctly (way less likely), then there's no difference connection-security-wise between the AV tool running or not running.

Of course, the AV tool will still mess with your connections and break them in interesting ways, but that has been true since the beginning of AV, and in the case of a local AV you can at least ask the user to try turning it off. If it works then, you can blame the AV software.

If you have a proxy server which does, for example, duplicate some POST requests, accountability is much harder and you will probably have to resort to adding workarounds for the issue (this has happened to me).


Another reason not to use anti-virus scanners.


What do we do about censorship? When countries block all foreign HTTPS?

It really feels like websites should know what transport they're operating on (secure vs insecure) and present what information they reasonably can over an insecure one. Though turning every website into a "modal" thing is also a pretty tall order.


If they are that tyrannical that they would block all foreign HTTPS then there is nothing we can really do for them.


Other than helping them access Tor through unlisted bridge nodes.


It would also make the censors' job easier.


Author here! Surprising thing when researching this was how fast some of the stuff talked about a year ago is already upon us in the upcoming browser releases:

- If you're running Chrome Canary, try visiting http://html5demos.com/geo (which hasn't been updated for HTTPS yet). Geolocation consistently fails over HTTP.

- Firefox nightlies have already started warning for sites that allow POST over HTTP: https://www.fxsitecompat.com/en-CA/docs/2015/non-https-sites... (for type=password only, see dsp1234's note below)

PS. since this article has gotten so much attention: does anyone on HN know of a newer app than Driftnet or Etherpeg for reassembling images from HTTP traffic? I wanted something newer (that people could install themselves on a Mac) but couldn't find anything.


Here is the Chrome bug report to remove geolocation support for insecure origins (in Chrome 50):

https://codereview.chromium.org/1530403002/

And here is the Firefox feature request, filed in 2014:

https://bugzilla.mozilla.org/show_bug.cgi?id=1072859


The geolocation demo works on Chrome for Android though.


Firefox nightlies have already started warning for sites that allow POST over HTTP

only for pages that contain a password input


Maybe a nitpick, but I think this section could be clearer:

"Except HTTPS traffic won't show up on Malory's screen: things encrypted with a website's public key can only be decrypted with the website's private key. Because the website's private key is (hopefully) only available to the website, Malory can't decrypt the traffic."

I know it's just an overview, but this explanation seems a bit confusing. I mean, it's true that the trust is anchored on the private key but that's only one part of it.

My (simple) understanding of TLS is that basically, the public/private keys are used to secure the shared secrets used to derive the symmetric session keys, OR to authenticate something like a DH key exchange that's used to derive the session keys. In the first case, the client would generate a shared secret, encrypt it with the public key, and then send it to the server. In the second case, the authenticated DH key exchange allows both client and server to arrive at the shared secrets.

In either case, it's these symmetric session keys which are used to encrypt/decrypt the traffic.

My understanding is that even if you capture the encrypted traffic and then later on, you obtain the private key, you wouldn't be able to decrypt the traffic if the parties used something like a DH key exchange. (This would be (perfect) forward secrecy?) Of course if you have the private key and no one knows yet, then you could impersonate the server and MiTM future connections, I think.

Someone with better knowledge could explain this better.
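For what it's worth, the DH exchange described above fits in a few lines. A toy sketch with Python's stdlib (the prime and generator here are demo-sized placeholders, nothing like a real TLS group, and the final hash stands in for the TLS PRF/HKDF):

```python
# Toy finite-field Diffie-Hellman: each side combines its own private value
# with the other side's public value, and both land on the same shared
# secret, which never crosses the wire.
import hashlib
import secrets

p = 2**127 - 1   # a Mersenne prime; fine for a demo, far too small for real use
g = 5

a = secrets.randbelow(p - 2) + 2    # client's ephemeral private value
b = secrets.randbelow(p - 2) + 2    # server's ephemeral private value

A = pow(g, a, p)    # client -> server, sent in the clear
B = pow(g, b, p)    # server -> client, sent in the clear

client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret   # both derive the same shared secret

# TLS derives the symmetric session keys from the shared secret via a
# PRF/HKDF; a single hash here just illustrates that step.
session_key = hashlib.sha256(client_secret.to_bytes(16, "big")).digest()
```

Because a and b are ephemeral and thrown away, recording the traffic and later stealing the server's long-term private key doesn't let you recover client_secret — that's the forward secrecy mentioned above.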


Your comments are correct, but I don't think replacing the 1 brief paragraph in TFA with your 3 would improve the article.


Here's another reason:

If you're providing iframe or Javascript embeds for people to paste on other websites, (like YouTube and Vimeo do for their videos), you better be using HTTPS. If you don't, your embeds won't work on HTTPS sites, as modern browsers block mixed active content.


If you're concerned about the overhead of an SSL handshake, another option is simply using '//embed.example.com' so it automatically chooses http/https based on the main site.


Who is rightfully concerned about the overhead of an SSL handshake in 2016?


While the point of the article is not incorrect, I would say the title is misleading:

#0 and #1 are intrinsically part of HTTPS

#2 and #3 are just because Google and browser vendors simply limit these things to HTTPS. It is not like it can only come with HTTPS.

Finally, #4 is a side effect of #0.

So the only thing really provided BY HTTPS is encryption and authenticity.


I see your point - many of these are side effects of encryption - but the reason I (the author) wanted to focus on some of these things, particularly 2, 3 and 4, is that people don't realise there may be problems they encounter from not using HTTPS, even if privacy wasn't their main concern.

E.g, you run an online video company, and (for some silly reason) you don't care about privacy. You still don't want T-Mobile turning your great HD content into a low resolution blocky mess.

Edit: actually, thinking about it, you're right. I've updated the article to use the word 'privacy' to better separate it from the other aspects on encryption. Thanks datalist!


> #2 and #3 are just because Google and browser vendors simply limit these things to HTTPS. It is not like it can only come with HTTPS.

True for "signaling value", but for powerful browser features that the user grants or denies on a site-by-site basis, there's a critical reason to only allow those for HTTPS. If you allow those features for HTTP, you allow them for anyone who can MITM your connection and spoof those sites. Any random ISP, open wifi, or other entity could trivially MITM a popular site via HTTP and obtain privileges it should not have.


> Any random ISP, open wifi, or other entity could trivially MITM a popular site via HTTP and obtain privileges...

If a user is MITM'ed, it doesn't matter if some features are disabled. The attackers can do anything they damn well please, like directing the user to a compromised HTTPS site to access those features, or to a drive by download to do worse things.


Not true at all. A user's grant of permission applies to a particular site, and HTTPS ensures that nobody can spoof that site. If the user visits an HTTP page and an MITM attack redirects that to some HTTPS page elsewhere, that page will still have to ask the user for permission. But if HTTP pages can ask for permission to access those same features, then an MITM attacker can redirect to a spoofed version of a popular HTTP site that asks for that permission.

Concrete example: geolocation permission. You visit some popular mapping site that uses HTTP, and grant it persistent permission to use your location. Later, you browse something else via HTTP over a Tor connection. In that browsing session, if you saw any permission prompt for geolocation, you'd reject it. However, a malicious exit node could MITM you and use that to determine your location by pretending to be the mapping site, without a permission prompt.

Ditto for getUserMedia (webcam/audio), fullscreen, and any other permission a user can grant or deny on a site-by-site basis.

So, browsers want to drop those features from HTTP, not just to push sites to HTTPS but because the entire concept of granting permission only to a specific site doesn't work with HTTP.


When the user grants permission to a site, how does that permission work? By domain or resolved IP address? If it's by domain and there's no HPKP for it (and IP not cached), the MITM can send a malicious IP through DNS for any domain. The server at malicious IP can fake being whatever domain and the attack will work.


By domain. However, with a requirement for HTTPS, it doesn't matter if MITM hijacks that domain and points it at another IP, because the server on that IP can't present a valid certificate for that domain, so it won't load at all.

Pinning makes that even more secure, but even without pinning, requiring HTTPS raises the bar from a simple MITM to obtaining a fraudulent certificate from a trusted certificate authority (risking a blacklist of that CA in all browsers).


I think that instead of HPKP, the parent poster meant to refer to HSTS.

If the attacker MITMs an HTTPS request, then they have to provide a forged certificate (whether HSTS is set or not).

If the attacker MITMs an HTTP request which would have normally been redirected (e.g. typing "facebook.com" into the URL bar), then HSTS is required. Without it, no forged certificate required.

HTTPS and HSTS should have just been the baseline the web shipped with, but the web shipped before people thought about these things, and arguably before encryption was stable/secure/fast enough to be practical for the whole web.

I'm very happy that the federal HTTPS-policy includes an HSTS requirement for federal agencies: https://https.cio.gov/
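Opting in is just one response header. A minimal stdlib sketch (max-age of one year is the common choice; browsers only honor the header when it arrives over HTTPS, it's served plain here purely for illustration):

```python
# Sketch of a server opting in to HSTS. A browser that sees this header on
# an HTTPS response rewrites future http:// navigations to https:// locally,
# before anything is sent in the clear.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")  # one year
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it: HTTPServer(("127.0.0.1", 8080), HSTSHandler).serve_forever()
```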


Very good point.

However, even if the attacker MITMs an HTTP request that would have been redirected, for a site that didn't use HSTS, they won't get cookies marked "secure", and they won't get permissions that require HTTPS. So the browser mechanism limiting those permissions to HTTPS still works even in that case, though the user and site both have other problems.


They won't get secure cookies [for the site they're attacking], though they could plausibly redirect the user to a similar-looking HTTPS domain under their control and get permissions requiring a secure origin.


True, but they won't get permissions that the user grants on a site-by-site basis, which also means the attack can't silently happen in the background.


Yeah exactly my thoughts.

The cynic in me is thinking "hey, funny that a company selling SSL Certs wrote an article about how great SSL Certs are which doesn't do much more than state side-benefits to one primary benefit of having SSL"

HTTPS gets you privacy, tamper-resistance, and authenticity for the end-user.

This is the one that I often forget: just because something is encrypted doesn't mean it can't be meaningfully manipulated without knowledge of cleartext.


>#2 and #3 are just because Google and browser vendors simply limit these things to HTTPS

Seems like a pretty real and legitimate reason to me.


Seems like a pretty real and legitimate reason to me.

Sure, but HTTPS doesn't provide it. I mean, not really. HTTPS is the flag by which browser manufacturers have toggled the behaviour.


HTTPS doesn't directly provide it, sure, but it's still a benefit you get when using HTTPS.

This is just pedantic.


> T-Mobile degrades video quality

SSL cannot prevent this. The connection can still be slowed down and video quality will be automatically degraded. They don't reencode streams on the fly or anything, just degrade bandwidth and let the applications do the rest. The reencoding and DRM cracking needed to do the former would be an insane investment of man-hours and CPU resources, even if they didn't use SSL.

Additionally a bunch of these "reasons" are just artificial restrictions imposed by web browsers. If it weren't for letsencrypt providing free certs without the ID checks other free cert places require, I'd be complaining about that too.


Yeah, I think in this case T-Mobile was re-encoding anything in HTTP, so the stream appeared to be full quality but wasn't. Simple QoS would keep the quality down and the bandwidth low, but hopefully it would be more apparent that a better quality stream exists but is not being delivered.


Do you have a source for that? The article you linked doesn't say that, and https://www.eff.org/deeplinks/2016/01/eff-confirms-t-mobiles... also confirms that no encoding is taking place.

Separately, the article you linked from techdirt has a major misunderstanding of the EFF piece.

It says "EFF also discovered that T-Mobile's earlier statement that it can't detect encrypted video is also misleading, as the company now claims it can"

But the EFF part quoted is talking specifically about http, not https, as a close reading will reveal.

Also, the EFF test is a bit inconclusive, because https might be slower than http for other reasons. It probably doesn't account for the large differences they observed, but a better study would have tested http/https on other carriers, and on a T-mobile phone with Binge On turned off, to isolate the Binge-on effect.


If you look at the chart about halfway down, the "Normal" (red) is HTTPS, and the "Binge-on" (blue) is HTTP.


Exactly. The part about detecting video is specifically binge on/http, and the techdirt article completely misunderstands it and claims it's talking about encrypted video.


Yes but they could easily add the ability to do this over HTTPS too by using SNI or knowing the CDN IPs.


In other HN threads it seems people figured out that Tmo really was just throttling anything that looked like video and relying on the app or service to switch to lower quality.


With SSL, how can they know it is a video? At best they can either slow down everything that takes a while to download, or they can selectively harass some of the video services, which might get them negative attention in Washington.


A few options:

- The SNI extension can tell you the hostname you're connecting to even over SSL, just like the Host: http header they're already using. This is visible in the clear during the SSL handshake. SSL connections do not mask what you're connecting to at all, nor do they attempt to.

- They selectively pick the video service's CDN IPs and slow them down
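The first point is easy to demonstrate with Python's ssl module: drive a handshake into memory BIOs and the hostname shows up as plain bytes in the ClientHello (the hostname here is made up):

```python
# Generate a TLS ClientHello without touching the network and show that the
# SNI hostname sits in it as cleartext, readable by any on-path observer.
import ssl

ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="video.example.com")
try:
    tls.do_handshake()          # no peer yet, so this stops after queuing the ClientHello
except ssl.SSLWantReadError:
    pass

client_hello = outgoing.read()  # exactly the bytes that would hit the wire
assert b"video.example.com" in client_hello   # hostname in cleartext
```

The rest of the request (path, headers, body) only goes out after the handshake, so it stays encrypted; it's specifically the hostname that leaks.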


You can probably make a pretty good guess based on the behaviour of the stream and the size of the packets involved.

also the ip...


Where is a good site for getting an introduction to what HTTPS covers? Is the URL completely exposed? Does your ISP/third-parties know the domain, the subdomain, the path, and parameters in each URL?


Think of it as occurring between TCP and HTTP. So your browser establishes a network connection to example.org, then negotiates to speak SSL/TLS, and then all the web things happen over the top of that (there are minor exceptions for vhost TLS configs).

So no web traffic should traverse in the clear - the URL, domains, GET data, POST data, headers, etc are protected.

However, that does leave a surprising amount of info: your IP address, the IP address you're connecting to, the host domain you're connecting to if your browser just did a DNS lookup (99% likely it did), how long you communicate with that site, the traffic shape, etc.

As a crude example: your ISP sees a DNS lookup to pornhub.com, followed by a connection to the IP address associated with that name, followed by 3 hours of fairly large packet sizes in regular repetitive patterns. It's highly likely they can extrapolate what's in your browser.
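The DNS leak in that example is easy to see at the byte level: in a classic unencrypted UDP query, the hostname is embedded as plain length-prefixed labels. A toy sketch of building such a query by hand:

```python
# Build a minimal DNS A-record query. The queried hostname sits in the
# packet as cleartext labels, so anyone on the path can read which name
# you are resolving, regardless of whether the site itself uses HTTPS.
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    # header: id, flags (standard query + recursion desired), 1 question
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.org")
assert b"example" in packet and b"org" in packet  # hostname visible on the wire
```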


What I understand is that the URL is encrypted, including the path and parameters, both in the URL and in POST requests. However, the IP address is public, and it's probably quite easy to link an IP address to a website.

There are several websites that offer this service: search an IP and get a list of all known websites that use that IP.

A link to an article from an authority in this field, one that really explains this, would be really welcome!


This used to be correct, but modern SSL clients support SNI (Server Name Indication), which involves sending the exact hostname in the clear to allow vhosts to have different certificates. The path and everything else in the actual HTTP request is still private, though.


IP and port are required for routing, and as such aren't protected. The hostname is (usually) exposed through DNS, and is required for the server to know which certificate to send.

Also, usually (almost) everything is exposed on the first request, before the server has got the chance to redirect to the encrypted version. At that point, the ISP/interceptor could just replace the redirect...


> IP and port are required for routing, and as such aren't protected.

> The hostname is (usually) exposed through DNS, and is required for the server to know which certificate to send.

SNI solves this problem.

> Also, usually (almost) everything is exposed on the first request, before the server has got the chance to redirect to the encrypted version. At that point, the ISP/interceptor could just replace the redirect...

Admittedly, this is somewhat of a culture issue. I haven't read the DNSSEC spec, but it'd be cool to be able to specify redirects using DNS or something. But at the end of the day, after that first connection (or preferably, with HSTS preloaded in your browser) you're never going to fall off HTTPS without scary warnings (assuming the site admin has set up TLS properly).


Only the IP of the host server has to be known in HTTPS; the path and other things are sent over an encrypted channel. However it's unlikely you're using Tor or DNSSEC, so the DNS lookup your browser does is also done in the clear, and the domain you're accessing is probably exposed as well.


>T-Mobile re-encodes videos, degrading quality

The link goes to an article that says T-mobile didn't reencode at all, but was throttling.

I don't think there's any evidence that T-mobile re-encoded video from non binge-on providers, so they don't belong in that list. (And if someone's a partner, they want to help T-mobile do whatever it is they're doing, so using https to prevent that makes no sense.)


HTTPS is especially important for Tor users. With tor, all HTTP requests pass through a random exit node, who may be malicious. If the site is not using HTTPS, then the exit node can inject malicious javascript into the page. By deploying HTTPS, you are also helping tor users to browse the web more securely.


> This is effective enough to the point where nefarious carriers have asked users to install configuration profiles to add the carrier's root certificates, which are used to modify and inspect what should be private traffic.

It makes me wonder if the NSA isn't also whispering "that would be nice" into the carriers ears.


Let's imagine I am the swimming pool around the corner or the guy wanting to go to the swimming pool.

Why would we need https?

Who is gonna tamper with the "advertisement-less" schedules of the swimming pool from rue st Hubert (montréal)? Why would they?

Why would the swimming pool care to pay X$/month for this? And why could I care that anyone knows I am looking at the opening and closing of my swimming pool?

I am an old man, I don't watch porn or pick up on girls anymore or think of revolutions.

I clearly need no https ... Why should I pay directly or indirectly for it?


Who is going to tamper with nonsensitive pages? People injecting, e.g., malware-loading "your computer has a virus" popups.


Also provides heartache and headaches to get working properly.


I can only add to the chorus here. Recently used LE to generate certs for my servers running on FreeBSD, which to date is really only semi-supported by LE.

To my surprise, getting certs was quite easy. Just needed to run the LE utility, answer a few questions, like the domains to be included, and that was it. Took a few seconds at most.

The only other task was pointing nginx servers to the LE certs location, edited the config files in a minute or so. Of course that only needs to be done one time.

Using the early-stage LE tools is already quite painless, I expect fully automating the process will be developed in the coming months. But even with the "primitive" tools available now, it's hardly a burden spending a few seconds every 3 months to get the tremendous benefit provided.


And you can (should) probably set up a cron job to do that every 2 months!


It's easy to get a good configuration for popular servers from https://mozilla.github.io/server-side-tls/ssl-config-generat... and certificates are really easy to get from Let's Encrypt, both with the official client or others such as acme-tiny or simp_le if you don't like the official one. Headaches are not an excuse any more.


How is HTTPS difficult to set up?


With AWS's new certificate manager it takes about 3 clicks and 2 minutes. Otherwise I agree, it's a pain in the ass.


It takes about 10 minutes to update nginx's configurations to use sane defaults with TLS and CSP to boot.


Fair point. I usually terminate on an ELB.


and significantly lower ad revenue for ad-supported sites.


How so?


Some of the ad networks don't have https support yet. Though this is really the fault of the ad networks, and since they want customers, will probably be resolved soon.


That honestly sounds like they aren't interested in being in business for long, then. I mean, how can you not have SSL support when major businesses will be your customers?


The IAB is promoting "LEAN" ads: Light, Encrypted, Ad choice supported, Non-invasive. Given that all of these recommendations go against the current trajectory of advertising, I doubt they will be adopted any time soon.

http://www.iab.com/news/lean/


Funny that this article about HTTPS gets a Red X on the https in my Chrome.


Hi Jim! We're confident the site is properly configured https://www.ssllabs.com/ssltest/analyze.html?d=certsimple.co..., but if you email me mike@certsimple.com with details of any errors I'll happily check it out.


Yeah, it also provides stable profit to certification "authorities".


As _ikke_ mentioned, it's as if you entirely missed the HN discussions about LE.

https://en.wikipedia.org/wiki/Let's_Encrypt

You can also search HN and GitHub to discover the numerous features and tools.


Let's Encrypt is essentially shareware. The basic features are available for free, but if you want something better than that, such as wildcard certificates, you must pay up. Not to mention that it's cross-signed with IdenTrust, a commercial CA, and since everyone on the web has to use this service to get HTTPS (there aren't other decent alternatives), this is a good way for IdenTrust to boost its numbers and be able to claim it's "the biggest CA in the world" or something. There's no way to choose a different CA with Let's Encrypt.


- Why should I care about the backing CA as long as all browsers accept it?

- You can register up to 100 subdomains in a certificate and up to 5 certificates per domain/week. With their certificate expiry set at 3 months, that comes out to about 6,000 subdomains you can register. I know there are some setups where every user gets a subdomain. But I'd wager that less than 1 in 10000 LE users would run into these limits. (which may actually be lifted when the beta period ends).

- LE doesn't offer any paid certificates so it's really hard to spin this as a shareware-style incentive to spend money with them.


>Why should I care about the backing CA as long as all browsers accept it?

https://en.wikipedia.org/wiki/DigiNotar

>You can register up to 100 subdomains in a certificate and up to 5 certificates per domain/week.

What if I want dynamic subdomains for my website? These limitations are absurdly low.

>LE doesn't offer any paid certificates

They don't, but their partners do. If HTTPS becomes ubiquitous and necessary to host a website, they stand to gain a lot.


> Let's Encrypt is essentially shareware. The basic features are available for free, but if you want something better than that, such as wildcard certificates, you must pay up.

First off, it's free software and it's an open community. LetsEncrypt is trying to get wildcard certificates, but just because they haven't yet convinced CAs to allow them to have wildcard certificates doesn't make it anywhere close to shareware bullshit.

> Not to mention that it's cross-signed with IdenTrust, a commercial CA, and since everyone on the web has to use this service to get HTTPS (there aren't other decent alternatives), this is a good way for IdenTrust to boost its numbers and be able to claim it's "the biggest CA in the world" or something. There's no way to choose a different CA with Let's Encrypt.

This is FUD. Why do you care about IdenTrust beating their chest? Why do you want to use another backing CA? What possible benefit does that provide? The whole CA thing is fucked anyway (in terms of the massive centralisation), so changing the backing CA won't do much.

Also, to be fair, they did pull the trigger on LE, so it's not fair to get all pissy when they take credit for their decision.


I just set up a free ssl cert via AWS[0] yesterday for serving static assets and securing an API Gateway endpoint (previously was using a free Lets Encrypt[1] cert). So hopefully there will be more ways around certification authorities who charge.

0: http://aws.amazon.com/certificate-manager/

1: https://letsencrypt.org/


Not with initiatives like letsencrypt.


Chrome and Firefox should remove the root certificate of any provider that charges for SSL certificates. Not because 10 usd is much, but because it prevents the web from being open to people in third-world countries or those who don't want to (or can't) run their own servers (and so can't use lets encrypt).


You don't need to run your own server to use Let's Encrypt. You probably do want some sort of automated method for updating the cert due to the low expiration time (though it isn't mandatory), but that can be done in many ways such as a desktop application, or just having the shared hosting service add built-in support for Let's Encrypt, as Dreamhost has already done.



