Hacker News

How about an additional attribute named "global", "shared", "public", "use-global-cache", share-with="*", etc. that the developer can use to opt in to the behavior?

A site operator would only opt in to the behavior for assets that are not unique to the site.

A second idea would be to wait until several unique domains had requested the asset before turning on the behavior for that asset. (By unique domain I specifically mean the part of the domain that's written in black text in the URL bar, excluding subdomains that are in gray.)
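The second idea can be sketched as a small policy object. This is a minimal illustration, not a real browser implementation; the class and method names (`SharedCachePolicy`, `record_request`, `is_shared`) are made up, and correctly extracting the registrable domain (the "black text" part, i.e. eTLD+1) would in practice require the Public Suffix List.

```python
# Hypothetical sketch: only share a cached asset across sites once enough
# distinct registrable domains have requested the same hash.

class SharedCachePolicy:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.requesters = {}  # asset hash -> set of registrable domains

    def record_request(self, asset_hash, registrable_domain):
        # registrable_domain is assumed to be the eTLD+1 (subdomains stripped)
        self.requesters.setdefault(asset_hash, set()).add(registrable_domain)

    def is_shared(self, asset_hash):
        # Cross-site sharing turns on only after the threshold is met
        return len(self.requesters.get(asset_hash, set())) >= self.threshold

policy = SharedCachePolicy(threshold=3)
policy.record_request("sha384-abc", "example.com")
policy.record_request("sha384-abc", "news.ycombinator.com")
print(policy.is_shared("sha384-abc"))  # False: only two unique domains so far
policy.record_request("sha384-abc", "mozilla.org")
print(policy.is_shared("sha384-abc"))  # True
```

As the reply below this comment points out, the threshold only raises the attacker's cost to registering a handful of cheap domains.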

These are the two easiest-to-implement solutions I can think of.



  How about an additional attribute named "global", 
  "shared", "public", "use-global-cache", share-with="*", 
  etc. that the developer can use to opt in to the behavior?
Allowing people to opt in to a cache-poisoning vector seems like a bad idea.

  A second idea would be to wait until several unique domains 
  had requested the asset before turning on the behavior for 
  that asset.
This just raises the bar for a cache poisoning attack from "owns one domain name" to "owns a couple". Some gTLDs are $0.99 per year, or free. (The user would only have to visit a single page that has a dozen other sites open in invisible iframes.)


Could someone explain how cache poisoning would work here? The hash is already being verified; I assume you would not cache a file if the hash doesn't match, the same way you would reject a file from the CDN if the hash doesn't match.


There's no hash to verify. A bad site preloads a bad script with hash=123. On the good site, XSS injects a script with src=bad.js hash=123. The browser starts to make a request: GET https://good.com/bad.js. But the hash-cache jumps in and says, "Wait, this was requested from a script tag with hash=123. I already have that file. No need to send the request over the network." bad.js now executes in the context of good.com.

If the hash-cache wasn't there, then good.com would have returned a 404. There's no hash collision because the request is completely elided (which is a large part of the perf attractiveness).
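The attack flow above can be simulated in a few lines. This is a sketch under the thread's assumptions (a cache keyed purely by the hash declared in the script tag; `load_script` and `fetch_good_com` are hypothetical names), showing that the victim origin's server is never consulted:

```python
# Hypothetical hash-keyed shared cache, as described in the comment above.
hash_cache = {}

def load_script(src, declared_hash, fetch):
    """fetch(src) stands in for a real network request."""
    if declared_hash in hash_cache:
        # Cache hit: the request is elided entirely, and whatever was
        # stored under this hash runs in the referencing page's context.
        return hash_cache[declared_hash]
    body = fetch(src)
    hash_cache[declared_hash] = body
    return body

# 1. On bad.example, the attacker preloads malicious code under hash=123.
load_script("https://bad.example/bad.js", "123", lambda url: "evil()")

# 2. On good.com, XSS injects <script src=bad.js hash=123>. good.com has
#    no such file and would return a 404 -- but fetch is never called:
def fetch_good_com(url):
    return "404 Not Found"

body = load_script("https://good.com/bad.js", "123", fetch_good_com)
print(body)  # "evil()" -- now executing in good.com's context
```

Note the cache never compares the stored body against `src` or the serving origin; the declared hash alone is the key, which is exactly the poisoning vector.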


Thanks for spelling it out so nicely. I was having a bit of trouble coming up with the scenario too.

And as for the "collisions are unreasonable to expect people to generate", remember the use case: these are going to be extremely long-lived hashes.

With cache poisoning, once you find a collision against jQuery 2.1.1 (to beat the example horse), you can continue to use it against all requests for jQuery 2.1.1. And we know how widely-applicable targets of cryptographic opportunity typically fare against adversaries with substantial brute-force processing resources...


Once SHA-2 is broken, browsers can simply stop treating those hashes as safe. The spec suggests browsers accept nothing weaker than SHA-384, which rules out MD5.

The impact of SHA-2 failing would be far, far larger than poisoning jQuery.


Possibly this could be worked around by having the good site serve its own idea of what the hash should be. That would be much smaller than the actual resource.


Well, that's what you do, via the HTML. But the concern is that CSP treats the served HTML as untrusted. You could fix it by asking the server, but at that point you're making a request to the server for the resource, killing some of the point of caching. And it seems wrong to have "if-not-hash" headers as part of this proposal; that'd be better off as an improvement to HTTP caching overall. But putting the verified hashes in the HTTP headers is fine, since CSP already relies on the headers having integrity.
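The last point can be sketched as a gate in front of the hash-cache. This is purely illustrative (the `Verified-Script-Hashes` header name is invented, not from any spec): a cache hit is honored only if the declared hash also appears in the page's own HTTP response headers, which, unlike the HTML body, CSP already treats as trustworthy.

```python
# Hypothetical check: only allow a hash-keyed cache hit when the hash
# was also asserted via the page's HTTP headers (header name invented).

def may_use_hash_cache(declared_hash, response_headers):
    allowed = response_headers.get("Verified-Script-Hashes", "").split()
    return declared_hash in allowed

headers = {"Verified-Script-Hashes": "sha384-goodlib"}
print(may_use_hash_cache("sha384-goodlib", headers))  # True: server vouched
print(may_use_hash_cache("123", headers))  # False: XSS-injected hash, no header
```

Under this gate, the XSS scenario above fails: the injected hash=123 is never listed in good.com's headers, so the poisoned cache entry is never consulted.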



