This is interesting, because the size of a file's encrypted chunks now leaks information about the file's plaintext. I suppose you have some minimum chunk size, and that's one way to keep from leaking too much information as a fraction of the overall file size. But if a file is modified many times, it seems to me that you'd have to be very careful not to leak a substantial amount of data to a clever attacker.
Have you thought about how to quantify this tradeoff?
I suppose you could pad each encrypted chunk so they're all the same size, but then if you don't want to waste a ton of space you'd have to restrict your chunking algorithm to output chunks with relatively similar sizes, at which point you lose some of the benefits of chunking.
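To make the tradeoff concrete, here's a rough sketch of bucket padding (the bucket size and zero-padding are just illustrative choices, not anything from an actual implementation; a real scheme would also need to record the true length so padding can be stripped on decryption):

```python
import os

BUCKET = 64 * 1024  # hypothetical padding granularity (64 KiB)

def pad_to_bucket(chunk: bytes) -> bytes:
    """Pad a chunk with zeros up to the next BUCKET multiple, so the
    encrypted length reveals only a coarse size bucket, not the exact
    plaintext size. (A real scheme must also encode the true length.)"""
    padded_len = -(-len(chunk) // BUCKET) * BUCKET  # ceiling division
    return chunk + b"\x00" * (padded_len - len(chunk))

chunk = os.urandom(100_000)        # ~97.7 KiB of example data
padded = pad_to_bucket(chunk)
print(len(chunk), len(padded))     # 100000 131072
```

The expected overhead is about half a bucket per chunk, which is why you'd be pushed toward a narrow chunk-size range: the narrower the range, the smaller the bucket needed to hide it, but the less the chunker can exploit content boundaries.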
The chunking is done using parameters generated from a secret key, and I haven't been able to see any way for it to be computationally feasible to extract meaningful information from the resulting block sizes.
That doesn't mean that it's impossible, of course; just that it would require someone smarter than me. ;-)
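For anyone curious what "parameters generated from a secret key" can look like, here's a toy sketch of one common approach (a gear-hash chunker whose lookup table is derived from the key; I'm not claiming this is the actual scheme used here, just illustrating the idea):

```python
import hashlib

def gear_table(key: bytes) -> list[int]:
    """Derive a 256-entry gear table from a secret key, so that chunk
    boundaries (and hence chunk sizes) depend on the key rather than
    being publicly computable from content alone."""
    return [
        int.from_bytes(
            hashlib.blake2b(bytes([i]), key=key, digest_size=8).digest(), "big"
        )
        for i in range(256)
    ]

def chunk_sizes(data: bytes, key: bytes,
                mask: int = (1 << 13) - 1, min_size: int = 2048) -> list[int]:
    """Gear-hash chunking: emit a boundary when the rolling hash's low
    bits are all zero (average chunk ~8 KiB with this mask)."""
    table = gear_table(key)
    sizes, h, start = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + table[b]) & 0xFFFFFFFFFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == 0:
            sizes.append(i - start + 1)
            start = i + 1
            h = 0
    if start < len(data):
        sizes.append(len(data) - start)
    return sizes
```

An attacker who sees only the chunk sizes would have to reason about boundary placement without knowing the table, which is what makes extracting plaintext information from the sizes hard (though, as you say, not provably impossible).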