"Constant time" algorithms isn't really about the time it takes. It's more important that they exhibit no observable side effects of a branch. This can be power usage, memory usage as well as time.
For instance, a multiply might take slightly more power than an add instruction and that can be monitored.
If you think these attacks are unreasonable: there was recently a post on HN about using the LED of a smart card reader to detect fluctuations in power usage and recover information about the secret key. These attacks are real.
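To make the "no observable side effects of a branch" point concrete, here's a minimal sketch of the branch-free style this implies — a constant-time byte comparison (the helper name is mine; in real Python you'd use hmac.compare_digest, which does the same thing):

```python
def constant_time_eq(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without data-dependent branches."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulate mismatched bits; no early exit
    return diff == 0
```

Unlike a naive loop that returns at the first mismatch, this touches every byte no matter where the strings differ, so the execution trace (time, and ideally power) doesn't depend on the secret.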
People keep finding serious architectural leaks in CPUs like Spectre/Meltdown, which makes me question whether any constant-time implementation can really be without observable side effects.
And one protection against that is to map the key into another space using a random (or close enough) key for that transformation, perform the calculation homomorphically, then transform back.
This is often too expensive, but it does come up as a possibility in some zero knowledge protocols.