>> So much software depends on implicit overflow...
I use it all the time in embedded systems. It's very handy to represent angles as signed 16-bit integers with a scaling of 2^15 = pi radians. The "overflow" naturally wraps around from pi to -pi right where you want it. I do consider this a bit of a special case, but it's one that I've come to rely on where it makes sense.
I like Zig's approach here: normal arithmetic has undefined overflow behavior, but if you want wrapping there are the wrapping arithmetic operators +% -% *% and ÷%, and I think saturating arithmetic operators (+| -| *| ÷|) just landed in master.
Dedicated operators are OK, but IME the right semantics are value-level, e.g. one value should wrap while another saturates. In that sense, I think Rust's `Wrapping<T>` is really nice, though the ergonomics are not ideal.
> It's very handy to represent angles as signed 16bit integers with a scaling of 2^15 = pi radians.
And it is incorrect, as *signed* integer overflow is undefined behavior (at least in C++). It's only defined for unsigned integers. It's not a theoretical bug: modern compiler optimizers assume that overflow never happens.
You can build with `-fsanitize=undefined` and then run the program to find all those bugs.
>> And it is in incorrect as signed integer overflows are undefined behavior (at least in c++).
I'm well aware that it's undefined behavior, but every compiler I've used it on produces "correct" results. The reason it's undefined is probably related to the prevalence of computers that did not use 2's complement arithmetic 40 years ago. In practice it's very reliable today. It seems like they could eliminate this piece of UB from the standard simply by defining it to behave the way everyone implements it.
You are probably well aware, but this is a very contentious approach. The problem isn't that the math is going to be changed so that the wraparound doesn't occur, but that the compiler is (eventually) going to do a larger-scale optimization that silently removes your check for whether a wraparound has occurred. Compiler writers argue that such optimizations are necessary to get good performance, so it's unlikely to ever become defined behavior.
While you might continue to get away with it, and while I agree with you that the standard should be changed, I personally think you are being extremely foolish by relying on this sort of UB in anything other than toy programs that you expect to compile once and verify for correctness. The fix of using "unsigned" for cases where you expect wraparound is so simple that you should switch your approach.
Embedded systems compilers already have a loose regard for the compiler spec. If you're only building for one system, you'll only use one compiler and the reality of what it does beats the theory of what it should do.
>> The problem isn't that the math is going to be changed so that the wraparound doesn't occur, but that the compiler is (eventually) going to do a larger scale optimization that silently removes your check to see whether a wrap around has occurred.
I don't need to check for wraparound; that's the entire reason for using that particular scaling for an angle measure. Same for calculating signed changes in angle: `diff = a - b` just works.
Compilers have taken advantage of undefined behavior in signed overflow for many years now. So unless you use a very ancient compiler, your code is buggy. You are just lucky if it "appears" to work. What happens is that adding signed integers will appear to work most of the time... except not always.

For example, your compiler may detect that 2 numbers A and B are positive (they may come from constants, or there was a previous 'if' that checked that they are positive), and when you multiply `A * B` your compiler will assume that the result is also positive, because multiplying 2 positive numbers is always positive if we assume that overflow is not allowed. So if you do `if (A * B < 0)` the compiler can and probably will remove it entirely (dead code) when turning optimizations on. Here it seems quite obvious, but there are many other, less obvious ways in which the optimizer will break your code if you rely on undefined behavior. Don't rely on UB in your code.
About embedded projects never upgrading their compiler: this is just wrong in general anyway.
In your case, use uint16_t rather than int16_t if you want defined wrapping behavior at 16 bits.
Hardware converged on two's complement arithmetic for signed arithmetic decades ago. C/C++ has really polluted people's thinking in a profound and devious way.
The real reason that C++ continues to insist on undefined behavior is because compilers optimize code in ways that are incorrect w.r.t. overflow and compiler writers have too much power in C++ standardization. What is the value of such optimizations? Mum.
This particular UB is used for optimisation in the following manner: when you write a loop that iterates over an array using a signed index variable, the compiler is allowed to assume the index never overflows, so it doesn't have to account for wraparound (and can, e.g., widen a 32-bit index to a 64-bit pointer stride). Apparently doing it 'properly' would cost something like a 6% performance loss in such code.
Even so, I feel the proper solution would be to use the correct type for array indexing (either one that has the same word size as a pointer, in which case it cannot overflow, or just a special index_t type defined for this purpose). Forcing everything else to have to put up with UB in the least expected places is not, in my mind, a good way to go.
Unfortunately I don't think this battle can be won. They won't even remove the UB from functions like tolower(), even though it would cost precisely nothing...
I've heard many variations of the above, and the solution to almost all of them is loop versioning as appropriate to the particular case. In this case, a single check before the loop would suffice, and then that 6% disappears.
The battle can't be won because of a perspective difference. It's really difficult to communicate the importance of a sane programming model when a sane programming model has not been a priority for C or C++ since their respective inceptions. They've prioritized low-level manipulations and performance over everything else and ironically painted themselves into silly corners like this one; even when you want the hardware instruction that does signed two's complement arithmetic, you can't get it, because of the other firmly-held belief that compilers are never wrong.