This is a genuine question: do you really believe that the costs of switching away from astronomical time for measuring something inherently astronomical, like the length of a year - costs that seem to me obvious, much greater, and potentially more dangerous - are smaller than the costs of removing the occasional time-handling bug in software?
Absolutely.
As http://queue.acm.org/detail.cfm?id=1967009 notes, fear of leap second bugs ALREADY causes many factories to schedule downtime around leap seconds, because the potential consequences of not doing so are so serious. It is only a matter of time until someone who has such a bug doesn't realize it, doesn't schedule downtime, and suffers the consequences.
And what's the consequence of eliminating leap seconds? Over the next few decades the Earth and our clocks will drift out of alignment by about 1% of the amount that most of us are already out of alignment simply because time zones are an hour wide. Very few people will notice.
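To put rough numbers on that 1% figure (a back-of-envelope sketch; the one-leap-second-per-18-months rate is my own assumption based on the historical average, not an official projection):

    # Back-of-envelope arithmetic behind the ~1% claim (assumed rates):
    # leap seconds have accumulated at roughly one per 18 months on average.
    drift = 40 / 1.5            # ~27 seconds of drift over 40 years

    # Time zones are an hour wide, so local clocks already sit up to
    # 30 minutes away from local solar time for most people.
    zone_offset = 30 * 60       # 1800 seconds

    print(drift / zone_offset)  # ~0.015, i.e. on the order of 1%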
Even in your example, Linux machines didn't actually crash - some processes spinlocked. And it wasn't even all Linux machines: if you were running a mildly old kernel (say, the one that shipped with Debian etch and its derivatives), nothing happened at all.
If the bug had been introduced during a longer gap between leap seconds - not long ago there was one that lasted seven years - its impact would have been felt far more widely. And while it technically wasn't a crash, many websites did suffer outages because internal services stopped responding. In common usage, that was a crash.
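For illustration, here is a sketch of the general hazard (hypothetical Python, not the actual kernel code that failed): timing logic keyed off the wall clock misbehaves when UTC repeats or steps back a second, while a monotonic clock is immune:

    import time

    # Fragile: the wall clock can repeat or step backwards across a leap
    # second (or an NTP correction), so this busy-wait can run longer than
    # intended - the same general failure mode as processes stuck spinning.
    deadline = time.time() + 5.0
    while time.time() < deadline:
        pass  # spin at 100% CPU until the wall clock catches up

    # Robust: a monotonic clock never goes backwards and ignores leap
    # seconds entirely, so the timeout behaves as expected.
    deadline = time.monotonic() + 5.0
    while time.monotonic() < deadline:
        time.sleep(0.1)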
Beyond that, there is no shortage of time standards that are both based on SI seconds and free of leap seconds - Terrestrial Time, International Atomic Time, GPS Time come to mind...
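And converting between those scales is just a matter of fixed offsets. A minimal sketch (the 35 s TAI-UTC value is the one in force after the June 2012 leap second; real code has to read it from a leap-second table, since it grows by one with each leap second):

    # Exact, fixed relationships between the leap-second-free scales:
    #   TT  = TAI + 32.184 s
    #   GPS = TAI - 19 s
    # Only TAI-UTC changes, growing by one second per leap second.
    TAI_MINUS_UTC = 35.0  # post-June-2012 value; real code uses a table

    def utc_to_tai(utc): return utc + TAI_MINUS_UTC
    def utc_to_tt(utc):  return utc_to_tai(utc) + 32.184
    def utc_to_gps(utc): return utc_to_tai(utc) - 19.0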
UTC is the standard for time in C, in languages derived from C (e.g. Java), and in languages written in C (e.g. Perl, Python, PHP). It is also the standard in every Unix operating system (including OS X), in POSIX, in Linux in all its variations (including Android), and has even been adopted by Microsoft. It is likewise the standard in widely used protocols built on top of those systems, like HTTP, and inside any software built on top of them, such as web browsers.
Yes, there are other definitions of time that you can use. But UTC as published by NIST and distributed via NTP is "the time" as far as the computer world is concerned. If you use anything else, people will ask - repeatedly - why you have the wrong time.
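Notably, POSIX's notion of UTC already ignores leap seconds: time_t counts seconds since the epoch as if every day were exactly 86400 seconds long. You can see it in a couple of lines of Python:

    import calendar

    # A leap second was inserted at the end of 2012-06-30, so these two
    # midnights are really 86401 SI seconds apart - but POSIX time_t,
    # which pretends every day has 86400 seconds, says 86400.
    before = calendar.timegm((2012, 6, 30, 0, 0, 0))
    after  = calendar.timegm((2012, 7, 1, 0, 0, 0))
    print(after - before)  # 86400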