In the small, it's still a meritocracy. A patch like this is obviously correct and I expect it to get in on the first try (maybe with a formatting fix by the maintainer).
For large works, the burden shifts, since you are increasing the maintenance load. Now we have the question of who will do the future work, and that requires judgement of the importance of the work and/or the author, and hence is a fundamentally political question.
Either you want fixed point for your minimum unit of accounting or you want floating point because you’re doing math with big / small numbers and you can tolerate a certain amount of truncation. I have no idea what the application for floating point with a weird base is. Unacceptable for accounting, and physicists are smart enough to work in base 2.
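The truncation trade-off is easy to see with Python's `decimal` module (a sketch; Python's Decimal is arbitrary-precision software decimal, not hardware decimal128, but the rounding behavior illustrates the point):

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so sums drift:
binary_sum = 0.1 + 0.2                          # 0.30000000000000004
# Decimal arithmetic represents 0.1 and 0.2 exactly, so the sum is exact:
decimal_sum = Decimal("0.1") + Decimal("0.2")   # Decimal('0.3')
```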
I'm pretty confident that dfp is used for financial computation, both because it has been pushed heavily by IBM (who are certainly very involved in the financial industry) and because many papers describing dfp use financial applications as their motivating example. For example, this paper: https://speleotrove.com/decimal/IEEE-cowlishaw-arith16.pdf
> This extensive use of decimal data suggested that it would be worthwhile to study how the data are used and how decimal arithmetic should be defined. These investigations showed that the nature of commercial computation has changed so that decimal floating-point arithmetic is now an advantage for many applications.

> It also became apparent that the increasing use of decimal floating-point, both in programming languages and in application libraries, brought into question any assumption that decimal arithmetic is an insignificant part of commercial workloads.

> Simple changes to existing benchmarks (which used incorrect binary approximations for financial computations) indicated that many applications, such as a typical Internet-based ‘warehouse’ application, may be spending 50% or more of their processing time in decimal arithmetic. Further, a new benchmark, designed to model an extreme case (a telephone company’s daily billing application), shows that the decimal processing overhead could reach over 90%.
Wow. OK, I believe you. Still don’t see the advantages over using the same number of bits for fixed point math, but this definitely sounds like something IBM would do.
Edit: Back of the envelope, you could measure 10^26 dollars with picodollar resolution using 128 bits
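That envelope math checks out (a sketch; 2^128 counts distinct integer values in a 128-bit fixed-point representation, ignoring any bits a real format would spend on sign or exponent):

```python
# 128-bit fixed point counting picodollars:
distinct_values = 2 ** 128               # ~3.4e38 representable integers
picodollars_per_dollar = 10 ** 12
max_dollars = distinct_values // picodollars_per_dollar
# max_dollars is ~3.4e26: over 10^26 dollars at picodollar resolution
```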
Decimal128 rounds exactly under decimal rounding rules and preserves trailing zeros.
I don’t think Decimal64 has the same features, but it has been a while.
But unless you hit the limit of 34 decimal digits of significand, Decimal128 will work for anything you would use fixed point for, and much faster if you have hardware support, as on the IBM CPUs or some of the SPARC CPUs from Japan.
OLAP aggregate functions, for example, are one such application.
> I don’t think Decimal64 has the same features, but it has been a while.
Decimal32, Decimal64, and Decimal128 all follow the same rules, they just have different values for the exponent range and number of significant figures.
Actually, this is true for all of the IEEE 754 formats: the specification is parameterized on base (though only 2 or 10 is possible), max exponent, and number of significant digits, although there are a number of issues that only exist for IEEE 754 decimal floating-point numbers, like the exponent quantum or the BID/DPD encodings.
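Python's `decimal` contexts can mimic that parameterization (a sketch; the digit counts and exponent ranges below are the IEEE 754 decimal interchange values, but Python applies them in software, not via the hardware formats):

```python
from decimal import Context, Decimal

# (precision digits, Emax, Emin) for the three decimal interchange formats:
FORMATS = {
    "decimal32":  Context(prec=7,  Emax=96,   Emin=-95),
    "decimal64":  Context(prec=16, Emax=384,  Emin=-383),
    "decimal128": Context(prec=34, Emax=6144, Emin=-6143),
}

# Same rules, different parameters: 1/3 rounds to each context's precision.
third64 = FORMATS["decimal64"].divide(Decimal(1), Decimal(3))
third128 = FORMATS["decimal128"].divide(Decimal(1), Decimal(3))
```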
You are correct; the problem is that Decimal64 has 16 digits of significand, while items like apportioned per-call taxes need to be calculated to six digits past the decimal point before rounding, which requires about 20 digits.
Other calculations, like interest rates, take even more, and COBOL requires 32 digits.
As the Decimal128 format supports 34 decimal digits of significand and provides exact decimal rounding (in hardware or emulation), it can meet that standard.
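A rough sketch of that kind of calculation with Python's `decimal` module (the charge and rate here are invented for illustration; the point is that the exact product needs 17 significant digits, already past Decimal64's 16):

```python
from decimal import Decimal, ROUND_HALF_EVEN, getcontext

getcontext().prec = 34   # decimal128-level precision

# Hypothetical aggregate charge and apportionment rate:
charge = Decimal("1234567890123.45")   # 16 significant digits
rate = Decimal("0.0375")
exact = charge * rate                  # 46296295879.629375: 17 digits
# Carry six digits past the decimal point, then round:
tax = exact.quantize(Decimal("0.000001"), rounding=ROUND_HALF_EVEN)
```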
While Decimal128 is more complex, requiring roughly 15–20% more silicon in the ALU plus a larger data size, it is more efficient for business applications than arbitrary-precision libraries like BigNum.
I want Signal to act as a transport bus. In particular, I want to give certain contacts permission to ask my phone for its location, so I can give my wife that ability without sharing it with Google.
Signal has solved the identity part, now encourage others to build apps on it.
(2fa via Signal would be better than SMS, too, though I know this may be controversial!)
I'm not seeing how you could draw that conclusion. The more likely explanation is that they are telling people not to build apps around it (and thus, I assume, the APIs aren't well designed for adoption by other apps).
> This repository is used by the Signal client apps (Android, iOS, and Desktop) as well as server-side. Use outside of Signal is unsupported.
Also, where are the test vectors? Because when I implement this, that's the first thing I have to write, and you could save me a lot of work here. Bonus points if it's in JSON and UTF-8 already, though the invalid UTF-8 in an RFC might really gum things up: hex encode maybe?
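As a sketch of what hex-encoded vectors could look like (the field names here are invented for illustration, not from any Signal spec):

```python
import json

# Hypothetical test-vector entry: hex-encode the raw bytes so the JSON
# file stays valid UTF-8 even when the input bytes are not valid UTF-8.
vector = {
    "name": "invalid-utf8-input",
    "input_hex": b"\xff\xfe\x00data".hex(),
}
blob = json.dumps(vector)                  # always valid UTF-8 JSON
recovered = bytes.fromhex(json.loads(blob)["input_hex"])
```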
Andrew Tridgell's KnightCap did this differently: it's a network chess server, and it would dump its data to a file and re-exec. The trick is that it kept the (network) fds open for zero downtime. IIRC he used a Perl script called datadumper to generate the code to marshal/demarshal the structures.
This has the advantage that restarts can be handled fairly seamlessly too (though there will be reconnections then, of course).
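A minimal Python sketch of the same trick (pickle and a socketpair stand in for KnightCap's generated marshalling code and its network fds; the function names are mine, not KnightCap's):

```python
import os
import pickle
import socket
import tempfile

def dump_for_reexec(state, sock):
    """Serialize state to disk and mark the socket inheritable, so it
    survives an exec() without the peer ever seeing a disconnect."""
    path = os.path.join(tempfile.mkdtemp(), "state.pickle")
    with open(path, "wb") as f:
        pickle.dump(state, f)
    os.set_inheritable(sock.fileno(), True)   # fd stays open across exec
    return path, sock.fileno()

def restore_after_reexec(path, fd):
    """In the re-exec'd process: reload state and re-adopt the live fd."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state, socket.socket(fileno=fd)    # wrap the existing fd, no dup
```

The restart itself would then be something like `os.execv(sys.executable, [sys.executable, __file__, path, str(fd)])`, with the new process calling `restore_after_reexec` on startup.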
Eh, you could probably get away with it if you use BearSSL[0]. The only difficulty would be:
> These elements can be allocated anywhere in (writable) memory, e.g. heap, data segment or stack. They must not be moved while in use (they may contain pointers to each other and to themselves).
Which you could probably get around by just keeping track of offsets and using mmap.
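One way to sketch the offset idea (Python; a pointer-free layout where links are offsets from the buffer start rather than absolute addresses, so the blob is position-independent and safe to mmap anywhere):

```python
import struct

END = 0xFFFFFFFF  # sentinel offset marking the end of the chain

def flatten(values):
    """Pack a linked list as (next_offset, value) records. Links are
    offsets relative to the buffer start, not raw pointers, so the
    buffer remains valid at whatever address it is mapped."""
    buf = bytearray()
    for i, v in enumerate(values):
        nxt = (i + 1) * 8 if i + 1 < len(values) else END
        buf += struct.pack("<II", nxt, v)
    return bytes(buf)

def walk(buf):
    """Follow the offset chain and recover the values."""
    out, off = [], 0
    while off != END and buf:
        nxt, v = struct.unpack_from("<II", buf, off)
        out.append(v)
        off = nxt
    return out
```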