Probably the worst piece of advice in the article:
> 6.4 – Avoid mathematical notations in your variable names
Let’s say that some quantity in the algorithm is a matrix denoted A. Later, the algorithm requires the gradient of the matrix over the two dimensions, denoted dA = (dA/dx, dA/dy). Then the names of the variables should not be “dA_dx” and “dA_dy”, but “gradient_x” and “gradient_y”. Similarly, if an equation system requires a convergence test, then the variables should not be “prev_dA_dx” and “dA_dx”, but “error_previous” and “error_current”. Always name things for what physical quantity they represent, not whatever letter notation the authors of the paper used (e.g. “gradient_x” and not “dA_dx”), and always express the more specific to the less specific from left to right (e.g. “gradient_x” and not “x_gradient”).
Especially when you're just starting out, creating your own naming scheme just creates more opportunities to do something wrong.
Have to disagree. Those derivatives are a bad example of a good point. Most mathematical symbols aren't representable in code, at least until we're able to use unicode identifiers and sub/superscripts in every language. When you're forced to write 'theta' instead of θ then you might as well just say 'angle' so your future maintenance programmer will have an easier time of it.
An equation and an algorithm might achieve the same result but the ways they get there are so different that using different notation styles makes perfect sense. For example, a simple finite summation is a compact block in mathematical notation but it's a multi-line for loop in C. Trying to force the constraints of the 'source' notation on the implementation makes no sense.
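To make that concrete, here's a minimal sketch (the function name and bounds are my own, purely illustrative): the one-symbol summation S = Σ_{i=1}^{n} i unavoidably becomes several lines of C, so insisting on the source notation buys you nothing.

```c
#include <assert.h>

/* The compact mathematical form  S = sum_{i=1}^{n} i  turns into a
 * multi-line loop once translated to C.  Illustrative sketch only;
 * the names here are mine, not from the article. */
int sum_to_n(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
```

A single Σ in the paper, four lines in the implementation.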
Remember, dA/dx means 'gradient'. You're not creating your own naming scheme, you're translating the concept of 'gradient' to the appropriate notation for the medium you're working in.
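As a sketch of what that translation looks like in practice (central finite differences on a 2-D grid; the function name, grid layout, and parameters are hypothetical, not from the article), naming the outputs for the quantity they represent reads naturally:

```c
/* Hypothetical sketch: approximate the gradient of a scalar field
 * stored row-major in a[], using central finite differences.
 * The outputs are named for the physical quantity ("gradient"),
 * not the paper's letters ("dA_dx"). */
void gradient_at(const double *a, int nx, int i, int j, double step,
                 double *gradient_x, double *gradient_y) {
    /* d/dx: difference between the left and right neighbours */
    *gradient_x = (a[j * nx + (i + 1)] - a[j * nx + (i - 1)]) / (2.0 * step);
    /* d/dy: difference between the rows above and below */
    *gradient_y = (a[(j + 1) * nx + i] - a[(j - 1) * nx + i]) / (2.0 * step);
}
```

Anyone reading a call site sees "gradient" immediately, with no need to have the paper's symbol table in their head.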
>Most mathematical symbols aren't representable in code, at least until we're able to use unicode identifiers and sub/superscripts in every language.
If the Linux compose key supported more of the common mathematical symbols, I would have so much trouble not using them in all of my JS code. It's already hard not to use names like â to denote unit vectors.
As a side note, I really want to make a JS library called "Eta" for creating progress bars (puns!), where the global namespace is under Η (the Greek letter), but I think that might piss people off, even if I did allow the visually identical H as an alias.
I think you're generally right. There are exceptions, but in many domains there are standard notations for often-encountered quantities.
You say "F" for forces, "p" for probabilities, "x" for machine learning inputs, "y" for classes, etc. In some cases, you might want to use English terms for these things, but usually you'd want to use the standard letters. Especially if the source paper that you're following uses some flavor of standard notation. (So many standard notations to choose from!)
Of course, coming up with universal rules for naming things is impossible. The advice would probably have been better left out.