It struck me as an oddly specific part of a large codebase to immediately jump to, but in retrospect it makes sense.
I'm all for reading code, but I think that if you can get away with not reading code, it's actually a good sign - it means all the abstractions are holding up. jgreen10 probably didn't go off and start reading the assembly code that is generated when you compile Node.js. Again, I agree with your larger point. I just want to be careful before turning up my nose at those who don't always read up on modules before using them.
Funny, I noticed this on an internal server as well but chalked it up to an older version, hoping it "clearly is fixed in the latest code; something so glaringly, obviously broken wouldn't be hanging around too long with all the hype surrounding Node these days..."
Anyway, I wouldn't expose Node directly to the outside world. Granted, sticking nginx in front presumably won't help with this issue: just keep feeding it a 4 GB file and it will crash the back-end. [EDIT: n.m., I'm not sure anymore; someone mentions it is possible to mitigate it that way.]
Yikes, this is a bad one. Glad they fixed it. But it leaves me with the same impression I had after finding out how MongoDB used to have unacknowledged writes turned on by default, and people's data was silently getting corrupted.
(EDIT: explanation: this creates a 2 GB file, then uploads it to <myresource> as a file upload -- multipart MIME. The @ sign just inserts the named file's data into the form.)
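For anyone who wants to reproduce it, a minimal sketch of that two-step process with dd and curl (the file name, size, and URL here are placeholders, not the original poster's exact command):

```shell
# Create a 2 GB file of zero bytes to use as the upload payload
dd if=/dev/zero of=big.bin bs=1M count=2048

# Send it as a multipart/form-data file field; the "@" prefix tells
# curl to read the named file's contents into the form field
curl -F "upload=@big.bin" http://example.com/myresource
```

curl builds the multipart body itself, so the server sees an ordinary browser-style file upload.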
Looks like a flood of concurrent requests will just fill up the memory.