
> https://github.com/joyent/node/commit/085dd30e93da67362f044a...

Looks like a flood of concurrent requests will just fill up the memory



I was considering using Node.js for a new project, but I quickly backed away from it when I realized it didn't do basic flow control properly.
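
For what it's worth, the flow control in question is the usual pause/resume backpressure dance on the request stream. A minimal sketch with the plain http module (the port and handler are just illustrative):

    var http = require('http');

    // Illustrative handler: echo the incoming body back, pausing the request
    // whenever the writable side can't keep up.
    http.createServer(function (req, res) {
      req.on('data', function (chunk) {
        if (!res.write(chunk)) {
          // The write buffer is full; stop reading from the socket...
          req.pause();
          res.once('drain', function () {
            req.resume(); // ...and continue once it has been flushed.
          });
        }
      });
      req.on('end', function () {
        res.end();
      });
    }).listen(8080);

The whole point of pausing here is to stop buffering chunks in memory faster than they can be written out.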


So you started considering Node.js, and immediately started reading the http parsing code?

...joking aside, I'm curious what you saw that made you realize basic flow control was broken?


So you started considering Node.js, and immediately started reading the http parsing code?

Well, there's no way to know whether something is reliable for your purposes unless you understand how it works... Is it uncommon to read code?


It struck me as an oddly specific part of a large codebase to immediately jump to, but in retrospect it makes sense.

I'm all for reading code, but I think that if you can get away with not reading code, it's actually a good sign - it means all the abstractions are holding up. jgreen10 probably didn't go off and start reading the assembly code that gets generated when you compile Node.js. Again, I agree with your larger point. I just want to be careful about turning up my nose at those who don't always read up on the modules they use before using them.


Is it uncommon to read code?

I didn't think so, but it's starting to seem that way.


Funny, I noticed this on an internal server as well but chalked it up to an older version, hoping it "clearly is fixed in the latest code; something so glaringly, obviously broken wouldn't be hanging around for long with all the hype surrounding Node these days..."

Anyway, I wouldn't stick Node out exposed to the outside world. Granted, sticking nginx in front presumably won't help with this issue: just keep feeding a 4GB file to it and it will crash the back-end. [EDIT: never mind, I'm not sure anymore; someone mentions it is possible to mitigate it that way]

Yikes, this is a bad one. Glad they fixed it. But it leaves me with the same impression I had after finding out how MongoDB used to have unacknowledged writes turned on by default, and people's data was silently getting corrupted.


Granted sticking nginx in front presumably won't help with this issue. Just keep feeding a 4GB file to it and it will crash the back-end.

Why does that happen? nginx can't help here?


See mathrawka's reply (I haven't tested it, though)

BTW, I just ran my server's memory into swap with this:

    $ dd if=/dev/zero of=2g bs=1M count=2048

    $ curl -F "2g=@2g" <myresource>
(EDIT: explanation: this creates a 2 GB file and then uploads it to <myresource> as a file upload -- multipart MIME. The @ sign just inserts the named file's data into the form.)


And that is why you should never blindly use bodyParser middleware in production...

http://andrewkelley.me/post/do-not-use-bodyparser-with-expre...
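
To make that concrete, here's a rough sketch along the lines the linked post argues for (Express 3.x-era middleware; the route, limits, and port are made up, and option names may differ between versions): instead of the all-in-one bodyParser, enable only the parsers you need and handle uploads explicitly.

    var express = require('express');
    var app = express();

    // express.bodyParser() also pulls in the multipart parser (see the linked
    // post); enable only the parsers you actually need, with explicit limits.
    app.use(express.json({ limit: '100kb' }));
    app.use(express.urlencoded({ limit: '100kb' }));

    // Handle file uploads only on the route that expects them, ideally with a
    // streaming parser, rather than globally.
    app.post('/upload', function (req, res) {
      // ... stream req to wherever it needs to go ...
      res.send('ok');
    });

    app.listen(3000);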


Wouldn't client_max_body_size (which is set to 1MB by default) in the nginx config prevent the 4GB from even reaching the Node backend?
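
Something like this, if I'm reading the docs right (illustrative snippet; the backend address is made up). nginx should reject an oversized declared body with a 413 instead of forwarding it:

    server {
        listen 80;

        # Defaults to 1m when unset; requests with a bigger body get a
        # 413 rather than being proxied to the Node backend.
        client_max_body_size 1m;

        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }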


Exactly how did you realize that?



