Zero-Copy Buffer Manipulation: Parsing Market Data at Memory Speed

Source: DEV Community
Price feeds hit your server: roughly forty thousand messages per second, each carrying an order ID, a timestamp, a price, and a quantity. Node.js receives them over TCP, and before your application logic ever touches them, you have already paid a cost, several times over. Understanding what that cost is, and how to eliminate it, is what separates a Node.js service that struggles under load from one that holds up.

What happens when data arrives?

When a TCP packet lands, the OS writes it into a kernel buffer. libuv, the library that handles Node's asynchronous I/O, copies that packet from the kernel buffer into a Node Buffer, and the data event fires. That is copy number one. So far, so good.

Most code then hands each chunk to chunk.toString() and JSON.parse(). That is fine in isolation; under load, it's a problem. toString() allocates a brand-new string: copy number two. JSON.parse() then walks that string character by character, allocating a new object for every message and a new value for every key-value pair: copies three, four, five, and counting. You get the gist. Each allocation lands on the V8 heap, and each object eventually becomes work for the garbage collector.
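The original snippet isn't shown here, but the typical handler it describes looks something like this sketch (the newline-delimited JSON format and the field names are illustrative, not from the source):

```javascript
// Typical "decode everything" handler for a hypothetical
// newline-delimited JSON feed. Every step allocates.
function parseChunk(chunk) {
  const text = chunk.toString('utf8');   // copy #2: Buffer -> new string
  const messages = [];
  for (const line of text.split('\n')) { // split() allocates more strings
    if (line.length === 0) continue;
    // copies #3, #4, #5...: one object per message, one value per field,
    // all of it landing on the V8 heap
    messages.push(JSON.parse(line));
  }
  return messages;
}

// Wiring it to a socket would look like:
// const net = require('net');
// net.createServer((socket) => {
//   socket.on('data', (chunk) => parseChunk(chunk).forEach(handleTick));
// });
```

At forty thousand messages per second, each of those short-lived strings and objects is garbage the collector has to sweep up later.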
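The zero-copy approach the title promises can be sketched as follows: read each field straight out of the received Buffer at a fixed offset, so no intermediate string and no per-field parse allocations are needed. The 24-byte little-endian wire layout below is a made-up example, not the source's actual format:

```javascript
// Hypothetical fixed-width binary layout, little-endian, 24 bytes/message:
//   bytes  0-3   orderId   (u32)
//   bytes  4-11  timestamp (u64)
//   bytes 12-19  price     (f64)
//   bytes 20-23  quantity  (u32)
const MSG_SIZE = 24;

// Reads fields directly from the Buffer Node already gave us.
// No toString(), no JSON.parse(), no intermediate copies -- only the
// one result object, which a real hot path could avoid as well by
// passing the fields onward as plain arguments.
function readTick(buf, offset) {
  return {
    orderId:   buf.readUInt32LE(offset),
    timestamp: buf.readBigUInt64LE(offset + 4),
    price:     buf.readDoubleLE(offset + 12),
    quantity:  buf.readUInt32LE(offset + 20),
  };
}
```

A handler would then loop `for (let o = 0; o + MSG_SIZE <= chunk.length; o += MSG_SIZE)` over the chunk, calling readTick at each offset (a real implementation also has to buffer partial messages that straddle chunk boundaries).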