We made a few changes to how we handle larger network payloads (ObjectCreate & Refresh messages). Most of the gains come from improvements to our compression pipeline, which reduce CPU and bandwidth usage on both client and server:
- The server was compressing broadcast messages once per connection instead of once per message, causing unnecessary CPU load.
- We now strip null values from JSON before compressing. This barely reduces the payload size, but it noticeably cuts compression/decompression overhead for both client & server.
- We used to chunk large payloads first and then compress each chunk individually. We now compress the whole payload once and then chunk the result, which makes a big difference, especially on the client side.
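The broadcast fix above can be sketched roughly like this (a minimal illustration, not the actual server code; `broadcast` and the connection interface are hypothetical names):

```python
import zlib

def broadcast(payload: bytes, connections) -> None:
    # Compress once per message, then reuse the same compressed bytes
    # for every connection, instead of re-running the compressor
    # once per connection.
    compressed = zlib.compress(payload)
    for conn in connections:
        conn.send(compressed)
```

The point is simply that the CPU cost of compression stays constant as the connection count grows; only the cheap `send` scales with the number of clients.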
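Null-stripping before compression could look something like the following (an assumed sketch; `strip_nulls` and `encode` are illustrative names, and array entries are kept since JSON nulls in arrays may be positional):

```python
import json
import zlib

def strip_nulls(value):
    # Recursively drop dict entries whose value is null; keep nulls
    # inside arrays, where position may be meaningful.
    if isinstance(value, dict):
        return {k: strip_nulls(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [strip_nulls(v) for v in value]
    return value

def encode(message: dict) -> bytes:
    # Fewer bytes go into the compressor, so both compression and
    # decompression walk less data even if the output size is similar.
    slim = strip_nulls(message)
    return zlib.compress(json.dumps(slim).encode("utf-8"))
```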
Here are some cherry-picked benchmarks (~1 MB payload, 1000 GOs & 2000 components):
| Benchmark | Old (ms/op) | New (ms/op) | Speedup |
| --- | --- | --- | --- |
| Compress (with → without nulls) | 0.210 | 0.165 | 1.27x |
| Decompress (with → without nulls) | 0.232 | 0.074 | 3.13x |
| Server (chunk-first → compress-first) | 0.85 | 0.88 | ~same |
| Client (chunk-first → compress-first) | 1.16 | 0.34 | 3.4x |
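The chunk-ordering change behind the server/client numbers can be sketched as follows (illustrative code with a hypothetical chunk size, not the real pipeline): compressing first produces one compression stream instead of one per chunk, and the client can reassemble the chunks and decompress in a single pass.

```python
import zlib

CHUNK_SIZE = 64 * 1024  # hypothetical chunk size

def chunk_then_compress(payload: bytes) -> list[bytes]:
    # Old order: split first, then compress each chunk separately,
    # paying per-chunk compressor overhead N times.
    chunks = [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]
    return [zlib.compress(c) for c in chunks]

def compress_then_chunk(payload: bytes) -> list[bytes]:
    # New order: compress the whole payload once, then split the
    # compressed blob for transport.
    blob = zlib.compress(payload)
    return [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]

def reassemble(chunks: list[bytes]) -> bytes:
    # Client side for the new order: join, then decompress once.
    return zlib.decompress(b"".join(chunks))
```

With compress-first, the client does a single decompression over the joined chunks rather than one decompression per chunk, which lines up with the client-side speedup being the largest.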
Jokes aside, nice devblog! I really appreciate the keyboard layout change and especially the improvements to shadows.