Performance

hmbdc is an ultra-low-latency / high-throughput middleware. What kind of latency performance should you expect from hmbdc on a typical hardware configuration today?

Anyone who does not answer "That all depends ..." is not giving you the complete performance picture. However, we can still provide some guidelines and a tool for evaluating hmbdc's performance - read on...

How does hmbdc's reliable IPC and network messaging latency compare with qperf results?

qperf is a widely used and accurate network latency measurement tool on Linux, so it serves here as the baseline against which hmbdc's delivery latency numbers are evaluated.

Here is the test environment:
- Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz, 8 cores, 16GB RAM, CentOS 8
- Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz, Ubuntu 18.04
- 1G Cisco SG200 switch (Intel 82574L and 82579LM NICs)

All tests on the hmbdc side are performed with the tips-ping-pong tool that comes with the release. It measures the ping-pong round-trip time (rtt) and reports the one-way latency as rtt / 2.
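
To make the rtt / 2 methodology concrete, here is a minimal, self-contained sketch. It is not the actual tips-ping-pong tool and uses no hmbdc API; two threads bounce a flag back and forth in place of real messages, and the one-way latency is reported as half of the averaged round-trip time.

```cpp
// Minimal sketch of the rtt/2 methodology (not the actual tips-ping-pong tool):
// the "pong" side echoes each ping back, the "ping" side measures the
// round-trip time, and the one-way latency is reported as rtt / 2.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<int> ball{0};            // 0: ping holds the ball, 1: pong does
    constexpr int rounds = 100000;

    std::thread pong([&] {               // echoes the "message" back
        for (int i = 0; i < rounds; ++i) {
            while (ball.load(std::memory_order_acquire) != 1) {}
            ball.store(0, std::memory_order_release);
        }
    });

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < rounds; ++i) {   // ping: send, then spin for the echo
        ball.store(1, std::memory_order_release);
        while (ball.load(std::memory_order_acquire) != 0) {}
    }
    auto rtt = std::chrono::duration<double, std::nano>(
        std::chrono::steady_clock::now() - start).count() / rounds;
    std::printf("avg rtt = %.0f ns, one-way latency (rtt / 2) = %.0f ns\n",
                rtt, rtt / 2);
    pong.join();
    return 0;
}
```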

  • hmbdc IPC regular, 0cpy vs qperf



qperf's latency via the loopback TCP interface increases as the message size grows from 100 bytes to 10MB.
Regular hmbdc IPC delivery shows better performance than qperf until the message size reaches thousands of bytes.
As expected, hmbdc zero-copy shared memory, which is recommended for large messages, can deliver a message in hundreds of nanoseconds regardless of the message size (100B-10MB).
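
The size independence of zero-copy delivery is easiest to see in a conceptual sketch. The following is not hmbdc's actual 0cpy API; the names ZeroCopyDesc, regularSend, and zeroCopySend are hypothetical. It only illustrates why the latency curve stays flat: a regular send copies the payload into the transport (cost grows with message size), while a zero-copy send constructs the payload in place in a pre-mapped shared memory region and hands the receiver only a small, fixed-size descriptor.

```cpp
// Conceptual sketch, not hmbdc's 0cpy API: why zero-copy delivery latency
// does not depend on message size.
#include <cstddef>
#include <cstdio>
#include <vector>

// what actually crosses the process boundary in the zero-copy case:
// a small, fixed-size descriptor pointing into the shared region
struct ZeroCopyDesc {
    std::size_t offset;
    std::size_t length;
};

// regular delivery: the payload bytes are copied into the transport buffer,
// so the per-message cost grows with the message size
void regularSend(std::vector<std::byte>& transportBuf,
                 const std::byte* payload, std::size_t length) {
    transportBuf.assign(payload, payload + length);
}

// zero-copy delivery: the payload was already constructed in place inside a
// pre-mapped shared memory region, so "sending" only hands over the
// descriptor; the cost is the same whether the payload is 100B or 10MB
ZeroCopyDesc zeroCopySend(std::size_t offsetInSharedRegion, std::size_t length) {
    return ZeroCopyDesc{offsetInSharedRegion, length};
}

int main() {
    std::vector<std::byte> sharedRegion(10u << 20);  // stand-in for mmap'ed shm
    std::vector<std::byte> transportBuf;

    regularSend(transportBuf, sharedRegion.data(), 10u << 20);  // copies 10MB
    auto desc = zeroCopySend(0, 10u << 20);       // passes a tiny descriptor
    std::printf("descriptor: offset=%zu length=%zu\n", desc.offset, desc.length);
    return 0;
}
```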

  • hmbdc tcpcast, rmcast vs qperf


hmbdc tcpcast and rmcast match qperf's latency numbers closely, with rmcast performing slightly better than qperf in the 1000B-10KB range.