hmbdc is an ultra-low-latency / high-throughput middleware. What kind of latency performance should you expect from hmbdc on a typical hardware configuration nowadays?
What about throughput?
Can I trust the above numbers?
How does hmbdc's reliable network messaging latency performance compare to other products, for example DDS?
Here are the two hosts (both running CentOS 7) and the two pairs of directly linked NICs used in the tests. All tests are executed within CPU shields on the hosts to reduce jitter.
The parameter settings for the DDS tests are taken from each product's performance test documentation; no fine tuning is done for any test. The same goes for rmcast and rnetmap. The only parameter that varies is the message size.
- hmbdc vs dds_a latency comparison
Round trip delays for a continuous stream of 24000+ messages are measured at 200 msg/sec and divided by 2 to obtain one-way latencies.
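A minimal sketch of that round-trip / 2 methodology is shown below. It is not hmbdc's actual perf tool; sendAndWaitForEcho() is a stand-in for whatever transport echoes each message back from the remote host, and the message count and pacing simply mirror the parameters quoted above.

```cpp
// Sketch of the round-trip / 2 latency measurement (illustrative only).
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Placeholder for the real transport: send a message and block until the
// remote side echoes it back. Here it just simulates a delay.
static void sendAndWaitForEcho() {
    std::this_thread::sleep_for(std::chrono::microseconds(50));
}

int main() {
    using Clock = std::chrono::steady_clock;
    constexpr int kMessages = 24000;                              // continuous stream, as in the test
    constexpr auto kInterval = std::chrono::microseconds(5000);   // paces the sender at ~200 msg/sec

    std::vector<double> oneWayUs;
    oneWayUs.reserve(kMessages);

    for (int i = 0; i < kMessages; ++i) {
        auto start = Clock::now();
        sendAndWaitForEcho();
        auto rtt = Clock::now() - start;
        // one-way latency estimated as half of the measured round trip
        oneWayUs.push_back(std::chrono::duration<double, std::micro>(rtt).count() / 2.0);
        std::this_thread::sleep_for(kInterval);
    }

    std::sort(oneWayUs.begin(), oneWayUs.end());
    std::printf("min=%.1fus median=%.1fus max=%.1fus\n",
        oneWayUs.front(), oneWayUs[oneWayUs.size() / 2], oneWayUs.back());
}
```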
- hmbdc vs dds_b latency and throughput comparison
Both of hmbdc's transports, rnetmap and rmcast, are significantly faster than dds_b in our tests (medians of 26us and 49us vs 82us for 50-byte messages).
The takeaway from the test results: dds_min = rmcast_median = rnetmap_max!
Here are the throughput comparison results. We have seen throughput results published by other companies based on network utilization. In our opinion, that is not what our users care about most, particularly when messages are small, because utilization does not exclude the per-message header or key overhead; all of that overhead still counts toward the utilization percentage.
In hmbdc, we simply measure user bits per second. If the message rate is 1M msg/sec and the message size (excluding all the headers or keys that are bound to be present in any useful message) is 8 bytes, then the user throughput is 1M * 8 bytes * 8 bits = 64Mbps, period!
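A back-of-the-envelope sketch of that metric follows; the variable names are illustrative, not part of hmbdc's API.

```cpp
// Sketch of the user-bits-per-second metric: only payload bytes count,
// headers and keys are excluded.
#include <cstdio>

int main() {
    double msgPerSec    = 1e6;  // message rate: 1M msg/sec
    double payloadBytes = 8;    // user payload only, headers/keys excluded
    double userBitsPerSec = msgPerSec * payloadBytes * 8;
    std::printf("user throughput = %.0f Mbps\n", userBitsPerSec / 1e6); // 64 Mbps
}
```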
The above chart shows that all 3 products deliver higher bps as the message size increases - a result of the per-message header overhead becoming relatively smaller.
rnetmap and rmcast perform almost identically and beat dds_b in all of our tests.
Large Network Messages
On the 1G link, tcpcast plateaued at 0.94Gbps, and the more scalable rmcast and rnetmap plateaued at 0.92Gbps.