Looks like a neat tool, and a valuable addition to the network bandwidth/latency/performance toolkit.
(Admittedly, it has been a while since I have done this kind of measurement.) It did feel odd that there was no mention of time discipline for the client/server and its impact on finer-grained stats. Perhaps, at least, mention that NTP (or, ideally, PTP, though that is a fair bit more involved) is strongly recommended to be running and stable (with NTP's own jitter kept low)?
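For reference, here's a rough Go sketch (Linux-only, using the golang.org/x/sys/unix package; not part of tcpulse) of how a client or server host could sanity-check its own clock discipline before a run:

```go
// Minimal sketch: query the kernel's NTP state via adjtimex(2) and report
// whether the clock is disciplined, plus the kernel's error estimates.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	var tx unix.Timex // Modes left at 0: read-only query, no adjustment
	state, err := unix.Adjtimex(&tx)
	if err != nil {
		log.Fatalf("adjtimex: %v", err)
	}
	synced := state != unix.TIME_ERROR && tx.Status&unix.STA_UNSYNC == 0
	// Maxerror/Esterror are reported by the kernel in microseconds.
	fmt.Printf("clock synchronized: %v (est. error %d us, max error %d us)\n",
		synced, tx.Esterror, tx.Maxerror)
}
```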
Any possibility of using it with diodes? I've used iperf for this (two separate instances), but iperf2 doesn't support this. We've also written our own in Rust, tightly coupled to our needs, but if there are better tools out there, well, we'd be silly not to investigate. Thanks!
Currently tcpulse only supports a `pingpong` mode, so true one-way (“diode”) transfers aren’t available out of the box. That said, the design is simple enough that you could introduce an option or wrapper to lock the flow in one direction and implement a diode mode. Happy to review a PR if someone wants to add it.
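For anyone curious, here is a rough sketch of the general shape such a mode could take (illustrative only; the address, payload size, and rate below are placeholders, and this is not tcpulse's actual code or CLI):

```go
// Rough illustration of a one-way "diode" sender: write fixed-size UDP
// datagrams at a target rate and never read from the socket, so nothing is
// required on the reverse path.
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	const (
		target  = "10.0.0.2:9100" // hypothetical receiver behind the diode
		payload = 1200            // bytes per datagram
		rate    = 1000            // datagrams per second
	)

	conn, err := net.Dial("udp", target)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, payload)
	tick := time.NewTicker(time.Second / rate)
	defer tick.Stop()

	for range tick.C {
		if _, err := conn.Write(buf); err != nil {
			log.Printf("send: %v", err) // no reads: the flow stays one-way
		}
	}
}
```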
Cool little tool! Is it possible for the server to also print out performance metrics for each peer it is connected to?
Thank you! No, tcpulse only measures and prints out metrics from the client side, not the server side. However, since each client outputs its own measurement results, you effectively get per-peer performance metrics anyway.
Any idea how this differs from iperf?
iperf3 is a link “speedometer” – spin it up between two hosts, crank -P or -u -b, and it tells you max TCP/UDP throughput (and jitter/loss if you like).
tcpulse is a fine-grained traffic “microscope” – you dial exact CPS or concurrent sockets, spray dozens of targets from one client, and get p90/p95/p99 latencies per flow.
Use iperf3 for a quick bandwidth check; use tcpulse when you need repeatable, controlled connection patterns and detailed latency stats across many backends.
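To make the "detailed latency stats" part concrete, here's a generic illustration (not tcpulse's internals) of how per-flow p90/p95/p99 values are typically derived from recorded samples:

```go
// Generic sketch: record one duration per operation on a flow, sort them,
// and index into the sorted slice at the requested quantile.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns an approximate p-th percentile (0-100) by indexing into
// the sorted samples (no interpolation); samples must be non-empty.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted)-1) * p / 100.0)
	return sorted[idx]
}

func main() {
	// Fake per-flow RTT samples standing in for real measurements.
	samples := []time.Duration{
		900 * time.Microsecond, 1100 * time.Microsecond, 950 * time.Microsecond,
		2500 * time.Microsecond, 1050 * time.Microsecond, 980 * time.Microsecond,
	}
	for _, p := range []float64{90, 95, 99} {
		fmt.Printf("p%.0f = %v\n", p, percentile(samples, p))
	}
}
```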