Most users think of bandwidth as "speed." In reality, the "speed" they're referring to is available capacity: popular speed tests simply fill the connection with as much data as will fit and report the peak number on completion.
But real applications don't work that way. They open TCP connections, negotiate window sizes, and transfer data in a sustained flow governed by round-trip time. The actual throughput a user experiences is determined by latency, not line rate.
A speed test may say 1 Gbps. In practice, the throughput a single session achieves will be much, much less. And that's the number that defines the user experience.
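The gap is simple arithmetic: TCP keeps at most one window of data in flight per round trip, so a single session is capped at window ÷ RTT. A minimal sketch with illustrative numbers (the 64 KiB window and 40 ms RTT are assumptions, not measured values):

```python
# Illustrative inputs: a 64 KiB TCP window over a 40 ms round-trip path.
WINDOW_BYTES = 64 * 1024   # assumed negotiated window
RTT_SECONDS = 0.040        # assumed round-trip time

# At most one window can be in flight per RTT, so sustained
# single-session throughput is capped at window / RTT.
max_bps = WINDOW_BYTES * 8 / RTT_SECONDS
print(f"Single-session ceiling: {max_bps / 1e6:.1f} Mbit/s")
# Ceiling is ~13.1 Mbit/s, even on a 1 Gbit/s line.
```

Raise the RTT and the ceiling drops further, no matter what the speed test reported.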
Equilibrium is the point where throughput reaches the maximum achievable rate for a given connection, as defined by latency. A healthy connection should reach equilibrium and sustain it, even when multiple users are active.
MCS tests whether your connection can reach equilibrium and maintain it consistently. If it can't, the test data shows you exactly why: high latency, delay spikes, retransmissions, or contention.
This is what RFC 6349, the IETF framework for TCP throughput testing, was designed to measure. MCS implements RFC 6349 to give you results that reflect real application behavior, not marketing numbers.
MCS doesn't just report a single speed number. It measures the underlying metrics that explain why throughput is what it is, and what's limiting it.
**Sustained throughput.** The rate your connection actually delivers. Not peak bursts: the real number, measured over time, that applications depend on.

**Round-trip time.** Min, average, max, and consistency. Latency directly caps throughput: with a fixed TCP window, a 50 ms RTT halves maximum throughput compared to 25 ms, regardless of line rate.

**Throughput stability.** How stable is throughput over the test duration? Delay spikes and jitter cause retransmissions that destroy sustained performance. MCS measures the variance.

**TCP window utilization.** The negotiated data window determines how much data can be in flight at once. MCS reports actual window utilization against the theoretical maximum.

**Retransmissions.** Packets that had to be resent due to loss or delay. Each retransmission reduces effective throughput and indicates network-level problems.

**Multi-user load.** Test with multiple simultaneous users to see how throughput degrades under load. Know how your connection performs when the whole office is online.

**Bidirectional testing.** Both directions are measured independently. Asymmetric connections often have very different quality characteristics in each direction.

**Delay variation.** Spikes in delay beyond the baseline round-trip time trigger retransmissions and reduce throughput. MCS charts delay variation over the entire test duration.

**Efficiency.** The ratio of useful throughput to the connection's theoretical maximum. Shows how much of your purchased bandwidth is actually being utilized.
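Several of these metrics reduce to simple ratios. RFC 6349, for instance, defines TCP Efficiency as the percentage of transmitted bytes that did not have to be retransmitted. A minimal sketch (the example byte counts are illustrative, not measured):

```python
def tcp_efficiency(transmitted_bytes: int, retransmitted_bytes: int) -> float:
    """TCP Efficiency % per RFC 6349:
    (transmitted - retransmitted) / transmitted * 100."""
    return 100.0 * (transmitted_bytes - retransmitted_bytes) / transmitted_bytes

# Example: 100 MB transmitted, of which 2 MB were retransmissions.
print(tcp_efficiency(100_000_000, 2_000_000))  # 98.0
```

Even a few percent of retransmissions is enough to keep a connection from holding equilibrium.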
MCS follows the IETF RFC 6349 framework for TCP throughput testing. This means the results reflect how real applications behave on your network, not how fast data can burst in ideal conditions.
MCS measures the connection's round-trip time and calculates the theoretical maximum throughput based on latency and TCP window size. This is the ceiling your connection can't exceed.
Sustained TCP transfers run between test points: your server, satellites, or browser-based clients. MCS measures actual throughput over time, tracking whether equilibrium is reached and how consistently it's maintained.
If throughput doesn't reach the theoretical maximum, MCS shows you why: high latency, delay variation, retransmissions, TCP window limits, or contention. Actionable data, not just a number.
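The theoretical maximum in the first step is bandwidth-delay arithmetic: window ÷ RTT gives the window-limited ceiling, and line rate × RTT gives the window needed to fill the link. A sketch with assumed example values (50 ms RTT, 256 KiB window, 1 Gbit/s line rate):

```python
# Assumed example inputs; a real RFC 6349 test measures RTT first.
rtt_s = 0.050                  # round-trip time (50 ms)
window_bytes = 256 * 1024      # negotiated TCP window (256 KiB)
line_rate_bps = 1_000_000_000  # provisioned line rate (1 Gbit/s)

# Ceiling imposed by the window: one window in flight per RTT.
ceiling_bps = window_bytes * 8 / rtt_s

# Bandwidth-delay product: window required to fill the line at this RTT.
bdp_bytes = line_rate_bps / 8 * rtt_s

print(f"Window-limited ceiling: {ceiling_bps / 1e6:.1f} Mbit/s")   # ~41.9
print(f"Window needed to fill the link: {bdp_bytes / 1e6:.2f} MB")  # 6.25
```

With these numbers the connection can never exceed roughly 42 Mbit/s of its 1 Gbit/s line rate, which is exactly the kind of window limit the results expose.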
Understanding the difference between what a speed test tells you and what MCS reveals is the key to solving bandwidth quality problems.
| | Typical Speed Test | MCS Bandwidth Quality |
|---|---|---|
| What it measures | Peak burst capacity | Sustained equilibrium throughput |
| Test method | Flood the pipe, report the peak | RFC 6349 TCP throughput framework |
| Latency impact | Ignored or reported separately | Directly correlated with throughput results |
| Multi-user simulation | No | Yes, configurable concurrent sessions |
| Consistency tracking | No, single snapshot | Yes, throughput stability over duration |
| Retransmission detection | No | Yes, identifies quality-driven resends |
| Bidirectional | Sequential up/down | Independent up and down with full metrics |
| Test endpoints | Shared public servers | Your own satellites at real destination points |
| Automated scheduling | No | Yes, continuous 24/7 from any satellite |
| Result storage | Ephemeral or third-party | Up to 1 billion results on your server |
Validate that WAN links, MPLS circuits, and SD-WAN paths deliver the throughput your applications need, not just the bandwidth your contract promises.
Prove SLA compliance with RFC 6349 data. Show customers what their connection actually delivers under sustained load, and resolve disputes with evidence.
Your application's perceived performance depends on the connection beneath it. Help customers validate their throughput so they stop blaming your platform.
Use real throughput data to justify upgrades, diagnose complaints, and demonstrate the value of network improvements with before-and-after evidence.