This is the fixed number of retries an LB is willing to make if a dead node is returned from the underlying distributor.
For randomized LBs (P2C/LeastLoaded, P2C/EWMA, and Aperture) this value has an additional meaning: it bounds how often the LB fails to pick a healthy node out of a partially-unhealthy replica set. For example, if half of the replica set is down, the probability of picking two dead nodes in a single attempt is 0.5 * 0.5 = 0.25. Repeating that process 5 times, the probability of seeing dead nodes on every attempt is 0.25 ^ 5 ≈ 0.1%. This means that when half of the cluster is down, the LB makes a bad choice (when a better choice may have been available) for roughly 0.1% of requests.
Please note that this doesn't mean that 0.1% of requests will fail when P2C operates on a half-dead cluster, since there is an additional layer of requeues (see the Retries module) involved above those "bad picks".
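The arithmetic above can be checked with a short sketch (an illustrative calculation, not Finagle code; the names `dead` and `max_effort` are assumptions standing in for the failure fraction and the retry budget):

```python
# Probability that a single P2C pick selects two dead nodes
# when a fraction `dead` of the replica set is unhealthy.
dead = 0.5
p_bad_pick = dead * dead          # 0.25

# Probability that `max_effort` independent picks are all bad.
max_effort = 5
p_all_bad = p_bad_pick ** max_effort

print(f"{p_all_bad:.6f}")  # ~0.000977, i.e. roughly 0.1% of requests
```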
The aperture load-band balancer balances load to the smallest subset ("aperture") of services so that:

1. The concurrent load, averaged over a window specified by smoothWin, to each service stays within the load band delimited by lowLoad and highLoad.
2. Services receive load proportional to the ratio of their weights.

Unavailable services are not counted; the aperture expands as needed to cover those that are available.
See the user guide for more details.
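The load-band behavior can be sketched in a few lines (a toy model only; the names `adjust_aperture`, `low_load`, and `high_load` are assumptions for illustration, not Finagle's API):

```python
# A toy sketch of load-band aperture sizing: grow the aperture when the
# smoothed per-node load exceeds the band, shrink it when load falls below,
# and always stay within [1, n_available].
def adjust_aperture(aperture, avg_load, low_load, high_load, n_available):
    if avg_load > high_load:
        aperture += 1
    elif avg_load < low_load:
        aperture -= 1
    return max(1, min(aperture, n_available))

# Load above the band widens the aperture; load below narrows it.
print(adjust_aperture(3, 2.5, 0.5, 2.0, 10))  # 4
print(adjust_aperture(3, 0.2, 0.5, 2.0, 10))  # 2
```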
An efficient, strictly least-loaded balancer that maintains an internal heap. Note that because weights are not supported by the HeapBalancer, they are ignored when the balancer is constructed.
See the user guide for more details.
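The core idea can be sketched with a standard binary heap (a minimal illustration; Finagle's HeapBalancer updates the heap in place and tracks completions, which this sketch omits):

```python
import heapq

# A minimal sketch of a strictly least-loaded balancer backed by a heap.
class HeapBalancer:
    def __init__(self, nodes):
        # Each heap entry is (outstanding_load, insertion_order, node);
        # the order field breaks ties deterministically.
        self._heap = [(0, i, n) for i, n in enumerate(nodes)]
        heapq.heapify(self._heap)

    def pick(self):
        # Take the least-loaded node, charge it one request, and put it back.
        load, order, node = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, order, node))
        return node

lb = HeapBalancer(["a", "b", "c"])
print([lb.pick() for _ in range(4)])  # ['a', 'b', 'c', 'a']
```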
An O(1), concurrent, weighted least-loaded fair load balancer. This uses the ideas behind "power of 2 choices" [1] combined with O(1) biased coin flipping via the alias method, described in Drv.
The maximum amount of "effort" we're willing to expend on a load balancing decision without reweighing.
The PRNG used for flipping coins. Override for deterministic tests.

[1] Michael Mitzenmacher. 2001. The Power of Two Choices in Randomized Load Balancing. IEEE Trans. Parallel Distrib. Syst. 12, 10 (October 2001), 1094-1104.
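The pick loop combining two random choices with the maxEffort retry budget can be sketched as follows (illustrative only; `p2c_pick` is a hypothetical name, and Finagle additionally draws the two candidates via O(1) alias-method sampling so that weights bias the coin flips):

```python
import random

# A sketch of the "power of two choices" pick loop with maxEffort retries.
def p2c_pick(nodes, load, healthy, max_effort=5, rng=random):
    """Pick two distinct nodes at random and return the less loaded one,
    retrying up to max_effort times while the choice is unhealthy."""
    choice = None
    for _ in range(max_effort):
        a, b = rng.sample(nodes, 2)
        choice = a if load[a] <= load[b] else b
        if healthy[choice]:
            return choice
    return choice  # fall back to the last (possibly unhealthy) pick

nodes = ["a", "b", "c", "d"]
load = {"a": 3, "b": 1, "c": 7, "d": 2}
healthy = {n: True for n in nodes}
print(p2c_pick(nodes, load, healthy, rng=random.Random(42)))
```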
Like p2c but using the Peak EWMA load metric.
Peak EWMA uses a moving average over an endpoint's round-trip time (RTT) that is highly sensitive to peaks. This average is then weighted by the number of outstanding requests, effectively increasing our resolution per-request. It is designed to react to slow endpoints more quickly than least-loaded by penalizing them when they exhibit slow response times. This load metric operates under the assumption that a loaded endpoint takes time to recover and so it is generally safe for the advertised load to incorporate an endpoint's history. However, this assumption breaks down in the presence of long polling clients.
The window of latency observations.
The maximum amount of "effort" we're willing to expend on a load balancing decision without reweighing.
The PRNG used for flipping coins. Override for deterministic tests.
See the user guide for more details.
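A toy version of the metric described above might look like this (illustrative names and structure only, assuming a `PeakEwma` class with a decay window; Finagle's implementation differs in detail):

```python
import math

# A toy sketch of the Peak EWMA load metric: peaks in RTT take effect
# immediately, while improvements decay exponentially over the window.
class PeakEwma:
    def __init__(self, decay_s=10.0):
        self.decay_s = decay_s   # the window of latency observations
        self.cost = 0.0          # smoothed RTT estimate, in seconds
        self.pending = 0         # outstanding requests
        self.last = 0.0          # timestamp of the last observation

    def observe(self, rtt_s, now_s):
        if rtt_s > self.cost:
            self.cost = rtt_s    # peaks are adopted immediately
        else:
            w = math.exp(-(now_s - self.last) / self.decay_s)
            self.cost = self.cost * w + rtt_s * (1.0 - w)
        self.last = now_s

    def load(self):
        # Weight the smoothed RTT by the number of outstanding requests,
        # so queue depth amplifies a slow endpoint's advertised load.
        return self.cost * (self.pending + 1)
```

A slow response immediately raises the advertised load, while a subsequent fast response only pulls it down gradually, which is why long-polling clients break the metric's assumptions.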
A simple round robin balancer that chooses the next backend in the list for each request.
WARNING: Unlike the other balancers available in Finagle, this one does not take latency into account and will happily direct load to slow or oversubscribed services. We recommend using one of the other load balancers for typical production use.
The maximum amount of "effort" we're willing to expend on a load balancing decision without reweighing.
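The selection strategy itself is trivial (a minimal sketch; Finagle's RoundRobinBalancer also skips unavailable nodes, which this omits):

```python
import itertools

# A minimal round-robin chooser: yield backends in order, wrapping forever.
def round_robin(backends):
    return itertools.cycle(backends)

rr = round_robin(["a", "b", "c"])
print([next(rr) for _ in range(5)])  # ['a', 'b', 'c', 'a', 'b']
```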
Constructor methods for various load balancers. The methods take balancer specific parameters and return a LoadBalancerFactory that allows you to easily inject a balancer into the Finagle stack via client configuration.
See the user guide for more details.