Lines Matching defs:latency
3 * The Kyber I/O scheduler. Controls latency by throttling queue depths using scalable techniques.
68 * Default latency targets for each scheduling domain.
89 * to the target latency:
91 * <= 1/4 * target latency
92 * <= 1/2 * target latency
93 * <= 3/4 * target latency
94 * <= target latency
95 * <= 1 1/4 * target latency
96 * <= 1 1/2 * target latency
97 * <= 1 3/4 * target latency
98 * > 1 3/4 * target latency
102 * The width of the latency histogram buckets is
103 * 1 / (1 << KYBER_LATENCY_SHIFT) * target latency.
107 * The first (1 << KYBER_LATENCY_SHIFT) buckets are <= target latency,
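Taken together, the comments above pin down the bucket layout: with KYBER_LATENCY_SHIFT = 2 there are four buckets at or below the target latency and four above it, each one quarter of the target wide. A minimal userspace sketch of the boundaries (the helper name is mine; the constants follow the comments):

```c
#include <stdint.h>

#define KYBER_LATENCY_SHIFT 2
#define KYBER_LATENCY_BUCKETS (2 << KYBER_LATENCY_SHIFT)  /* 8 buckets */

/* Inclusive upper bound of histogram bucket i, in the target's units;
 * the final bucket ("> 1 3/4 * target latency") is effectively unbounded. */
static uint64_t bucket_upper_bound(unsigned int i, uint64_t target)
{
	return (uint64_t)(i + 1) * (target >> KYBER_LATENCY_SHIFT);
}
```

With a 2 ms target (2000000 ns), bucket 0 ends at 500 µs (1/4 of the target) and bucket 3 ends at exactly the target.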
116 * We measure both the total latency and the I/O latency (i.e., latency after
130 * Per-cpu latency histograms: total latency and I/O latency for each scheduling
284 /* Sum all of the per-cpu latency histograms. */
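The per-cpu summation referenced here can be illustrated with a plain 2-D array standing in for the kernel's real per-cpu machinery (the function name and NCPUS are mine):

```c
#define KYBER_LATENCY_BUCKETS 8
#define NCPUS 4  /* stand-in for the real per-cpu iteration */

/* Collapse per-CPU latency histograms into one total histogram, as the
 * comment describes; the kernel walks actual per-cpu data instead. */
static void sum_latency_buckets(const unsigned int percpu[NCPUS][KYBER_LATENCY_BUCKETS],
				unsigned int total[KYBER_LATENCY_BUCKETS])
{
	for (unsigned int b = 0; b < KYBER_LATENCY_BUCKETS; b++) {
		total[b] = 0;
		for (unsigned int cpu = 0; cpu < NCPUS; cpu++)
			total[b] += percpu[cpu][b];
	}
}
```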
298 * Check if any domains have a high I/O latency, which might indicate
324 * necessarily have enough samples to calculate the latency
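The percentile calculation hinted at here can be sketched in plain C: sum the buckets, then walk the histogram until the requested percentile's sample count is exhausted. This is a userspace approximation under my own names and rounding, not the kernel's exact implementation:

```c
#define KYBER_LATENCY_BUCKETS 8

/* Return the index of the bucket containing the given percentile of the
 * summed histogram, or -1 if there are no samples at all. */
static int percentile_bucket(const unsigned int buckets[KYBER_LATENCY_BUCKETS],
			     unsigned int percentile)
{
	unsigned int bucket, samples = 0, needed;

	for (bucket = 0; bucket < KYBER_LATENCY_BUCKETS; bucket++)
		samples += buckets[bucket];
	if (!samples)
		return -1;

	/* ceil(samples * percentile / 100), widened to avoid overflow */
	needed = (unsigned int)(((unsigned long long)samples * percentile + 99) / 100);
	for (bucket = 0; bucket < KYBER_LATENCY_BUCKETS - 1; bucket++) {
		if (buckets[bucket] >= needed)
			break;
		needed -= buckets[bucket];
	}
	return (int)bucket;
}
```

For a histogram of {90, 5, 3, 1, 1, 0, 0, 0} (100 samples), the 99th percentile lands in bucket 3, i.e. at or below the target latency.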
341 * If this domain has bad latency, throttle less. Otherwise,
344 * The new depth is scaled linearly with the p99 latency vs the
345 * latency target. E.g., if the p99 is 3/4 of the target, then
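The linear scaling described in that comment can be sketched as follows, assuming (my assumption, consistent with the comment's example) that the p99 is reported as a histogram bucket index, where bucket 2 means "<= 3/4 * target latency" and therefore shrinks the depth to 3/4:

```c
#define KYBER_LATENCY_SHIFT 2

/* Scale the queue depth linearly with the p99 bucket: bucket index b maps
 * to (b + 1) / (1 << KYBER_LATENCY_SHIFT) of the current depth. Names are
 * illustrative, not the kernel's. */
static unsigned int scale_depth(unsigned int orig_depth,
				unsigned int p99_bucket)
{
	unsigned int depth = (orig_depth * (p99_bucket + 1)) >> KYBER_LATENCY_SHIFT;

	return depth ? depth : 1;  /* keep at least one request in flight */
}
```

With a current depth of 64, a p99 in bucket 2 (3/4 of the target) yields 48, and bucket 1 (1/2 of the target) yields 32, matching the comment's examples.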
623 u64 target, u64 latency)
628 if (latency > 0) {
630 bucket = min_t(unsigned int, div64_u64(latency - 1, divisor),
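The truncated snippet above selects the histogram bucket for one latency sample. A self-contained userspace rendering, with the kernel's min_t/div64_u64 replaced by plain C and the divisor assumed to be one bucket width (target >> KYBER_LATENCY_SHIFT, clamped to at least 1):

```c
#include <stdint.h>

#define KYBER_LATENCY_SHIFT 2
#define KYBER_LATENCY_BUCKETS (2 << KYBER_LATENCY_SHIFT)

/* (latency - 1) / bucket_width picks the bucket, capped at the last one;
 * a zero latency falls into bucket 0. Sketch only, names are mine. */
static unsigned int latency_to_bucket(uint64_t target, uint64_t latency)
{
	uint64_t divisor, bucket;

	if (latency == 0)
		return 0;

	divisor = target >> KYBER_LATENCY_SHIFT;
	if (divisor == 0)
		divisor = 1;
	bucket = (latency - 1) / divisor;
	return bucket < KYBER_LATENCY_BUCKETS - 1 ?
	       (unsigned int)bucket : KYBER_LATENCY_BUCKETS - 1;
}
```

For a 2 ms target, a 500 µs sample lands in bucket 0 ("<= 1/4 * target latency"), a 2 ms sample in bucket 3 ("<= target latency"), and anything past 3.5 ms in the final, unbounded bucket 7.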