package core_kernel

Expert operations.

val create_exn :
  now:Core.Time_ns.t ->
  hopper_to_bucket_rate_per_sec:float Infinite_or_finite.t ->
  bucket_limit:int ->
  in_flight_limit:int Infinite_or_finite.t ->
  initial_bucket_level:int ->
  initial_hopper_level:int Infinite_or_finite.t ->
  t
  • parameter now

    is the reference time that other time-accepting functions will use when they adjust now. It is almost always correct to set this to Time_ns.now.

  • parameter hopper_to_bucket_rate_per_sec

    bounds the maximum rate at which tokens fall from the hopper into the bucket where they can be taken.

  • parameter bucket_limit

    bounds the number of tokens that the lower bucket can hold. This corresponds to the maximum burst in a standard token bucket setup.

  • parameter in_flight_limit

    bounds the number of tokens that can be in flight. This corresponds to a running job limit/throttle.

  • parameter initial_hopper_level

    sets the number of tokens placed into the hopper when the Limiter is created.

  • parameter initial_bucket_level

    sets the number of tokens placed into the bucket when the Limiter is created. If this amount exceeds the bucket size it will be silently limited to bucket_limit.

    These tunables can be combined in several ways:

    • to produce a simple rate limiter, where the hopper is given an infinite number of tokens and clients simply take tokens as they are delivered to the bucket.
    • to produce a rate limiter that respects jobs that are more than instantaneous. In this case initial_hopper_level + initial_bucket_level should be bounded and clients hold tokens for the duration of their work.
    • to produce a throttle that doesn't limit the rate of jobs at all, but always keeps a max of n jobs running. In this case hopper_to_bucket_rate_per_sec should be infinite and in_flight_limit should be bounded to n, the maximum number of concurrently running jobs.

    In every case above, throttling and rate limiting combine nicely when the unit of work for both is the same (e.g., one token per message). If the unit of work is different (e.g., rate limit based on a number of tokens equal to message size, but throttle based on simple message count) then a single t probably cannot be used to get the correct behavior, and two instances should be used with tokens taken from both.
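
As an illustration (not part of the interface above), the first and third configurations might be set up roughly as follows, assuming the module is in scope as Limiter after open Core (adjust the path, e.g. Core_kernel.Limiter, to your build) and that Infinite_or_finite.t has Infinite and Finite constructors:

  open Core

  (* Simple rate limiter: an infinite hopper feeds the bucket at
     100 tokens/sec, bursts are capped at 10 tokens, and nothing limits how
     many tokens are in flight. *)
  let rate_limiter =
    Limiter.create_exn
      ~now:(Time_ns.now ())
      ~hopper_to_bucket_rate_per_sec:(Limiter.Infinite_or_finite.Finite 100.)
      ~bucket_limit:10
      ~in_flight_limit:Limiter.Infinite_or_finite.Infinite
      ~initial_bucket_level:0
      ~initial_hopper_level:Limiter.Infinite_or_finite.Infinite

  (* Throttle-style configuration: tokens move at an unbounded rate, but at
     most 5 can be held (in flight) at once, so at most 5 jobs run
     concurrently. *)
  let throttle =
    Limiter.create_exn
      ~now:(Time_ns.now ())
      ~hopper_to_bucket_rate_per_sec:Limiter.Infinite_or_finite.Infinite
      ~bucket_limit:5
      ~in_flight_limit:(Limiter.Infinite_or_finite.Finite 5)
      ~initial_bucket_level:5
      ~initial_hopper_level:(Limiter.Infinite_or_finite.Finite 0)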

val tokens_may_be_available_when : t -> now:Core.Time_ns.t -> int -> Tokens_may_be_available_result.t

Returns the earliest time when the requested number of tokens could possibly be delivered. There is no guarantee that the requested number of tokens will actually be available at this time. You must call try_take to actually attempt to take the tokens.
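
A hedged sketch of how this might be used; the At constructor matched below is an assumption about Tokens_may_be_available_result.t, so check the interface before relying on it:

  open Core

  (* Ask when [amount] tokens could possibly be available; returns the
     earliest candidate time, or [None] when no such time can be given
     (e.g. the request exceeds [bucket_limit] or depends on tokens being
     returned). Even [Some time] is not a guarantee: call [try_take] then. *)
  let next_possible_time limiter ~amount =
    match
      Limiter.tokens_may_be_available_when limiter ~now:(Time_ns.now ()) amount
    with
    | Limiter.Tokens_may_be_available_result.At time -> Some time
    | _ -> None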

val try_take : t -> now:Core.Time_ns.t -> int -> Try_take_result.t

Attempts to take the given number of tokens from the bucket. try_take t ~now n succeeds iff in_bucket t ~now >= n.
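
A sketch of gating a unit of work on try_take; the constructor names of Try_take_result.t used below (Taken, Unable, Asked_for_more_than_bucket_limit) are assumptions to be checked against the interface:

  open Core

  (* Run [f] only if [cost] tokens can be taken right now. *)
  let run_if_allowed limiter ~cost ~f =
    match Limiter.try_take limiter ~now:(Time_ns.now ()) cost with
    | Limiter.Try_take_result.Taken -> Some (f ())
    | Limiter.Try_take_result.Unable ->
      (* Not enough tokens in the bucket right now. *)
      None
    | Limiter.Try_take_result.Asked_for_more_than_bucket_limit ->
      (* [cost] exceeds bucket_limit, so the request can never succeed. *)
      None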

val return_to_hopper : t -> now:Core.Time_ns.t -> int -> unit

Returns the given number of tokens to the hopper. These tokens refill the bucket, where try_take can take them again, at the fill rate given by hopper_to_bucket_rate_per_sec. Note that returning more tokens than were actually taken can cause the number of concurrent jobs to exceed in_flight_limit.
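
A sketch of the usual take/work/return pattern, assuming the Try_take_result.t constructor names from the previous sketch; Exn.protect ensures the token is returned even if the job raises:

  open Core

  (* Hold one token for the duration of a job, then return it to the hopper
     so it becomes available again at the configured rate. Returning exactly
     what was taken matters: over-returning can push concurrency past the
     in-flight limit. *)
  let with_token limiter ~f =
    match Limiter.try_take limiter ~now:(Time_ns.now ()) 1 with
    | Limiter.Try_take_result.Taken ->
      Some
        (Exn.protect ~f ~finally:(fun () ->
           Limiter.return_to_hopper limiter ~now:(Time_ns.now ()) 1))
    | _ -> None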

val try_return_to_bucket : t -> now:Core.Time_ns.t -> int -> Try_return_to_bucket_result.t

Returns the given number of tokens directly to the bucket. Unable is returned if the amount is negative, exceeds the number of tokens currently in flight, or would cause the bucket to surpass its bucket_limit.
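
A sketch of returning unused tokens straight to the bucket; the Returned_to_bucket and Unable constructor names are assumptions about Try_return_to_bucket_result.t:

  open Core

  (* Try to put [amount] unused tokens back into the bucket, reporting
     whether the limiter accepted them. *)
  let try_release limiter ~amount =
    match Limiter.try_return_to_bucket limiter ~now:(Time_ns.now ()) amount with
    | Limiter.Try_return_to_bucket_result.Returned_to_bucket -> true
    | Limiter.Try_return_to_bucket_result.Unable -> false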
