package incr_map

type 'a t

type 'a t is the type of incrementals that have a value of type 'a.

Incrementals are not covariant, i.e. we do not have +'a t -- consider, e.g. set_cutoff and get_cutoff. However, if you have types a1 and a2 where a1 is a subtype of a2, and a value t1 : a1 t, then the following builds an incremental value of type a2 t:

let t2 : a2 t = t1 >>| fun a1 -> (a1 : a1 :> a2)
val sexp_of_t : ('a -> Ppx_sexp_conv_lib.Sexp.t) -> 'a t -> Ppx_sexp_conv_lib.Sexp.t
type 'a incremental = 'a t
include Core_kernel.Invariant.S1 with type 'a t := 'a t
val invariant : 'a Base__.Invariant_intf.inv -> 'a t Base__.Invariant_intf.inv
val is_const : _ t -> bool

If is_const t then t is a constant-valued incremental. is_const (const a) is true.

val is_valid : _ t -> bool
val is_necessary : _ t -> bool

Creating incrementals

val const : 'a -> 'a t

const v returns an incremental whose value never changes. It is the same as return, but reads more clearly in many situations because it serves as a nice reminder that the incremental won't change (except possibly be invalidated).

val return : 'a -> 'a t
val map : 'a t -> f:('a -> 'b) -> 'b t

map t1 ~f returns an incremental t that maintains its value as f a, where a is the value of t1. map2, map3, ..., map9 are the generalizations to more arguments. If you need map<N> for some N > 9, it can easily be added, but also see array_fold and unordered_array_fold.

f should not create incremental nodes but this behavior is not checked; if you want to create incremental nodes, use bind. The invalidation machinery that is used with bind is not used with map.
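
For illustration, here is a minimal sketch of map2 in use; it assumes the usual instantiation module Inc = Incremental.Make () together with the Var, Observer, and stabilize facilities documented below.

module Inc = Incremental.Make ()

let () =
  let width = Inc.Var.create 3 in
  let height = Inc.Var.create 4 in
  (* [area] is recomputed only when [width] or [height] changes. *)
  let area = Inc.map2 (Inc.Var.watch width) (Inc.Var.watch height) ~f:( * ) in
  let obs = Inc.observe area in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 12);
  Inc.Var.set width 5;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 20)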

val (>>|) : 'a t -> ('a -> 'b) -> 'b t
val map2 : 'a1 t -> 'a2 t -> f:('a1 -> 'a2 -> 'b) -> 'b t
val map3 : 'a1 t -> 'a2 t -> 'a3 t -> f:('a1 -> 'a2 -> 'a3 -> 'b) -> 'b t
val map4 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'b) -> 'b t
val map5 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> 'a5 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'b) -> 'b t
val map6 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> 'a5 t -> 'a6 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'b) -> 'b t
val map7 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> 'a5 t -> 'a6 t -> 'a7 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'b) -> 'b t
val map8 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> 'a5 t -> 'a6 t -> 'a7 t -> 'a8 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'b) -> 'b t
val map9 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> 'a5 t -> 'a6 t -> 'a7 t -> 'a8 t -> 'a9 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'b) -> 'b t
val bind : 'a t -> f:('a -> 'b t) -> 'b t

bind t1 ~f returns an incremental t2 that behaves like f v, where v is the value of t1. If t1's value changes, then incremental applies f to that new value and t2 behaves like the resulting incremental.

bind can be significantly more expensive than map during stabilization, because, when its left-hand side changes, it requires modification of the incremental DAG, while map simply flows values along the DAG. Thus it is preferable to use map (and its n-ary variants above) instead of bind unless one actually needs bind's power.

bind2 t1 t2 ~f is:

bind (map2 t1 t2 ~f:(fun v1 v2 -> (v1, v2)))
  ~f:(fun (v1, v2) -> f v1 v2)

This is equivalent to bind t1 ~f:(fun v1 -> bind t2 ~f:(fun v2 -> f v1 v2)) but more efficient, because it uses one bind node rather than two. The other bind<N> functions generalize to more arguments.
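
For illustration, a minimal sketch of bind rewiring the dependency graph, again assuming an instance from Incremental.Make ().

module Inc = Incremental.Make ()

let () =
  let use_left = Inc.Var.create true in
  let left = Inc.Var.create 1 in
  let right = Inc.Var.create 2 in
  (* The graph downstream of [selected] depends only on whichever input
     [use_left] currently selects; changing it rewires the DAG. *)
  let selected =
    Inc.bind (Inc.Var.watch use_left) ~f:(fun b ->
      if b then Inc.Var.watch left else Inc.Var.watch right)
  in
  let obs = Inc.observe selected in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 1);
  Inc.Var.set use_left false;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 2)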

val (>>=) : 'a t -> ('a -> 'b t) -> 'b t
val bind2 : 'a1 t -> 'a2 t -> f:('a1 -> 'a2 -> 'b t) -> 'b t
val bind3 : 'a1 t -> 'a2 t -> 'a3 t -> f:('a1 -> 'a2 -> 'a3 -> 'b t) -> 'b t
val bind4 : 'a1 t -> 'a2 t -> 'a3 t -> 'a4 t -> f:('a1 -> 'a2 -> 'a3 -> 'a4 -> 'b t) -> 'b t
module Infix : sig ... end
val join : 'a t t -> 'a t

join t returns an incremental that behaves like the incremental that t currently holds.

val if_ : bool t -> then_:'a t -> else_:'a t -> 'a t

if_ tb ~then_ ~else_ returns an incremental t that holds the value of then_ if tb is true, and the value of else_ if tb is false. Note that t only depends on one of then_ or else_ at a time, i.e. if_ tb ~then_ ~else_ is like:

bind tb ~f:(fun b -> if b then then_ else else_)

which is not the same as:

map3 tb then_ else_ ~f:(fun b then_ else_ -> if b then then_ else else_)
val freeze : ?when_:('a -> bool) -> 'a t -> 'a t

freeze ?when_ t returns an incremental whose value is t's value v until the first stabilization in which when_ v holds, at which point the freeze node's value becomes constant and never changes again. Calling freeze t forces t to be necessary until it freezes regardless of whether the freeze node is necessary, but not thereafter (although of course t could remain necessary for other reasons). The result of freeze t, once frozen, will never be invalidated, even if t is invalidated, and even if the scope in which the freeze is created is invalidated. However, prior to when_ v becoming true, freeze t can be invalidated.
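
A minimal sketch of freeze with when_, assuming an instance from Incremental.Make (); once the predicate holds during a stabilization, later changes to the input no longer propagate.

module Inc = Incremental.Make ()

let () =
  let v = Inc.Var.create 1 in
  (* [frozen] tracks [v] until the first stabilization in which its value
     is at least 10; after that it is constant. *)
  let frozen = Inc.freeze ~when_:(fun x -> x >= 10) (Inc.Var.watch v) in
  let obs = Inc.observe frozen in
  Inc.stabilize ();
  Inc.Var.set v 10;
  Inc.stabilize ();
  Inc.Var.set v 42;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 10)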

val depend_on : 'a t -> depend_on:_ t -> 'a t

depend_on input ~depend_on returns an output whose value is the same as input's value, such that depend_on is necessary so long as output is necessary. It is like:

map2 input depend_on ~f:(fun a _ -> a)

but with a cutoff such that output's value only changes when input's value changes.

val necessary_if_alive : 'a t -> 'a t

necessary_if_alive input returns output that has the same value and cutoff as input, such that as long as output is alive, input is necessary.

val for_all : bool t array -> bool t

for_all ts returns an incremental that is true iff all ts are true.

val exists : bool t array -> bool t

exists ts returns an incremental that is true iff at least one of the ts is true.

val all : 'a t list -> 'a list t

all ts returns an incremental whose value is a list of the values of all of the ts. In any stabilization where any of the ts changes, the entire list is recreated (once all of the ts have stabilized). This is essentially an array_fold over the ts.
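
A minimal sketch of all, assuming an instance from Incremental.Make ().

module Inc = Incremental.Make ()

let () =
  let vars = List.init 3 (fun i -> Inc.Var.create i) in
  let values = Inc.all (List.map (fun v -> Inc.Var.watch v) vars) in
  let obs = Inc.observe values in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = [ 0; 1; 2 ])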

Array folds and sums

val array_fold : 'a t array -> init:'b -> f:('b -> 'a -> 'b) -> 'b t

array_fold ts ~init ~f creates an incremental t whose value is:

Array.fold ts ~init ~f:(fun ac t -> f ac (value t))

In a stabilization during which any of the ts changes, the entire fold will be computed once all of the ts have been computed.
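
A minimal sketch of array_fold maintaining a product, assuming an instance from Incremental.Make ().

module Inc = Incremental.Make ()

let () =
  let vars = Array.init 4 (fun i -> Inc.Var.create (i + 1)) in
  let ts = Array.map (fun v -> Inc.Var.watch v) vars in
  (* The whole fold is recomputed whenever any input changes. *)
  let product = Inc.array_fold ts ~init:1 ~f:( * ) in
  let obs = Inc.observe product in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 24)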

val reduce_balanced : 'a t array -> f:('a -> 'b) -> reduce:('b -> 'b -> 'b) -> 'b t option

reduce_balanced ts ~f ~reduce does a fold-like operation over ts. Unlike array_fold, the operation will be computed in O(min(n, k * log(n))) time, where n is the size of ts and k is the number of elements of ts that have changed since the last stabilization.

Generally, if most or all of ts are changing between stabilizations, using array_fold will have better constant factors.

The reduce argument must be an associative operation: reduce a (reduce b c) = (reduce (reduce a b) c).

None is returned upon supplying an empty array.
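
A minimal sketch of reduce_balanced maintaining a maximum, assuming an instance from Incremental.Make (); max is associative, as required of reduce.

module Inc = Incremental.Make ()

let () =
  let vars = Array.init 5 (fun i -> Inc.Var.create i) in
  let ts = Array.map (fun v -> Inc.Var.watch v) vars in
  (* Only O(log n) reduce steps rerun when a single input changes. *)
  match Inc.reduce_balanced ts ~f:(fun x -> x) ~reduce:max with
  | None -> assert false (* only possible for an empty array *)
  | Some maximum ->
    let obs = Inc.observe maximum in
    Inc.stabilize ();
    assert (Inc.Observer.value_exn obs = 4)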

val unordered_array_fold : ?full_compute_every_n_changes:int -> 'a t array -> init:'b -> f:('b -> 'a -> 'b) -> f_inverse:('b -> 'a -> 'b) -> 'b t

unordered_array_fold ts ~init ~f ~f_inverse folds over the ts. Unlike array_fold, the fold will be computed in time proportional to the number of ts that change rather than the number of ts. In a stabilization, for each t in ts that changes from old_value to new_value, the value of the unordered-array fold will change from b to f (f_inverse b old_value) new_value. The t's that change may take effect in any order.

If repeated changes might accumulate error, one can cause the fold to be fully computed after every full_compute_every_n_changes changes. If you do not supply full_compute_every_n_changes, then full computes will never happen after the initial one.

opt_unordered_array_fold is like unordered_array_fold, except that its result is Some iff all its inputs are Some.
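
A minimal sketch of unordered_array_fold maintaining a count, assuming an instance from Incremental.Make (); per change, only f_inverse on the old value and f on the new value are run.

module Inc = Incremental.Make ()

let () =
  let flags = Array.init 100 (fun _ -> Inc.Var.create false) in
  let ts = Array.map (fun v -> Inc.Var.watch v) flags in
  (* Count of [true] flags, maintained incrementally. *)
  let count_true =
    Inc.unordered_array_fold ts ~init:0
      ~f:(fun acc b -> if b then acc + 1 else acc)
      ~f_inverse:(fun acc b -> if b then acc - 1 else acc)
  in
  let obs = Inc.observe count_true in
  Inc.stabilize ();
  Inc.Var.set flags.(3) true;
  Inc.Var.set flags.(7) true;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 2)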

val opt_unordered_array_fold : ?full_compute_every_n_changes:int -> 'a option t array -> init:'b -> f:('b -> 'a -> 'b) -> f_inverse:('b -> 'a -> 'b) -> 'b option t
val sum : ?full_compute_every_n_changes:int -> 'a t array -> zero:'a -> add:('a -> 'a -> 'a) -> sub:('a -> 'a -> 'a) -> 'a t

sum ts ~zero ~add ~sub ?full_compute_every_n_changes returns an incremental that maintains the sum of the ts. It uses unordered_array_fold so that the work required to maintain the sum is proportional to the number of ts that change (i.e. one sub and one add per change).

opt_sum is like sum, except that its result is Some iff all its inputs are Some.

val opt_sum : ?full_compute_every_n_changes:int -> 'a option t array -> zero:'a -> add:('a -> 'a -> 'a) -> sub:('a -> 'a -> 'a) -> 'a option t
val sum_int : int t array -> int t

sum_int ts = sum ts ~zero:0 ~add:(+) ~sub:(-)

val sum_float : float t array -> float t

sum_float ts is:

sum ts ~zero:0.0 ~add:(+.) ~sub:(-.)
  ~full_compute_every_n_changes:(Array.length ts)

This uses sum for fast update, with a full recompute every length ts changes to cut down on floating-point error.

Variables

module Var : sig ... end

Observers

module Observer : sig ... end

An observer lets one get the value of an incremental, either by asking directly for the value or installing an on-update handler to run when the incremental's value changes.

val observe : ?should_finalize:bool -> 'a t -> 'a Observer.t

observe t returns a new observer for t. observe raises if called during stabilization.

By default, an observer has a finalizer that calls disallow_future_use when the observer is no longer referenced. One can use ~should_finalize:false to cause the finalizer to not be created, in which case the observer will live until disallow_future_use is explicitly called.
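
A minimal sketch of the observer life cycle, assuming an instance from Incremental.Make ().

module Inc = Incremental.Make ()

let () =
  let v = Inc.Var.create 0 in
  let doubled = Inc.map (Inc.Var.watch v) ~f:(fun x -> 2 * x) in
  let obs = Inc.observe doubled in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 0);
  Inc.Var.set v 21;
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 42);
  (* Release the observer once it is no longer needed. *)
  Inc.Observer.disallow_future_use obs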

module Update : sig ... end

val on_update : 'a t -> f:('a Update.t -> unit) -> unit

on_update t ~f is similar to Observer.on_update_exn, but it does not cause t to be necessary. Instead of the Initialized update, there are updates for when a node becomes Necessary or Unnecessary. The allowable sequences of Update.t's that can be supplied to a particular f follow a state diagram.

Stabilization

val stabilize : unit -> unit

stabilize () recomputes all incrementals that are necessary and stale. I.e. it propagates changes from variables that have been set to the necessary incrementals that depend on them, stopping propagation as per cutoffs.

val am_stabilizing : unit -> bool

Cutoffs

module Cutoff : sig ... end

An 'a Cutoff.t is a function that returns true if propagation of changes should be cut off at a node, based on the old value and the (possible) new value of the node.

val set_cutoff : 'a t -> 'a Cutoff.t -> unit

set_cutoff t cutoff replaces the current cutoff function for t with cutoff. cutoff will be called any time t is recomputed, with old_value being the value of t before the recomputation and new_value being the value just computed. If cutoff ~old_value ~new_value, then t's value will remain old_value (new_value is discarded) and anything depending on t will not be recomputed (at least not because of t). If not (cutoff ~old_value ~new_value), then t's value will become new_value, and all nodes depending on t will be recomputed.

A reasonable choice for cutoff is an equality function on 'a.

The default cutoff for every node is phys_equal. For example, this means that a unit incremental would only fire once; to disable this, use set_cutoff t Cutoff.never.
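
A minimal sketch of set_cutoff, assuming an instance from Incremental.Make (); it also assumes Cutoff.of_equal, which builds a cutoff from an equality predicate.

module Inc = Incremental.Make ()

let () =
  let v = Inc.Var.create "hello" in
  let t = Inc.map (Inc.Var.watch v) ~f:String.uppercase_ascii in
  (* With the default phys_equal cutoff, an equal but freshly allocated string
     would still propagate; cut off on structural equality instead.
     [Cutoff.of_equal] is assumed here. *)
  Inc.set_cutoff t (Inc.Cutoff.of_equal String.equal);
  let obs = Inc.observe t in
  Inc.stabilize ();
  assert (String.equal (Inc.Observer.value_exn obs) "HELLO")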

val get_cutoff : 'a t -> 'a Cutoff.t
module Scope : sig ... end
val lazy_from_fun : (unit -> 'a) -> 'a Core_kernel.Lazy.t

lazy_from_fun f is like Lazy.from_fun f, except that the nodes created by f will be created in the scope in which lazy_from_fun was called, rather than in the scope of the piece of code that first forces the resulting lazy. Not using this function when defining lazy values is likely to result in exceptions being thrown by incremental. As a rule of thumb, all lazy e that might create incremental nodes should be replaced by lazy_from_fun (fun () -> e).

As usual with Lazy, if f raises, then that exception will be raised when calling Lazy.force.
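
A minimal sketch of lazy_from_fun, assuming an instance from Incremental.Make (); the node is created in the scope in effect at this point rather than in the scope of the first force.

module Inc = Incremental.Make ()

let () =
  let x = Inc.Var.create 1 in
  (* The map node built inside the thunk belongs to the current scope. *)
  let expensive =
    Inc.lazy_from_fun (fun () -> Inc.map (Inc.Var.watch x) ~f:(fun i -> i * i))
  in
  let obs = Inc.observe (Lazy.force expensive) in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 1)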

val default_hash_table_initial_size : int

memoize_fun f hashable returns a function m that is a memoized version of f that will run f a on each distinct a that m is applied to, memoize the result (in a hash table), and thereafter for a, m will return the memoized result.

When m is called, it uses Scope.within to run f in the scope that was in effect when memoize_fun f was called. This is essential to correctly capture the dependence of nodes that f creates on values that f is closed over, which may in turn depend on the left-hand sides of binds in the scope in effect when memoize_fun f was called. Furthermore, nodes that f creates do not depend on the scope in effect when m is called.

memoize_fun_by_key is a generalization that allows one to memoize over values that contain a uniquely identifying key, but also have other data.

val memoize_fun : ?initial_size:int -> 'a Core_kernel.Hashtbl.Hashable.t -> ('a -> 'b) -> ('a -> 'b) Core_kernel.Staged.t
val memoize_fun_by_key : ?initial_size:int -> 'key Core_kernel.Hashtbl.Hashable.t -> ('a -> 'key) -> ('a -> 'b) -> ('a -> 'b) Core_kernel.Staged.t
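
A minimal sketch of memoize_fun, assuming an instance from Incremental.Make () and Core_kernel's Int.hashable and Staged.unstage.

module Inc = Incremental.Make ()

let () =
  let v = Inc.Var.create 0 in
  (* One node per distinct key, created in the scope in effect here. *)
  let node_for_offset =
    Core_kernel.Staged.unstage
      (Inc.memoize_fun Core_kernel.Int.hashable (fun offset ->
         Inc.map (Inc.Var.watch v) ~f:(fun x -> x + offset)))
  in
  (* The same key yields the same memoized incremental. *)
  assert (Core_kernel.phys_equal (node_for_offset 10) (node_for_offset 10));
  let obs = Inc.observe (node_for_offset 10) in
  Inc.stabilize ();
  assert (Inc.Observer.value_exn obs = 10)
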
val user_info : _ t -> Core_kernel.Info.t option

For debugging purposes, one can store an arbitrary Info.t in a node. This will be displayed as part of a node in error messages.

val set_user_info : _ t -> Core_kernel.Info.t option -> unit
module Expert : sig ... end

A low-level, experimental interface to incremental. This is useful when you need more control over the dependency graph, for performance reasons. It comes at the cost that it is much harder to use correctly; see the Expert module for what one can do with an expert node.

module State : sig ... end
module Packed : sig ... end
val pack : _ t -> Packed.t
val save_dot : string -> unit

save_dot file outputs to file the DAG of all necessary nodes, in dot format.

val keep_node_creation_backtrace : bool Core_kernel.ref

If keep_node_creation_backtrace, then whenever a new node is created, incremental will call Backtrace.get and store the result in the node. The backtrace will then appear in subsequent error messages when the node is pretty printed.

module Let_syntax : sig ... end

This Let_syntax allows you to write incremental expressions using ppx_let's let%map and let%bind notation.
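
A minimal sketch, assuming an instance from Incremental.Make () and the ppx_let preprocessor; only let%map is used, since the full contents of Let_syntax are not shown on this page.

module Inc = Incremental.Make ()

let total_price =
  let price = Inc.Var.create 10.0 in
  let quantity = Inc.Var.create 3.0 in
  let open Inc.Let_syntax in
  (* Desugars to a map2 over the two inputs. *)
  let%map p = Inc.Var.watch price
  and q = Inc.Var.watch quantity in
  p *. q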
