#### incremental

Library for incremental computations
`module State : sig ... end`

A `State.t` holds shared state used by all the incremental functions.

`type ('a, 'w) t`

`type ('a,'w) t` is the type of incrementals that have a value of type `'a`, with a state witness of type `'w`.

Incrementals are not covariant, i.e. we do not have `(+'a, _) t` -- consider, e.g. `set_cutoff` and `get_cutoff`. However, if you have types `a1` and `a2` where `a1` is a subtype of `a2`, and a value `t1 : a1 t`, then the following builds an incremental value of type `a2 t`:

``let t2 : a2 t = t1 >>| fun a1 -> (a1 : a1 :> a2)``
```val sexp_of_t : ( 'a -> Sexplib0.Sexp.t ) -> ( 'w -> Sexplib0.Sexp.t ) -> ( 'a, 'w ) t -> Sexplib0.Sexp.t```
`type ('a, 'w) incremental := ( 'a, 'w ) t`
`include Core.Invariant.S2 with type ('a, 'w) t := ( 'a, 'w ) t`
`val invariant : ( 'a -> unit ) -> ( 'b -> unit ) -> ( 'a, 'b ) t -> unit`
`val state : ( _, 'w ) t -> 'w State.t`
`val is_const : ( _, _ ) t -> bool`

If `is_const t` then `t` is a constant-valued incremental. `is_const (const a)` is true.

`val is_valid : ( _, _ ) t -> bool`
`val is_necessary : ( _, _ ) t -> bool`

## Creating incrementals

`val const : 'w State.t -> 'a -> ( 'a, 'w ) t`

`const state a` returns an incremental whose value never changes. It is the same as `return`, but reads more clearly in many situations because it serves as a nice reminder that the incremental won't change (except possibly being invalidated).

`val return : 'w State.t -> 'a -> ( 'a, 'w ) t`
`val map : ( 'a, 'w ) t -> f:( 'a -> 'b ) -> ( 'b, 'w ) t`

`map t1 ~f` returns an incremental `t` that maintains its value as `f a`, where `a` is the value of `t1`. `map2`, `map3`, ..., `map15` are the generalizations to more arguments. If you need a map<N> for some N > 15, it can easily be added, but also see `array_fold` and `unordered_array_fold`.

`f` should not create incremental nodes but this behavior is not checked; if you want to create incremental nodes, use `bind`. The invalidation machinery that is used with `bind` is not used with `map`.
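A minimal sketch of `map` in use, assuming the `incremental` library is installed, and using the state-free interface produced by `Incremental.Make` (documented below) together with the `Var` and `Observer` modules:

```ocaml
(* Instantiate a fresh incremental graph. *)
module I = Incremental.Make ()

(* A Var is the way to feed input into the graph. *)
let x = I.Var.create 10

(* [doubled] recomputes only when [x]'s value changes. *)
let doubled = I.map (I.Var.watch x) ~f:(fun v -> v * 2)

let obs = I.observe doubled
let () = I.stabilize ()
let v1 = I.Observer.value_exn obs (* 20 *)
let () = I.Var.set x 21
let () = I.stabilize ()
let v2 = I.Observer.value_exn obs (* 42 *)
```

Note that `f` runs only during stabilizations in which `x` has changed; nothing recomputes between calls to `stabilize`.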

`val (>>|) : ( 'a, 'w ) t -> ( 'a -> 'b ) -> ( 'b, 'w ) t`
```val map2 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> f:( 'a1 -> 'a2 -> 'b ) -> ( 'b, 'w ) t```
```val map3 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'b ) -> ( 'b, 'w ) t```
```val map4 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'b ) -> ( 'b, 'w ) t```
```val map5 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'b ) -> ( 'b, 'w ) t```
```val map6 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'b ) -> ( 'b, 'w ) t```
```val map7 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'b ) -> ( 'b, 'w ) t```
```val map8 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'b ) -> ( 'b, 'w ) t```
```val map9 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'b ) -> ( 'b, 'w ) t```
```val map10 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> ( 'a10, 'w ) t -> f: ( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'a10 -> 'b ) -> ( 'b, 'w ) t```
```val map11 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> ( 'a10, 'w ) t -> ( 'a11, 'w ) t -> f: ( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'a10 -> 'a11 -> 'b ) -> ( 'b, 'w ) t```
```val map12 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> ( 'a10, 'w ) t -> ( 'a11, 'w ) t -> ( 'a12, 'w ) t -> f: ( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'a10 -> 'a11 -> 'a12 -> 'b ) -> ( 'b, 'w ) t```
```val map13 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> ( 'a10, 'w ) t -> ( 'a11, 'w ) t -> ( 'a12, 'w ) t -> ( 'a13, 'w ) t -> f: ( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'a10 -> 'a11 -> 'a12 -> 'a13 -> 'b ) -> ( 'b, 'w ) t```
```val map14 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> ( 'a10, 'w ) t -> ( 'a11, 'w ) t -> ( 'a12, 'w ) t -> ( 'a13, 'w ) t -> ( 'a14, 'w ) t -> f: ( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'a10 -> 'a11 -> 'a12 -> 'a13 -> 'a14 -> 'b ) -> ( 'b, 'w ) t```
```val map15 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> ( 'a5, 'w ) t -> ( 'a6, 'w ) t -> ( 'a7, 'w ) t -> ( 'a8, 'w ) t -> ( 'a9, 'w ) t -> ( 'a10, 'w ) t -> ( 'a11, 'w ) t -> ( 'a12, 'w ) t -> ( 'a13, 'w ) t -> ( 'a14, 'w ) t -> ( 'a15, 'w ) t -> f: ( 'a1 -> 'a2 -> 'a3 -> 'a4 -> 'a5 -> 'a6 -> 'a7 -> 'a8 -> 'a9 -> 'a10 -> 'a11 -> 'a12 -> 'a13 -> 'a14 -> 'a15 -> 'b ) -> ( 'b, 'w ) t```
`val bind : ( 'a, 'w ) t -> f:( 'a -> ( 'b, 'w ) t ) -> ( 'b, 'w ) t`

`bind t1 ~f` returns an incremental `t2` that behaves like `f v`, where `v` is the value of `t1`. If `t1`'s value changes, then Incremental applies `f` to that new value, and `t2` behaves like the resulting incremental.

`bind` can be significantly more expensive than `map` during stabilization, because, when its left-hand side changes, it requires modification of the incremental DAG, while `map` simply flows values along the DAG. Thus it is preferable to use `map` (and its n-ary variants above) instead of `bind` unless one actually needs `bind`'s power.

`bind2 t1 t2 ~f` is:

```
bind (map2 t1 t2 ~f:(fun v1 v2 -> (v1, v2)))
  ~f:(fun (v1, v2) -> f v1 v2)
```

This is equivalent to `bind t1 ~f:(fun v1 -> bind t2 ~f:(fun v2 -> f v1 v2))` but more efficient, because it uses one bind node rather than two. The other `bind<N>` functions generalize to more arguments.
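A hedged sketch of where `bind` earns its cost, assuming the `incremental` library and the state-free `Incremental.Make` interface; the variable names are illustrative:

```ocaml
module I = Incremental.Make ()

let use_fahrenheit = I.Var.create false
let celsius = I.Var.create 100.

(* The right-hand side of the bind is rebuilt when [use_fahrenheit]
   changes, so [reading] depends on a different subgraph in each case. *)
let reading =
  I.bind (I.Var.watch use_fahrenheit) ~f:(fun f ->
    if f
    then I.map (I.Var.watch celsius) ~f:(fun c -> (c *. 9. /. 5.) +. 32.)
    else I.Var.watch celsius)

let obs = I.observe reading
let () = I.stabilize ()
let r1 = I.Observer.value_exn obs (* 100. *)
let () = I.Var.set use_fahrenheit true
let () = I.stabilize ()
let r2 = I.Observer.value_exn obs (* 212. *)
```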

`val (>>=) : ( 'a, 'w ) t -> ( 'a -> ( 'b, 'w ) t ) -> ( 'b, 'w ) t`
```val bind2 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> f:( 'a1 -> 'a2 -> ( 'b, 'w ) t ) -> ( 'b, 'w ) t```
```val bind3 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> ( 'b, 'w ) t ) -> ( 'b, 'w ) t```
```val bind4 : ( 'a1, 'w ) t -> ( 'a2, 'w ) t -> ( 'a3, 'w ) t -> ( 'a4, 'w ) t -> f:( 'a1 -> 'a2 -> 'a3 -> 'a4 -> ( 'b, 'w ) t ) -> ( 'b, 'w ) t```
`module Infix : sig ... end`
`val join : ( ( 'a, 'w ) t, 'w ) t -> ( 'a, 'w ) t`

`join t` returns an incremental that behaves like the incremental that `t` currently holds.

```val if_ : ( bool, 'w ) t -> then_:( 'a, 'w ) t -> else_:( 'a, 'w ) t -> ( 'a, 'w ) t```

`if_ tb ~then_ ~else_` returns an incremental `t` that holds the value of `then_` if `tb` is true, and the value of `else_` if `tb` is false. Note that `t` only depends on one of `then_` or `else_` at a time, i.e. `if_ tb ~then_ ~else_` is like:

``bind tb ~f:(fun b -> if b then then_ else else_)``

which is not the same as:

``map3 tb then_ else_ ~f:(fun b then_ else_ -> if b then then_ else else_)``
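A small sketch of `if_`, assuming the `incremental` library and the state-free `Incremental.Make` interface:

```ocaml
module I = Incremental.Make ()

let b = I.Var.create true
(* [r] depends only on the selected branch at any given time. *)
let r = I.if_ (I.Var.watch b) ~then_:(I.const 1) ~else_:(I.const 2)
let obs = I.observe r
let () = I.stabilize ()
let v1 = I.Observer.value_exn obs (* 1 *)
let () = I.Var.set b false
let () = I.stabilize ()
let v2 = I.Observer.value_exn obs (* 2 *)
```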
`val freeze : ?when_:( 'a -> bool ) -> ( 'a, 'w ) t -> ( 'a, 'w ) t`

`freeze ?when_ t` returns an incremental whose value is `t`'s value `v` until the first stabilization in which `when_ v` holds, at which point the freeze node's value becomes constant and never changes again. Calling `freeze t` forces `t` to be necessary until it freezes regardless of whether the freeze node is necessary, but not thereafter (although of course `t` could remain necessary for other reasons). The result of `freeze t`, once frozen, will never be invalidated, even if `t` is invalidated, and even if the scope in which the freeze is created is invalidated. However, prior to `when_ v` becoming true, `freeze t` can be invalidated.
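The freezing behavior can be sketched as follows, assuming the `incremental` library and the state-free `Incremental.Make` interface:

```ocaml
module I = Incremental.Make ()

let x = I.Var.create 0
(* Track [x] until its value becomes positive, then freeze forever. *)
let frozen = I.freeze ~when_:(fun v -> v > 0) (I.Var.watch x)
let obs = I.observe frozen
let () = I.stabilize ()
let () = I.Var.set x 5
let () = I.stabilize () (* [when_ 5] holds: the node freezes at 5 *)
let () = I.Var.set x 10
let () = I.stabilize ()
let v = I.Observer.value_exn obs (* still 5 *)
```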

`val depend_on : ( 'a, 'w ) t -> depend_on:( _, 'w ) t -> ( 'a, 'w ) t`

`depend_on input ~depend_on` returns an `output` whose value is the same as `input`'s value, such that `depend_on` is necessary so long as `output` is necessary. It is like:

``map2 input depend_on ~f:(fun a _ -> a)``

but with a cutoff such that `output`'s value only changes when `input`'s value changes.

`val necessary_if_alive : ( 'a, 'w ) t -> ( 'a, 'w ) t`

`necessary_if_alive input` returns `output` that has the same value and cutoff as `input`, such that as long as `output` is alive, `input` is necessary.

`val for_all : 'w State.t -> ( bool, 'w ) t array -> ( bool, 'w ) t`

`for_all ts` returns an incremental that is `true` iff all `ts` are `true`.

`val exists : 'w State.t -> ( bool, 'w ) t array -> ( bool, 'w ) t`

`exists ts` returns an incremental that is `true` iff at least one of the `ts` is `true`.

`val all : 'w State.t -> ( 'a, 'w ) t list -> ( 'a list, 'w ) t`

`all ts` returns an incremental whose value is a list of the values of all of the `ts`. In any stabilization where any of the `ts` changes, the entire list is recreated (once all of the `ts` have stabilized). This is essentially an `array_fold` over the `ts`.

`val both : ( 'a, 'w ) t -> ( 'b, 'w ) t -> ( 'a * 'b, 'w ) t`

`both t1 t2` returns an incremental whose value is the pair of the values of `t1` and `t2`. Both `map (both t1 t2) ~f` and `map2 t1 t2 ~f:(fun a1 a2 -> f (a1, a2))` return an incremental with the same behavior, but the `map2` version is more efficient, because it creates a single node, whereas the `both` version creates two nodes.
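A sketch of the preferred `map2` form, assuming the `incremental` library and the state-free `Incremental.Make` interface:

```ocaml
module I = Incremental.Make ()

let x = I.Var.create 2
let y = I.Var.create 3
(* One node, rather than the two nodes [map (both x y) ~f] would create. *)
let product = I.map2 (I.Var.watch x) (I.Var.watch y) ~f:( * )
let obs = I.observe product
let () = I.stabilize ()
let v = I.Observer.value_exn obs (* 6 *)
```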

## Array folds and sums

```val array_fold : 'w State.t -> ( 'a, 'w ) t array -> init:'b -> f:( 'b -> 'a -> 'b ) -> ( 'b, 'w ) t```

`array_fold ts ~init ~f` creates an incremental `t` whose value is:

``Array.fold ts ~init ~f:(fun ac t -> f ac (value t))``

In a stabilization during which any of the `ts` changes, the entire fold will be computed once all of the `ts` have been computed.
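A sketch of `array_fold`, assuming the `incremental` library and the state-free `Incremental.Make` interface:

```ocaml
module I = Incremental.Make ()

let vars = Array.init 3 (fun i -> I.Var.create (i + 1)) (* 1, 2, 3 *)
let ts = Array.map I.Var.watch vars
(* The whole fold reruns in any stabilization where some element changes. *)
let total = I.array_fold ts ~init:0 ~f:( + )
let obs = I.observe total
let () = I.stabilize ()
let v1 = I.Observer.value_exn obs (* 6 *)
let () = I.Var.set vars.(0) 10
let () = I.stabilize ()
let v2 = I.Observer.value_exn obs (* 15 *)
```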

```val reduce_balanced : 'w State.t -> ( 'a, 'w ) t array -> f:( 'a -> 'b ) -> reduce:( 'b -> 'b -> 'b ) -> ( 'b, 'w ) t option```

`reduce_balanced ts ~f ~reduce` does a fold-like operation over `ts`. Unlike `array_fold`, the operation will be computed in `O(min(n, k * log(n)))` time, where `n` is the size of `ts` and `k` is the number of elements of `ts` that have changed since the last stabilization.

Generally, if most or all of `ts` are changing between stabilizations, using `array_fold` will have better constant factors.

The `reduce` argument must be an associative operation: `reduce a (reduce b c) = (reduce (reduce a b) c)`.

`None` is returned upon supplying an empty array.
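For example, an incrementally maintained maximum, assuming the `incremental` library and the state-free `Incremental.Make` interface; `max` is associative, so each single-element change costs `O(log n)` calls to `reduce`:

```ocaml
module I = Incremental.Make ()

let vars = Array.init 4 (fun i -> I.Var.create (i + 1)) (* 1, 2, 3, 4 *)
let ts = Array.map I.Var.watch vars
let max_t =
  match I.reduce_balanced ts ~f:(fun x -> x) ~reduce:max with
  | Some t -> t
  | None -> assert false (* unreachable: [ts] is non-empty *)
let obs = I.observe max_t
let () = I.stabilize ()
let v1 = I.Observer.value_exn obs (* 4 *)
let () = I.Var.set vars.(0) 99
let () = I.stabilize ()
let v2 = I.Observer.value_exn obs (* 99 *)
```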

`module Unordered_array_fold_update : sig ... end`
```val unordered_array_fold : 'w State.t -> ?full_compute_every_n_changes:int -> ( 'a, 'w ) t array -> init:'b -> f:( 'b -> 'a -> 'b ) -> update:( 'a, 'b ) Unordered_array_fold_update.t -> ( 'b, 'w ) t```

`unordered_array_fold ts ~init ~f ~update` folds over the `ts`. Unlike `array_fold`, the fold will be computed in time proportional to the number of `ts` that change rather than the number of `ts`. In a stabilization, for each `t` in `ts` that changes from `old_value` to `new_value`, the value of the unordered-array fold, `b`, will change depending on `update`:

• `F_inverse f_inverse`: from `b` to `f (f_inverse b old_value) new_value`
• `Update update`: from `b` to `update b ~old_value ~new_value`

The `t`'s that change may take effect in any order.

If repeated changes might accumulate error, one can cause the fold to be fully computed after every `full_compute_every_n_changes` changes. If you do not supply `full_compute_every_n_changes`, then full computes will never happen after the initial one.

```val opt_unordered_array_fold : 'w State.t -> ?full_compute_every_n_changes:int -> ( 'a option, 'w ) t array -> init:'b -> f:( 'b -> 'a -> 'b ) -> f_inverse:( 'b -> 'a -> 'b ) -> ( 'b option, 'w ) t```

`opt_unordered_array_fold` is like `unordered_array_fold`, except that its result is `Some` iff all its inputs are `Some`.

```val sum : 'w State.t -> ?full_compute_every_n_changes:int -> ( 'a, 'w ) t array -> zero:'a -> add:( 'a -> 'a -> 'a ) -> sub:( 'a -> 'a -> 'a ) -> ( 'a, 'w ) t```

`sum ts ~zero ~add ~sub ?full_compute_every_n_changes` returns an incremental that maintains the sum of the `ts`. It uses `unordered_array_fold` so that the work required to maintain the sum is proportional to the number of `ts` that change (i.e. one `sub` and one `add` per change).
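A sketch of `sum`, assuming the `incremental` library and the state-free `Incremental.Make` interface:

```ocaml
module I = Incremental.Make ()

let vars = Array.init 4 (fun i -> I.Var.create i) (* 0, 1, 2, 3 *)
let ts = Array.map I.Var.watch vars
(* Each change costs one [sub] of the old value and one [add] of the new. *)
let total = I.sum ts ~zero:0 ~add:( + ) ~sub:( - )
let obs = I.observe total
let () = I.stabilize ()
let v1 = I.Observer.value_exn obs (* 6 *)
let () = I.Var.set vars.(3) 100
let () = I.stabilize ()
let v2 = I.Observer.value_exn obs (* 103 *)
```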

`opt_sum` is like `sum`, except that its result is `Some` iff all its inputs are `Some`.

```val opt_sum : 'w State.t -> ?full_compute_every_n_changes:int -> ( 'a option, 'w ) t array -> zero:'a -> add:( 'a -> 'a -> 'a ) -> sub:( 'a -> 'a -> 'a ) -> ( 'a option, 'w ) t```
`val sum_int : 'w State.t -> ( int, 'w ) t array -> ( int, 'w ) t`

`sum_int ts` = `sum ts ~zero:0 ~add:(+) ~sub:(-)`

`val sum_float : 'w State.t -> ( float, 'w ) t array -> ( float, 'w ) t`

`sum_float ts` is:

```
sum ts ~zero:0.0 ~add:(+.) ~sub:(-.)
  ~full_compute_every_n_changes:(Array.length ts)
```

This uses `sum` for fast update, with a full recompute every `length ts` changes to cut down on floating-point error.

`module Scope : sig ... end`

The stack of bind left-hand sides currently in effect is the current "scope". In order to create a function in one scope and apply it in a different scope, one must manually save and restore the scope. Essentially, the scope should be part of every closure that constructs incrementals. For example:

`module Var : sig ... end`
`module Observer : sig ... end`
`val observe : ?should_finalize:bool -> ( 'a, 'w ) t -> ( 'a, 'w ) Observer.t`

`observe t` returns a new observer for `t`. `observe` raises if called during stabilization.

By default, an observer has a finalizer that calls `disallow_future_use` when the observer is no longer referenced. One can use `~should_finalize:false` to cause the finalizer to not be created, in which case the observer will live until `disallow_future_use` is explicitly called.

`module Update : sig ... end`
`val on_update : ( 'a, _ ) t -> f:( 'a Update.t -> unit ) -> unit`

`on_update t ~f` is similar to `Observer.on_update_exn`, but it does not cause `t` to be necessary. Instead of the `Initialized` update, there are updates for when a node becomes `Necessary` or `Unnecessary`. Here is a state diagram for the allowable sequences of `Update.t`'s that can be supplied to a particular `f`:

```        /-----------------------------------------------------\
|                 /                                   |
|                 |                                   v
Start ------> Necessary ----------> Changed ------> Invalidated
|                | ^             |  ^  |              ^
|                | |   /---------/  \--/              |
|                v |   v                              |
\----------> Unnecessary -----------------------------/```

If `t` gets a new value during a stabilization but is unnecessary at the end of it, `f` will _not_ be called with `Changed`, but with `Unnecessary` if allowed by the transition diagram. I.e. if the prior call to `f` was with `Necessary` or `Changed`, `f` will be called with `Unnecessary`. If the prior call to `f` was with `Invalidated` or `Unnecessary`, then `f` will not be called.

One should typically use `Observer.on_update_exn`, unless the `Unnecessary` updates are needed.

## Stabilization

`val stabilize : _ State.t -> unit`

`stabilize state` recomputes all incrementals that are necessary and stale. I.e. it propagates changes from variables that have been set to the necessary incrementals that depend on them, stopping propagation as per cutoffs.

`val am_stabilizing : _ State.t -> bool`

## Cutoffs

`module Cutoff : sig ... end`

An `'a Cutoff.t` is a function that returns `true` if propagation of changes should be cut off at a node, based on the old value and the (possible) new value of the node.

`val set_cutoff : ( 'a, _ ) t -> 'a Cutoff.t -> unit`

`set_cutoff t cutoff` replaces the current cutoff function for `t` with `cutoff`. `cutoff` will be called any time `t` is recomputed, with `old_value` being the value of `t` before the recomputation and `new_value` being the value just computed. If `cutoff ~old_value ~new_value`, then `t`'s value will remain as `old_value` (`new_value` is discarded) and anything depending on `t` will not be recomputed (at least not because of `t`). If `not (cutoff ~old_value ~new_value)`, then `t`'s value will become `new_value`, and all nodes depending on `t` will be recomputed.

A reasonable choice for `cutoff` is an equality function on `'a`.

The default cutoff for every node is `phys_equal`. For example, this means that a `unit incremental` would only fire once; to disable this, use `set_cutoff t Cutoff.never`.
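A sketch of a custom cutoff, assuming the `incremental` library and the state-free `Incremental.Make` interface; the 0.5 threshold is an arbitrary illustrative choice:

```ocaml
module I = Incremental.Make ()

let x = I.Var.create 1.0
let y = I.map (I.Var.watch x) ~f:(fun v -> v)

(* Stop propagation when the value moves by less than 0.5. *)
let () =
  I.set_cutoff y
    (I.Cutoff.create (fun ~old_value ~new_value ->
       Float.abs (new_value -. old_value) < 0.5))

let count = ref 0
let z = I.map y ~f:(fun v -> incr count; v)
let obs = I.observe z
let () = I.stabilize ()
let () = I.Var.set x 1.1 (* below the threshold: [z] is not recomputed *)
let () = I.stabilize ()
let () = I.Var.set x 2.0 (* |2.0 -. 1.0| >= 0.5: the change propagates *)
let () = I.stabilize ()
let recomputes = !count  (* 2: the initial compute and the last change *)
```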

`val get_cutoff : ( 'a, _ ) t -> 'a Cutoff.t`
`val lazy_from_fun : _ State.t -> ( unit -> 'a ) -> 'a Core.Lazy.t`

`lazy_from_fun f` is like `Lazy.from_fun f`, except that the nodes created by `f` will be created in the scope in which `lazy_from_fun` was called, rather than in the scope of the piece of code that first forces the resulting lazy. Not using this function when defining lazy values is likely to result in exceptions being thrown by incremental. As a rule of thumb, all `lazy e` that might create incremental nodes should be replaced by `lazy_from_fun (fun () -> e)`.

As usual with `Lazy`, if `f` raises, then that exception will be raised when calling `Lazy.force`.

`val default_hash_table_initial_size : int`
```val memoize_fun : ?initial_size:int -> _ State.t -> 'a Base.Hashtbl.Key.t -> ( 'a -> 'b ) -> ( 'a -> 'b ) Core.Staged.t```

`memoize_fun f hashable` returns a memoized version `m` of `f`: the first time `m` is applied to a given `a`, it runs `f a` and stores the result in a hash table; thereafter, applying `m` to `a` returns the stored result.

When `m` is called, it uses `Scope.within` to run `f` in the scope that was in effect when `memoize_fun f` was called. This is essential to correctly capture the dependence of nodes that `f` creates on values that `f` is closed over, which may in turn depend on the left-hand sides of binds in the scope in effect when `memoize_fun f` was called. Furthermore, nodes that `f` creates do not depend on the scope in effect when `m` is called.
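A sketch of `memoize_fun`, assuming the `incremental` library and the state-free `Incremental.Make` interface; `(module Core.Int)` stands in for any suitable hashtable key:

```ocaml
module I = Incremental.Make ()

let calls = ref 0
let slow_double x = incr calls; x * 2

(* [(module Core.Int)] supplies hashing and equality for the memo table. *)
let m = Core.Staged.unstage (I.memoize_fun (module Core.Int) slow_double)
let a = m 3 (* runs [slow_double] *)
let b = m 3 (* memoized: [slow_double] is not re-run *)
let c = m 4
let n = !calls (* 2 *)
```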

`memoize_fun_by_key` is a generalization that allows one to memoize over values that contain a uniquely identifying key, but also have other data.

```val memoize_fun_by_key : ?initial_size:int -> _ State.t -> 'key Base.Hashtbl.Key.t -> ( 'a -> 'key ) -> ( 'a -> 'b ) -> ( 'a -> 'b ) Core.Staged.t```
```val weak_memoize_fun : ?initial_size:int -> _ State.t -> 'a Base.Hashtbl.Key.t -> ( 'a -> 'b Core.Heap_block.t ) -> ( 'a -> 'b Core.Heap_block.t ) Core.Staged.t```

The weak versions of the memoization functions use a `Weak_hashtbl` for the memo table. This keeps a weak pointer to each result, and so the garbage collector automatically removes unused results. Furthermore, `stabilize` removes the table entries whose result is unused.

```val weak_memoize_fun_by_key : ?initial_size:int -> _ State.t -> 'key Base.Hashtbl.Key.t -> ( 'a -> 'key ) -> ( 'a -> 'b Core.Heap_block.t ) -> ( 'a -> 'b Core.Heap_block.t ) Core.Staged.t```
`val user_info : ( _, _ ) t -> Core.Info.t option`

For debugging purposes, one can store an arbitrary `Info.t` in a node. This will be displayed as part of a node in error messages.

`val set_user_info : ( _, _ ) t -> Core.Info.t option -> unit`
```val append_user_info_graphviz : ( _, _ ) t -> label:string list -> attrs:string Core.String.Map.t -> unit```

`append_user_info_graphviz` also modifies the `user_info` field on the node, but in a way that allows the caller to set the label and the properties on the node when a graphviz graph is produced by calling `save_dot` or `save_dot_to_file`.

`module Node_value : sig ... end`
`val node_value : ( 'a, _ ) t -> 'a Node_value.t`
`module Packed : sig ... end`
`val pack : ( _, _ ) t -> Packed.t`
`val save_dot : _ State.t -> Core.Out_channel.t -> unit`

`save_dot out` outputs to the channel `out` the DAG of all necessary nodes, in dot format.

`val save_dot_to_file : _ State.t -> string -> unit`
`module Let_syntax : sig ... end`

This `Let_syntax` allows you to write expressions like

`module Before_or_after : sig ... end`
`module Step_function = Incremental_step_function`
`module Clock : sig ... end`

Incremental has a timing-wheel-based clock, and lets one build incremental values that change as its time passes. One must explicitly call `advance_clock` to change incremental's clock; there is no implicit call based on the passage of time.

`module Expert : sig ... end`

A low-level, experimental interface to incremental. This is useful when you need more control over the dependency graph, for performance reasons. It comes at the cost that it's much harder to use right. Specifically, here is what you can do with an expert node:

`module type S_gen = sig ... end`
`module type S = sig ... end`
`module Make () : S`

`Make` returns a new incremental implementation. `Make` uses `Config.Default ()`.

`module Config : sig ... end`
`module type Incremental_config = Config.Incremental_config`
`module Make_with_config (C : Incremental_config) () : S`
`module Private : sig ... end`