package neural_nets_lib
Module Ocannl.Shape
Tensor shape types, shape inference, projection inference.
Labels specifications and einsum notation.
Definition and properties of the syntax of labels specifications and einsum notation:
- Whitespace-insensitive except that whitespace separates identifiers.
- Comes in two variants: single-character and multicharacter:
  - if there is a comma ',' anywhere in the initial text, the multicharacter version is used,
  - otherwise the single-character version is used.
- Currently, the only non-whitespace, non-alphanumeric characters that make sense / are allowed in a spec are: '>', '|', '-', ',', '=', ';'.
- identifier: a single alphanumeric character or '_' in single-char mode; a sequence of alphanumeric characters or '_' otherwise (whitespace not allowed).
- separators: a sequence of commas and whitespaces.
- separators_with_comma: commas and whitespaces containing at least one comma.
- axes_spec_single_char: separators? identifier+ separators?
- axes_spec_multichar: separators? (identifier separators_with_comma)* identifier separators?
- ellipsis_spec: '...' <|> '..' identifier '..'
- row_spec: axes_spec <|> ellipsis_spec axes_spec <|> axes_spec ellipsis_spec axes_spec
- labels_spec: row_spec <|> row_spec '|' row_spec <|> row_spec '->' row_spec <|> row_spec '|' row_spec '->' row_spec.
- permute_spec: labels_spec '=>' labels_spec
- einsum_spec: labels_spec ';' labels_spec '=>' labels_spec
If labels_spec contains neither "|" nor "->", each label is of the kind Output. If the spec contains "->" but not "|", labels to the left of "->" are Input and those to the right are Output. Labels to the left of "|" are Batch, and those between "|" and "->" are Input.
The labels ".."ident".." and "..." (where ident does not contain any of the special characters) are only allowed once per kind. They are used to enable (in-the-middle) broadcasting for the axis kind in the einsum-related shape inference (like the ellipsis "..." in numpy.einsum), and are translated to row variables. The ellipsis "..." is context dependent: in the batch row it is the same as "..batch..", in the input row the same as "..input..", and in the output row the same as "..output..". When the same row variable is used in multiple rows, the corresponding broadcasted axes are matched pointwise in the resulting operation.
The label "_" is a place-holder: it is not output to the resulting map but aligns the axes of other labels.
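To make the variant-selection rule concrete, here is a minimal sketch in plain OCaml (independent of OCANNL; `is_multichar` is a hypothetical helper, not a library function) of how the presence of a comma selects the multicharacter variant:

```ocaml
(* Illustrative sketch of the variant-selection rule described above:
   a comma anywhere in the spec selects the multicharacter variant. *)
let is_multichar (spec : string) : bool = String.contains spec ','

let () =
  (* Single-char mode: 'b', 'i', 'o' are three one-character identifiers. *)
  assert (not (is_multichar "b|i->o"));
  (* Multichar mode: commas separate the identifiers "height" and "width". *)
  assert (is_multichar "batch | height, width -> channels")
```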
User-ish API.
type compose_type =
  | Pointwise_bin
      (** NumPy-style broadcast matching batch, input and output axes, e.g. as in [s1 + s2]. *)
  | Compose
      (** Compose the outputs of the second shape with the inputs of the first shape, i.e. the shape of [fun x -> s1 (s2 (x))], or [s1 * s2] where [*] is the inner product (e.g. matrix multiply). *)
  | Einsum of Base.string
      (** The binary "einsum" syntax: RHS1;RHS2=>LHS, where RHSi, LHS are labels specifications. Since OCANNL's extended einsum notation supports both axis variables and row variables, it makes the other compose types redundant. The [axis_labels] use pseudo-labels local to the notation, to line up the axes and row variables. The symmetric difference / disjunctive union of RHS1's and RHS2's pseudo-labels should equal the LHS pseudo-labels. Note: the "right-hand side" is on the left! I.e. the syntax is "rhs=>lhs", "rhs1;rhs2=>lhs". *)
type transpose_type =
  | Transpose
      (** Swaps inputs and outputs of a shape, preserves batch axes. *)
  | Pointwise_un
      (** Preserves the shape. *)
  | Permute of Base.string
      (** The unary "einsum" syntax: RHS1=>LHS. *)
  | Batch_slice of Arrayjit.Indexing.static_symbol
      (** Removes the leftmost batch axis. *)
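For intuition, some hypothetical spec strings built from the grammar above (the operation names and the particular specs are illustrative assumptions; only the syntax rules come from this page):

```ocaml
(* Hypothetical examples of the einsum syntaxes; note the right-hand side
   comes first: "rhs=>lhs" for Permute, "rhs1;rhs2=>lhs" for Einsum. *)
let matmul_spec = "ij;jk=>ik" (* binary einsum, single-char mode *)
let transpose_spec = "ij=>ji" (* unary einsum (Permute) *)

(* Multichar mode (commas present), with the row variable "..." enabling
   broadcasting over the batch row on both arguments and the result: *)
let batched_matmul_spec = "... | i, j; ... | j, k => ... | i, k"

let () =
  assert (String.contains matmul_spec ';');
  assert (not (String.contains transpose_spec ';'));
  assert (String.contains batched_matmul_spec ',')
```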
val make :
?batch_dims:Base.int Base.list ->
?input_dims:Base.int Base.list ->
?output_dims:Base.int Base.list ->
?batch_axes:(Base.string * Base.int) Base.list ->
?input_axes:(Base.string * Base.int) Base.list ->
?output_axes:(Base.string * Base.int) Base.list ->
?deduced:deduce_within_shape ->
debug_name:Base.string ->
id:Base.int ->
Base.unit ->
  t

Creates a shape. id should be the id of the associated tensor (if any). At most one of the pairs batch_dims, batch_axes etc. should be given: if none, the corresponding row will be inferred. batch_axes etc. provide labels for the dimensions of the corresponding axes. Note that these are dimension labels and not axis labels: they need not be unique for a row, are inferred when provided, and must match whenever the axis sizes must match.
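The "at most one of each dims/axes pair" rule can be modeled in plain OCaml (a simplified sketch; `row_spec` and `row` are stand-ins for illustration, not OCANNL types):

```ocaml
(* Simplified model of one row of [make]'s arguments: plain dimensions,
   dimensions with labels, or neither (row left to shape inference). *)
type row_spec =
  | Inferred                    (* neither given: infer the row *)
  | Dims of int list            (* e.g. ?batch_dims *)
  | Axes of (string * int) list (* e.g. ?batch_axes, with dimension labels *)

let row ?dims ?axes () =
  match (dims, axes) with
  | None, None -> Inferred
  | Some ds, None -> Dims ds
  | None, Some axs -> Axes axs
  | Some _, Some _ ->
      invalid_arg "at most one of the dims / axes arguments may be given"

let () =
  assert (row () = Inferred);
  assert (row ~dims:[ 2; 3 ] () = Dims [ 2; 3 ]);
  assert (
    match row ~dims:[ 2 ] ~axes:[ ("b", 2) ] () with
    | exception Invalid_argument _ -> true
    | _ -> false)
```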
val to_string_hum :
  ?style:[< `Axis_number_and_size | `Axis_size | `Only_labels ] ->
  t ->
  Base.string

Internal-ish API.
type logic =
  | Broadcast of compose_type * t * t
      (** Matches the shapes for a binary operation. For [Broadcast (Einsum (ls1, ls2, ls3), s1, s2)], the labels of [s1] and [s2] must match according to the [ls1], [ls2] lineup, and the resulting shape inherits the labels according to the [ls3] lineup. *)
  | Transpose of transpose_type * t
      (** Permutes the axes of a shape. One case of [Transpose] is to swap inputs with outputs of [s1], hence the name. *)
  | Terminal of Arrayjit.Ops.init_op
      (** Extracts any available shape information from the initialization. E.g. for [File_mapped fn], opens the file [fn] to check its length. *)
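A sketch of the NumPy-style dimension matching that underlies pointwise broadcasting (plain OCaml for illustration; this is not the library's inference, which works on rows and labels rather than bare integer lists):

```ocaml
(* Broadcast two dimension lists NumPy-style: align from the right;
   a dimension of 1 broadcasts to the other side's dimension, and a
   shorter list is padded (broadcast-expanded) on the left. *)
let rec broadcast_rev a b =
  match (a, b) with
  | [], r | r, [] -> r
  | x :: a', y :: b' when x = y || y = 1 -> x :: broadcast_rev a' b'
  | 1 :: a', y :: b' -> y :: broadcast_rev a' b'
  | _ -> invalid_arg "shapes do not broadcast"

let broadcast a b = List.rev (broadcast_rev (List.rev a) (List.rev b))

let () =
  assert (broadcast [ 2; 3 ] [ 3 ] = [ 2; 3 ]);
  assert (broadcast [ 4; 1; 5 ] [ 3; 1 ] = [ 4; 3; 5 ])
```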
How to propagate shape updates and do the last update of Tensor.t.shape when finalizing the tensor. Axes are broadcast-expanded on a bottom-up update to fit the incoming shape.
val hash_fold_update_id :
Ppx_hash_lib.Std.Hash.state ->
update_id ->
  Ppx_hash_lib.Std.Hash.state

Data required for a shape inference update step. Ideally, an update should be performed at least twice, the second time after all the other relevant updates have been performed for the first time. In OCANNL, this is achieved by performing updates both as the tensors are constructed, and via lazy callbacks as the corresponding Arrayjit.Indexing dimensions and projections are first accessed.
Computes the indexing into subtensors given the shape information of a tensor. derive_projections should only be invoked when the shapes are fully inferred already!
val backprop_ith_arg :
from_1:Base.int ->
Arrayjit.Indexing.projections ->
  Arrayjit.Indexing.projections

val of_spec :
?deduced:deduce_within_shape ->
debug_name:Base.string ->
id:Base.int ->
Base.string ->
t