package neural_nets_lib
A from-scratch deep learning framework with an optimizing compiler, shape inference, and concise syntax
Module Ocannl.Operation
Computational primitives for neural networks, integrating Tensor with Assignments.
val add :
?label:Base.string list ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
val sub :
?label:Base.string list ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
val mul :
Shape.compose_type ->
op_asn:
(v:Tensor.tn ->
t1:Tensor.t ->
t2:Tensor.t ->
projections:Tensor.projections Base.Lazy.t ->
Tensor.asgns) ->
label:Base.string Base.list ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
val pointmul :
?label:Base.string list ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
val matmul :
?label:Base.string list ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
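To illustrate the distinction between pointmul and matmul, here is a NumPy analogue (illustrative only; OCANNL's actual API operates on Tensor.t values, not ndarrays, and infers shapes rather than taking them from array literals):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

pointwise = a * b   # pointmul: elementwise product, shapes must broadcast
composed = a @ b    # matmul: contracts a's output axis with b's input axis

print(pointwise)  # [[ 5. 12.] [21. 32.]]
print(composed)   # [[19. 22.] [43. 50.]]
```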
val einsum :
?label:Base.string list ->
Base.string ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
The binary variant, similar to the explicit mode of numpy.einsum. Can compute various forms of matrix multiplication, inner and outer products, etc.
Note that "a,b->c" from numpy is "a;b=>c" in OCANNL, since "->" is used to separate the input and the output axes.
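The numpy.einsum explicit-mode specs below show the kinds of contractions meant; per the translation rule above, the OCANNL spellings would be "ij;jk=>ik", "ij;ij=>", and "i;j=>ij":

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)

matmul = np.einsum("ij,jk->ik", a, b)     # matrix multiplication
inner = np.einsum("ij,ij->", a, a)        # inner (Frobenius) product
outer = np.einsum("i,j->ij", a[0], b[0])  # outer product

assert np.allclose(matmul, a @ b)
assert inner == np.sum(a * a)
assert outer.shape == (3, 4)
```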
val outer_sum :
?label:Base.string list ->
Base.string ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
Like einsum, but sums the resulting values instead of multiplying them.
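A NumPy sketch of that semantics: for a hypothetical spec "i;j=>ij", each result entry combines the operands with + where einsum would use *, which broadcasting expresses directly:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])

# result[i, j] = a[i] + b[j]: broadcasting replaces einsum's product
outer_sum = a[:, None] + b[None, :]

print(outer_sum)
# [[11. 21.]
#  [12. 22.]
#  [13. 23.]]
```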
val einsum1 :
?label:Base.string list ->
Base.string ->
?grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t
The unary variant, similar to the explicit mode of numpy.einsum. Can permute axes, extract diagonals, compute traces, etc.
Note that "a->c" from numpy is "a=>c" in OCANNL, since "->" is used to separate the input and the output axes.
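Unary explicit-mode examples in NumPy; by the translation rule above, the OCANNL specs would presumably read "ij=>ji", "ii=>i", and "ii=>":

```python
import numpy as np

m = np.arange(9.0).reshape(3, 3)

transposed = np.einsum("ij->ji", m)  # permute axes
diagonal = np.einsum("ii->i", m)     # extract the diagonal
trace = np.einsum("ii->", m)         # trace (sum of the diagonal)

assert np.allclose(transposed, m.T)
assert diagonal.tolist() == [0.0, 4.0, 8.0]
assert trace == 12.0
```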
val pointpow :
?label:Base.string Base.list ->
grad_spec:Tensor.grad_spec ->
Base.float ->
Tensor.t ->
Tensor.t
val pointdiv :
?label:Base.string Base.list ->
grad_spec:Tensor.grad_spec ->
Tensor.t ->
Tensor.t ->
Tensor.t
val range :
?label:Base.string list ->
?grad_spec:Tensor.grad_spec ->
?axis_label:Base.string ->
Base.Int.t ->
Tensor.t
val range_of_shape :
?label:Base.string list ->
?grad_spec:Tensor.grad_spec ->
?batch_dims:Base.Int.t Base.List.t ->
?input_dims:Base.Int.t Base.List.t ->
?output_dims:Base.Int.t Base.List.t ->
?batch_axes:(Base.string * Base.Int.t) Base.List.t ->
?input_axes:(Base.string * Base.Int.t) Base.List.t ->
?output_axes:(Base.string * Base.Int.t) Base.List.t ->
unit ->
Tensor.t
A stop_gradient is an identity in the forward pass and a no-op in the backprop pass.
val slice :
?label:Base.string list ->
grad_spec:Tensor.grad_spec ->
Idx.static_symbol ->
Tensor.t ->
Tensor.t
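A hypothetical NumPy analogue of slice: assuming it fixes the leading (batch) axis at a statically-known index, it corresponds to integer indexing on axis 0. This is an assumption about slice's semantics, not something the signature alone confirms:

```python
import numpy as np

batch = np.arange(24.0).reshape(4, 2, 3)  # a batch of four 2x3 matrices

# Fix the batch axis at index 2, yielding one 2x3 matrix
third = batch[2]

assert third.shape == (2, 3)
assert third[0, 0] == 12.0
```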