OCANNL is sponsored by Ahrefs!
The long-term goal is to provide several "low-level" backends, drawing inspiration from projects such as TinyGrad, TVM, and Luminal.
for loops. Library users can compile any amount of code into a monolithic routine. Depending on the use case:
Tensor axes are split into kinds: batch, input and output. Tensor dimensions have optional labels.
OCANNL has full support for the einsum notation, integrated with shape inference. It supports static indexing, with a built-in operation to take a slice of the batch axes, also integrated with shape inference; this is extensible to more static indexing patterns as needs arise.
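To illustrate what an einsum-style contraction means (independently of OCANNL's actual syntax, which this sketch does not use), here is the spec `bij,bjk->bik`, i.e. batched matrix multiplication, spelled out with plain Python loops:

```python
# Naive batched matrix multiplication, i.e. the einsum spec "bij,bjk->bik".
# 'b' is a batch axis, 'j' is contracted (summed over), 'i' and 'k' survive.

def batched_matmul(a, b):
    """a: list of i*j matrices; b: list of j*k matrices (same batch size)."""
    assert len(a) == len(b)
    out = []
    for am, bm in zip(a, b):  # iterate over the batch axis 'b'
        i, j, k = len(am), len(bm), len(bm[0])
        out.append([[sum(am[r][t] * bm[t][c] for t in range(j))
                     for c in range(k)] for r in range(i)])
    return out

a = [[[1, 2], [3, 4]]]       # batch of one 2x2 matrix
b = [[[1, 0], [0, 1]]]       # batch of one 2x2 identity matrix
print(batched_matmul(a, b))  # -> [[[1, 2], [3, 4]]]
```

In OCANNL the batch, input, and output axis kinds play the roles of `b`, `j`, and `i`/`k` here, and shape inference fills in the dimensions.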
OCANNL offers two main levels of abstraction.
The support for mixed-precision computations is upcoming.
It's a feature, not a bug!
Pointwise multiplication by a constant number, e.g. `2*.m` or `m*.2`, scales the tensor elementwise.
- Multiplying a tensor `m` by a constant number, e.g. `m*2`, broadcasts the number to the shape of the input axes of the tensor. This results in an output-axes-only tensor (multi-axis-vector) that is the scaled sum over the input axes of the tensor `m`.
- Multiplying a constant number by a tensor `m`, e.g. `2*m`, broadcasts the number to the shape of the output axes of the tensor. This results in a tensor whose inputs are of the same shape as the inputs of `m`, and the output shape is 1D (scalar), that is the scaled sum over the output axes of the tensor `m`.
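As a plain-Python sketch of this convention (not OCANNL code; the helper names are made up for illustration), take a 2-D tensor `m` where rows index the output axis and columns index the input axis:

```python
# Mimicking the described convention for a 2-D tensor m:
# rows index the output axis, columns index the input axis.

def mul_tensor_by_const(m, c):
    """m*c: broadcast c over the input axes, then contract them.
    Yields an output-axes-only vector of scaled row sums."""
    return [c * sum(row) for row in m]

def mul_const_by_tensor(c, m):
    """c*m: broadcast c over the output axes, then contract them.
    Yields scaled column sums, indexed only by the input axis."""
    return [c * sum(m[r][j] for r in range(len(m)))
            for j in range(len(m[0]))]

m = [[1.0, 2.0],
     [3.0, 4.0]]
print(mul_tensor_by_const(m, 2))  # -> [6.0, 14.0]  (2 * row sums)
print(mul_const_by_tensor(2, m))  # -> [8.0, 12.0]  (2 * column sums)
```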
On the critical path for the next major release v0.4:
For more details, see CHANGES.
v0.3 shape inference, jitted routines: a major rewrite of the whole project.
v0.2 inching toward GPU:
v0.1 GCCJIT backend:
Gccjit backend, single and double precision floats, code compiled as a monolithic update step function.

OCANNL follows different design choices than OWL. For example:
Some aspects are more centralized in OCANNL than in OWL and form the "infrastructure":
- Tensor implements "putting pieces together".
- Train has the optimization "frontend" and utilities.
- arrayjit, which may one day become a standalone library: it generates the code, performs backend-agnostic optimizations (virtual nodes whose computation is inlined), and implements the backends.

Some aspects that are more core to OWL are less encapsulated in OCANNL, so it should be more natural to extend them.
Although the project is called ocannl, the main package is called neural_nets_lib, to avoid the (opam linter's) complaint that the name can be confused with other packages. This also clarifies that ocannl is composed of arrayjit and neural_nets_lib.
The dependency on ocaml-cudajit is optional, so to enable the CUDA backend you have to install it first.
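Assuming the opam package names match the ones mentioned above (neural_nets_lib for the library; the opam name for the ocaml-cudajit bindings is assumed to be cudajit, so verify against the opam repository), installation might look like:

```shell
# Optional: install the CUDA bindings first so that the CUDA backend
# gets enabled (package name assumed; check `opam search cudajit`).
opam install cudajit

# Install the main OCANNL package.
opam install neural_nets_lib
```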
After you get some basic grasp of the aims and design of the project by reading files in test/ and bin/, you can improve your understanding by reading lib/shape.mli, lib/tensor.mli, lib/operation.ml and lib/train.ml.