Module Torch.Layer
Layer Types and Conversions
A layer takes a tensor as input and returns a tensor through the forward function. Layers can hold variables; these are created and registered using a Var_store.t when the layer is created.
A layer of type t_with_training is similar to a layer of type t, except that applying it to a tensor also takes a boolean argument specifying whether the layer is currently used in training or in testing mode. This is typically the case for batch normalization or dropout.
with_training t builds a layer taking an is_training argument from a standard layer; the is_training argument is discarded. This is useful when sequencing multiple layers via fold.
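As an illustration, here is a minimal sketch of this conversion, assuming the wider ocaml-torch API (Var_store, Tensor.randn, Tensor.print) in addition to the functions documented on this page; of_fn is described below.

open Torch

let () =
  (* A plain layer built from a tensor function (see of_fn below). *)
  let square = Layer.of_fn (fun t -> Tensor.(t * t)) in
  (* Lift it so it can be mixed with training-aware layers. *)
  let square_ = Layer.with_training square in
  let x = Tensor.randn [ 2; 3 ] in
  (* The is_training flag is accepted but has no effect here. *)
  let y = Layer.forward_ square_ x ~is_training:true in
  Tensor.print y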
Basic Layer Creation
id_ is the identity layer with an is_training argument.
of_fn f creates a layer based on a function from tensors to tensors.
of_fn_ f creates a layer based on a function from tensors to tensors. f also has access to the is_training flag.
sequential_ ts applies a list of layers ts sequentially.
forward_ t tensor ~is_training applies layer t to tensor with the specified is_training flag.
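For example, here is a sketch of sequencing a plain layer with a training-aware one; Tensor.relu, Tensor.f, and Tensor.randn are assumed from the wider ocaml-torch API, and the scaling layer is a toy example.

open Torch

let () =
  (* A training-aware toy layer: halve activations at test time only. *)
  let scale_at_test =
    Layer.of_fn_ (fun t ~is_training ->
      if is_training then t else Tensor.(t * f 0.5))
  in
  (* A plain layer, lifted with with_training so both can be sequenced. *)
  let relu = Layer.with_training (Layer.of_fn Tensor.relu) in
  let model = Layer.sequential_ [ relu; scale_at_test ] in
  let x = Tensor.randn [ 4; 8 ] in
  let y = Layer.forward_ model x ~is_training:false in
  Tensor.print y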
Linear and Convolution Layers
The different kinds of activation supported by the various layers below.
val linear :
Var_store.t ->
?activation:activation ->
?use_bias:Base.bool ->
?w_init:Var_store.Init.t ->
input_dim:Base.int ->
Base.int ->
t

linear vs ~input_dim output_dim returns a linear layer. When using forward, the input tensor must have shape batch_size * input_dim; the returned tensor has shape batch_size * output_dim.
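For instance, a two-layer MLP sketch; Var_store.create and Tensor.randn are assumed from the wider ocaml-torch API, and all dimensions are illustrative.

open Torch

let () =
  let vs = Var_store.create ~name:"mlp" () in
  let fc1 = Layer.linear vs ~activation:Relu ~input_dim:784 128 in
  let fc2 = Layer.linear vs ~input_dim:128 10 in
  let x = Tensor.randn [ 16; 784 ] in
  (* batch_size * input_dim in, batch_size * output_dim out: 16 * 10. *)
  let logits = Layer.forward fc2 (Layer.forward fc1 x) in
  Tensor.print logits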
val conv2d :
Var_store.t ->
ksize:(Base.int * Base.int) ->
stride:(Base.int * Base.int) ->
?activation:activation ->
?use_bias:Base.bool ->
?w_init:Var_store.Init.t ->
?padding:(Base.int * Base.int) ->
?groups:Base.int ->
input_dim:Base.int ->
Base.int ->
t

conv2d vs ~ksize ~stride ~input_dim output_dim returns a 2D convolution layer. ksize specifies the kernel size and stride the stride. When using forward, the input tensor should have shape batch_size * input_dim * h * w and the returned tensor will have shape batch_size * output_dim * h' * w'.
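A sketch with explicit (height, width) pairs, again assuming Var_store.create and Tensor.randn from the wider API; with a 5x5 kernel, stride 2, and no padding on a 32x32 input, h' and w' come out to 14.

open Torch

let () =
  let vs = Var_store.create ~name:"cnn" () in
  let conv = Layer.conv2d vs ~ksize:(5, 5) ~stride:(2, 2) ~input_dim:3 16 in
  let x = Tensor.randn [ 8; 3; 32; 32 ] in
  (* No padding: h' = (32 - 5) / 2 + 1 = 14, so shape is 8 * 16 * 14 * 14. *)
  let y = Layer.forward conv x in
  Tensor.print y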
val conv2d_ :
Var_store.t ->
ksize:Base.int ->
stride:Base.int ->
?activation:activation ->
?use_bias:Base.bool ->
?w_init:Var_store.Init.t ->
?padding:Base.int ->
?groups:Base.int ->
input_dim:Base.int ->
Base.int ->
t

conv2d_ is similar to conv2d but uses the same kernel size, stride, and padding on both the height and width dimensions, so a single integer is specified for each of these parameters.
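For instance, the following sketch is equivalent to conv2d with ksize:(3, 3), stride:(1, 1), and padding:(1, 1); this padding keeps the spatial size unchanged.

open Torch

let () =
  let vs = Var_store.create ~name:"cnn" () in
  let conv = Layer.conv2d_ vs ~ksize:3 ~stride:1 ~padding:1 ~input_dim:16 32 in
  let x = Tensor.randn [ 8; 16; 28; 28 ] in
  (* Spatial size is preserved: output shape is 8 * 32 * 28 * 28. *)
  let y = Layer.forward conv x in
  Tensor.print y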
val conv_transpose2d :
Var_store.t ->
ksize:(Base.int * Base.int) ->
stride:(Base.int * Base.int) ->
?activation:activation ->
?use_bias:Base.bool ->
?w_init:Var_store.Init.t ->
?padding:(Base.int * Base.int) ->
?output_padding:(Base.int * Base.int) ->
?groups:Base.int ->
input_dim:Base.int ->
Base.int ->
t

conv_transpose2d creates a 2D transposed convolution layer; this is sometimes also called 'deconvolution'.
val conv_transpose2d_ :
Var_store.t ->
ksize:Base.int ->
stride:Base.int ->
?activation:activation ->
?use_bias:Base.bool ->
?w_init:Var_store.Init.t ->
?padding:Base.int ->
?output_padding:Base.int ->
?groups:Base.int ->
input_dim:Base.int ->
Base.int ->
t

conv_transpose2d_ is similar to conv_transpose2d but uses a single value for the height and width dimensions for the kernel size, stride, padding, and output padding.
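A common upsampling sketch: ksize 4, stride 2, and padding 1 double the spatial dimensions; Var_store.create and Tensor.randn are assumed from the wider API.

open Torch

let () =
  let vs = Var_store.create ~name:"upsample" () in
  let up =
    Layer.conv_transpose2d_ vs ~ksize:4 ~stride:2 ~padding:1 ~input_dim:16 8
  in
  let x = Tensor.randn [ 2; 16; 16; 16 ] in
  (* Output size: (16 - 1) * 2 - 2 * 1 + 4 = 32, so shape is 2 * 8 * 32 * 32. *)
  let y = Layer.forward up x in
  Tensor.print y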
Normalization
val batch_norm2d :
Var_store.t ->
?w_init:Var_store.Init.t ->
?cudnn_enabled:Base.bool ->
?eps:Base.float ->
?momentum:Base.float ->
Base.int ->
t_with_training

batch_norm2d vs dim creates a 2D batch norm layer. This layer applies Batch Normalization over a 4D input batch_size * dim * h * w. The returned tensor has the same shape.
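A sketch combining a convolution with batch normalization; since batch_norm2d returns a t_with_training, it is applied with forward_ and the is_training flag. Var_store.create and Tensor.randn are assumed from the wider API.

open Torch

let () =
  let vs = Var_store.create ~name:"cnn" () in
  let conv = Layer.conv2d_ vs ~ksize:3 ~stride:1 ~padding:1 ~input_dim:3 16 in
  let bn = Layer.batch_norm2d vs 16 in
  let x = Tensor.randn [ 8; 3; 28; 28 ] in
  (* In training mode batch statistics are used and the running statistics
     updated; with ~is_training:false the stored running statistics apply. *)
  let y = Layer.forward_ bn (Layer.forward conv x) ~is_training:true in
  Tensor.print y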