lasagne.layers
get_output
    Computes the output of the network at one or more given layers.
get_output_shape
    Computes the output shape of the network at one or more given layers.
get_all_layers
    Gathers all layers below one or more given Layer instances, including the given layer(s).
get_all_params
    Returns a list of Theano shared variables or expressions that parameterize the layer.
count_params
    Counts all parameters (i.e., the number of scalar values) of all layers below one or more given Layer instances, including the given layer(s).
get_all_param_values
    Returns the values of the parameters of all layers below one or more given Layer instances, including the given layer(s).
set_all_param_values
    Given a list of numpy arrays, sets the parameters of all layers below one or more given Layer instances (including the given layer(s)) to the given values.
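The traversal behind these helpers can be illustrated with a small stand-in graph. This is a plain-Python sketch, not Lasagne's implementation; `ToyLayer` and its fields are hypothetical stand-ins for real layers:

```python
# Toy stand-in for a Lasagne layer graph: each layer knows its input
# layers and its own parameters. Illustrates the depth-first gathering
# that get_all_layers / get_all_params / count_params perform.
class ToyLayer:
    def __init__(self, incomings=(), params=()):
        self.input_layers = list(incomings)
        self.params = list(params)   # each param: a list of scalar values

def get_all_layers(layer):
    """Collect all layers below `layer` (inputs first), including it."""
    seen, order = set(), []
    def visit(l):
        if id(l) in seen:
            return
        seen.add(id(l))
        for inp in l.input_layers:
            visit(inp)
        order.append(l)
    visit(layer)
    return order

def get_all_params(layer):
    return [p for l in get_all_layers(layer) for p in l.params]

def count_params(layer):
    # total number of scalar values across all parameters
    return sum(len(p) for p in get_all_params(layer))

inp = ToyLayer()
hidden = ToyLayer([inp], params=[[0.1, 0.2], [0.0]])   # W, b
out = ToyLayer([hidden], params=[[0.3], [0.0]])        # W, b
```

The real helpers work the same way on `Layer` instances, returning Theano shared variables instead of plain lists.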
Layer
    The Layer class represents a single layer of a neural network.
MergeLayer
    This class represents a layer that aggregates input from multiple layers.
InputLayer
    This layer holds a symbolic variable that represents a network input.
DenseLayer
    A fully connected layer.
NINLayer
    Network-in-network layer.
Conv1DLayer
    1D convolutional layer.
Conv2DLayer
    2D convolutional layer.
Conv3DLayer
    3D convolutional layer.
TransposedConv2DLayer
    2D transposed convolution layer.
Deconv2DLayer
    Alias of lasagne.layers.conv.TransposedConv2DLayer.
DilatedConv2DLayer
    2D dilated convolution layer.
LocallyConnected2DLayer
    2D locally connected layer.
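The output spatial size of the convolutional layers above follows standard convolution arithmetic. A plain-Python sketch of that arithmetic (the `pad` modes mirror the usual 'valid'/'same'/'full' conventions; this is illustrative, not the library's own code):

```python
def conv_output_length(input_length, filter_size, stride=1, pad='valid'):
    """Spatial output length of a convolution along one axis."""
    if pad == 'valid':
        length = input_length - filter_size + 1
    elif pad == 'full':
        length = input_length + filter_size - 1
    elif pad == 'same':
        length = input_length
    else:  # explicit integer padding applied on both sides
        length = input_length + 2 * pad - filter_size + 1
    # ceiling division accounts for strides that do not divide evenly
    return (length + stride - 1) // stride
```

For example, a 3x3 'valid' convolution over a 32x32 input yields 30x30; with `pad='same'` the size is preserved.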
MaxPool1DLayer
    1D max-pooling layer.
MaxPool2DLayer
    2D max-pooling layer.
MaxPool3DLayer
    3D max-pooling layer.
Pool1DLayer
    1D pooling layer.
Pool2DLayer
    2D pooling layer.
Pool3DLayer
    3D pooling layer.
Upscale1DLayer
    1D upscaling layer.
Upscale2DLayer
    2D upscaling layer.
Upscale3DLayer
    3D upscaling layer.
GlobalPoolLayer
    Global pooling layer.
FeaturePoolLayer
    Feature pooling layer.
FeatureWTALayer
    ‘Winner Take All’ layer.
SpatialPyramidPoolingLayer
    Spatial pyramid pooling layer.
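The pooling layers compute their output length per axis from the window size, stride, padding, and whether incomplete windows at the border are kept. A plain-Python sketch of that arithmetic (illustrative only; the `ignore_border` semantics mirror the common Theano-style convention):

```python
def pool_output_length(input_length, pool_size, stride=None,
                       pad=0, ignore_border=True):
    """Output length of pooling along one axis."""
    stride = pool_size if stride is None else stride
    if ignore_border:
        # only complete (padded) windows produce an output
        return (input_length + 2 * pad - pool_size) // stride + 1
    # a partial window at the end also produces an output
    if input_length <= pool_size:
        return 1
    return (input_length - pool_size + stride - 1) // stride + 1
```

For example, 2x2 max-pooling with the default stride halves a 32x32 feature map to 16x16.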
CustomRecurrentLayer
    A layer which implements a recurrent connection.
RecurrentLayer
    Dense recurrent neural network (RNN) layer.
LSTMLayer
    A long short-term memory (LSTM) layer.
GRULayer
    Gated Recurrent Unit (GRU) layer.
Gate
    Simple class to hold the parameters for a gate connection.
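The LSTM update these layers implement can be written out for a single scalar unit. A stdlib-only sketch of one step of the standard LSTM equations (not Lasagne's vectorized implementation; the `W` dictionary of per-gate weights is a stand-in for the layer's Gate parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell.

    W maps each gate name to (w_x, w_h, b). Standard equations:
      i = sigmoid(.), f = sigmoid(.), o = sigmoid(.), g = tanh(.)
      c = f * c_prev + i * g
      h = o * tanh(c)
    """
    def lin(gate):
        w_x, w_h, b = W[gate]
        return w_x * x + w_h * h_prev + b
    i = sigmoid(lin('ingate'))       # input gate
    f = sigmoid(lin('forgetgate'))   # forget gate
    o = sigmoid(lin('outgate'))      # output gate
    g = math.tanh(lin('cell'))       # candidate cell input
    c = f * c_prev + i * g           # new cell state
    h = o * math.tanh(c)             # new hidden state
    return h, c
```

With all weights and biases zero, every gate sits at 0.5, so the cell state simply halves each step.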
DropoutLayer
    Dropout layer.
dropout
    Alias of lasagne.layers.noise.DropoutLayer.
dropout_channels
    Convenience function to drop full channels of feature maps.
spatial_dropout
    Convenience function to drop full channels of feature maps.
dropout_locations
    Convenience function to drop full locations of feature maps.
GaussianNoiseLayer
    Gaussian noise layer.
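Dropout's two modes (training vs. deterministic test-time pass) can be sketched in plain Python. This illustrates inverted-dropout scaling, where survivors are rescaled at training time so no rescaling is needed at test time; it is illustrative, not the layer's Theano implementation:

```python
import random

def dropout(values, p=0.5, deterministic=False, rng=None):
    """Inverted dropout over a list of values.

    Training: zero each value with probability p and scale survivors
    by 1/(1-p). Test time (deterministic=True): identity.
    """
    if deterministic or p == 0:
        return list(values)
    rng = rng or random.Random(0)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

With p=0.5, every surviving value is doubled, so the expected value of each unit is unchanged.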
ReshapeLayer
    A layer reshaping its input tensor to another tensor with the same total number of elements.
reshape
    Alias of lasagne.layers.shape.ReshapeLayer.
FlattenLayer
    A layer that flattens its input.
flatten
    Alias of lasagne.layers.shape.FlattenLayer.
DimshuffleLayer
    A layer that rearranges the dimensions of its input tensor, maintaining the same total number of elements.
dimshuffle
    Alias of lasagne.layers.shape.DimshuffleLayer.
PadLayer
    Pads all dimensions except the first batch_ndim with width zeros on both sides, or with another value specified in val.
pad
    Alias of lasagne.layers.shape.PadLayer.
SliceLayer
    Slices the input at a specific axis and at specific indices.
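ReshapeLayer accepts an output-shape specification whose entries may be positive integers, `[i]` (copy input dimension i), or a single `-1` wildcard inferred so the element count is preserved. The resolution logic can be sketched in plain Python (illustrative, not the library's own code):

```python
def resolve_reshape(input_shape, spec):
    """Resolve a ReshapeLayer-style spec against an input shape.

    Entries: positive int, [i] (reference input dimension i),
    or -1 (infer so the total element count is preserved).
    """
    out = []
    for s in spec:
        if isinstance(s, list):          # [i]: copy input dimension i
            out.append(input_shape[s[0]])
        else:
            out.append(s)
    total = 1
    for d in input_shape:
        total *= d
    known = 1
    for d in out:
        if d != -1:
            known *= d
    if -1 in out:
        out[out.index(-1)] = total // known   # infer the wildcard
    return tuple(out)
```

For example, flattening all but the batch axis of a (batch, 3, 28, 28) input is the spec `([0], -1)`.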
ConcatLayer
    Concatenates multiple inputs along the specified axis.
concat
    Alias of lasagne.layers.merge.ConcatLayer.
ElemwiseMergeLayer
    This layer performs an elementwise merge of its input layers.
ElemwiseSumLayer
    This layer performs an elementwise sum of its input layers.
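The output shape of a concatenation is the input shapes with sizes summed along the concatenation axis; all other axes must agree. A plain-Python sketch of that shape rule (illustrative only):

```python
def concat_output_shape(shapes, axis=1):
    """Shape of concatenating `shapes` along `axis`.

    Sizes are summed on `axis`; every other axis must match.
    """
    first = list(shapes[0])
    for shape in shapes[1:]:
        for ax, (a, b) in enumerate(zip(first, shape)):
            if ax == axis:
                first[ax] += b
            elif a != b:
                raise ValueError("shape mismatch on axis %d" % ax)
    return tuple(first)
```

Concatenating two dense outputs of 10 and 20 units along axis 1 therefore yields 30 units per example.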
LocalResponseNormalization2DLayer
    Cross-channel local response normalization for 2D feature maps.
BatchNormLayer
    Batch normalization.
batch_norm
    Apply batch normalization to an existing layer.
StandardizationLayer
    Standardizes inputs to zero mean and unit variance.
instance_norm
    Apply instance normalization to an existing layer.
layer_norm
    Apply layer normalization to an existing layer.
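At the core of these normalization layers is the same standardization step, (x - mean) / sqrt(var + eps); batch, instance, and layer normalization differ mainly in which axes the statistics are computed over. In plain Python, over one flat list of values (illustrative only; `eps` guards against division by zero):

```python
import math

def standardize(values, eps=1e-4):
    """Shift to zero mean and scale to (approximately) unit variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / math.sqrt(var + eps) for v in values]
```

The learned scale (gamma) and shift (beta) parameters of BatchNormLayer are then applied on top of this standardized output.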
EmbeddingLayer
    A layer for word embeddings.
NonlinearityLayer
    A layer that just applies a nonlinearity.
BiasLayer
    A layer that just adds a (trainable) bias term.
ScaleLayer
    A layer that scales its inputs by learned coefficients.
standardize
    Convenience function for standardizing inputs by applying a fixed offset and scale.
ExpressionLayer
    This layer provides boilerplate for a custom layer that applies a simple transformation to the input.
InverseLayer
    Performs the inverse operation for a single layer of a neural network by applying the partial derivative of the layer to be inverted with respect to its input: a transposed layer for a DenseLayer, a deconvolutional layer for a Conv2DLayer or Conv1DLayer, or an unpooling layer for a MaxPool2DLayer.
TransformerLayer
    Spatial transformer layer.
TPSTransformerLayer
    Thin plate spline (TPS) spatial transformer layer.
ParametricRectifierLayer
    A layer that applies a parametric rectifier nonlinearity to its input, following [R33].
prelu
    Convenience function to apply the parametric rectifier to a given layer's output.
RandomizedRectifierLayer
    A layer that applies a randomized leaky rectifier nonlinearity to its input.
rrelu
    Convenience function to apply the randomized rectifier to a given layer's output.
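Both rectifier variants share the form max(0, x) + alpha * min(0, x): PReLU learns alpha per unit or channel, while RReLU samples it from a range at training time and uses the range midpoint at test time. The elementwise function itself, in plain Python (illustrative only):

```python
def parametric_rectify(x, alpha=0.25):
    """Leaky/parametric rectifier: identity for x >= 0, slope alpha below."""
    return max(0.0, x) + alpha * min(0.0, x)
```

With alpha = 0 this reduces to the ordinary rectifier (ReLU).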
corrmm.Conv2DMMLayer
    2D convolutional layer.
cuda_convnet.Conv2DCCLayer
    2D convolutional layer (cuda-convnet implementation).
cuda_convnet.MaxPool2DCCLayer
    2D max-pooling layer (cuda-convnet implementation).
cuda_convnet.ShuffleBC01ToC01BLayer
    Shuffles the axes of its input from bc01 (batch, channels, rows, columns) to c01b (channels, rows, columns, batch) order.
cuda_convnet.bc01_to_c01b
    Alias of lasagne.layers.cuda_convnet.ShuffleBC01ToC01BLayer.
cuda_convnet.ShuffleC01BToBC01Layer
    Shuffles the axes of its input from c01b back to bc01 order.
cuda_convnet.c01b_to_bc01
    Alias of lasagne.layers.cuda_convnet.ShuffleC01BToBC01Layer.
cuda_convnet.NINLayer_c01b
    Network-in-network layer using the c01b axis order.
dnn.Conv2DDNNLayer
    2D convolutional layer.
dnn.Conv3DDNNLayer
    3D convolutional layer.
dnn.MaxPool2DDNNLayer
    2D max-pooling layer.
dnn.Pool2DDNNLayer
    2D pooling layer.
dnn.MaxPool3DDNNLayer
    3D max-pooling layer.
dnn.Pool3DDNNLayer
    3D pooling layer.
dnn.SpatialPyramidPoolingDNNLayer
    Spatial pyramid pooling layer.
dnn.BatchNormDNNLayer
    Batch normalization.
dnn.batch_norm_dnn
    Apply cuDNN batch normalization to an existing layer.