niftynet.layer.deconvolution module

class niftynet.layer.deconvolution.DeconvLayer(n_output_chns, kernel_size=3, stride=1, padding='SAME', with_bias=False, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, name='deconv')

Bases: niftynet.layer.base_layer.TrainableLayer

This class defines a simple deconvolution (transposed convolution) with an optional bias term. Consider DeconvolutionalLayer instead if batch normalisation and an activation function are also needed.

layer_op(input_tensor)
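
A minimal usage sketch, assuming the TensorFlow 1.x API that NiftyNet builds on; the input shape here is illustrative. Layer instances are callable, which dispatches to layer_op:

    import tensorflow as tf
    from niftynet.layer.deconvolution import DeconvLayer

    # 5-D image tensor: (batch, x, y, z, channels)
    input_tensor = tf.placeholder(tf.float32, shape=(1, 16, 16, 16, 8))

    deconv = DeconvLayer(n_output_chns=4, kernel_size=3, stride=2,
                         padding='SAME', with_bias=True)
    # With 'SAME' padding each spatial dim is upsampled by the stride,
    # giving an output of shape (1, 32, 32, 32, 4).
    output_tensor = deconv(input_tensor)
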
class niftynet.layer.deconvolution.DeconvolutionalLayer(n_output_chns, kernel_size=3, stride=1, padding='SAME', with_bias=False, with_bn=True, acti_func=None, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, moving_decay=0.9, eps=1e-05, name='deconv')

Bases: niftynet.layer.base_layer.TrainableLayer

This class defines a composite layer with optional components:
deconvolution -> batch_norm -> activation -> dropout

The b_initializer and b_regularizer are applied to the DeconvLayer. The w_initializer and w_regularizer are applied to the DeconvLayer, the batch normalisation layer, and the activation layer (for 'prelu').

layer_op(input_tensor, is_training=None, keep_prob=None)
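
A minimal sketch of the composite layer, using the layer_op signature above (shapes are illustrative). is_training switches batch normalisation between training and inference statistics, and keep_prob enables the optional dropout step:

    import tensorflow as tf
    from niftynet.layer.deconvolution import DeconvolutionalLayer

    input_tensor = tf.placeholder(tf.float32, shape=(1, 16, 16, 16, 8))

    # deconvolution -> batch_norm -> relu -> dropout
    upsample = DeconvolutionalLayer(n_output_chns=4,
                                    kernel_size=3,
                                    stride=2,
                                    with_bn=True,
                                    acti_func='relu')
    output_tensor = upsample(input_tensor,
                             is_training=True,  # use batch statistics
                             keep_prob=0.9)     # dropout keep probability
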
niftynet.layer.deconvolution.default_b_initializer()
niftynet.layer.deconvolution.default_w_initializer()
niftynet.layer.deconvolution.infer_output_dims(input_dims, strides, kernel_sizes, padding)

Infer output dims from a list; the dim can differ in different directions. Note: dilation is not considered here.
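
As an illustration, a sketch of the per-direction inference rule under the standard transposed-convolution output-shape formulas (the function name here is hypothetical; the library's own implementation may differ in detail):

    def infer_output_dims_sketch(input_dims, strides, kernel_sizes, padding):
        # Per-direction output size for a transposed convolution:
        #   'SAME':  output = input * stride
        #   'VALID': output = input * stride + max(kernel - stride, 0)
        output_dims = []
        for dim, stride, kernel in zip(input_dims, strides, kernel_sizes):
            if dim is None:  # unknown (dynamic) dimension
                output_dims.append(None)
            elif padding == 'VALID':
                output_dims.append(dim * stride + max(kernel - stride, 0))
            else:            # 'SAME'
                output_dims.append(dim * stride)
        return output_dims

    infer_output_dims_sketch([16, 16, 16], [2, 2, 2], [3, 3, 3], 'SAME')   # [32, 32, 32]
    infer_output_dims_sketch([16, 16, 16], [2, 2, 2], [3, 3, 3], 'VALID')  # [33, 33, 33]
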