niftynet.layer.channel_sparse_convolution module¶
class niftynet.layer.channel_sparse_convolution.ChannelSparseBNLayer(n_dense_channels, *args, **kwargs)¶
Bases: niftynet.layer.bn.BNLayer
Channel sparse convolutions perform convolutions over a subset of image channels and generate a subset of output channels. This enables spatial dropout without wasted computation.
layer_op(inputs, is_training, mask, use_local_stats=False)¶
Parameters:
- inputs: image to normalize. This typically represents a sparse subset of channels from a sparse convolution.
- is_training: boolean that is True during training. When True, the layer uses batch statistics for normalization and records a moving average of means and variances. When False, the layer uses previously computed moving averages for normalization.
- mask: 1-D Tensor with a binary mask identifying the sparse channels represented in inputs.
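The key bookkeeping in a channel-sparse batch-norm layer is that moving statistics are tracked for all n_dense_channels, while each batch only updates the channels present in the mask. A minimal NumPy sketch of that update (an illustration of the idea, not NiftyNet's implementation; the function name is hypothetical):

```python
import numpy as np

def sparse_bn_update(moving_mean, batch_mean, mask, decay=0.9):
    """Update moving means only for the channels present in the sparse batch.

    moving_mean : (n_dense_channels,) running statistics for ALL channels.
    batch_mean  : (n_active,) statistics computed from the sparse inputs.
    mask        : (n_dense_channels,) boolean mask of the active channels.

    Inactive channels keep their old moving averages untouched.
    """
    updated = moving_mean.copy()
    updated[mask] = decay * moving_mean[mask] + (1.0 - decay) * batch_mean
    return updated

# 4 dense channels; this batch carries statistics for channels 0 and 2 only.
moving = np.zeros(4)
batch = np.array([1.0, 2.0])
mask = np.array([True, False, True, False])
new_moving = sparse_bn_update(moving, batch, mask)
# channels 1 and 3 stay at their previous values (here 0.0)
```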
class niftynet.layer.channel_sparse_convolution.ChannelSparseConvLayer(*args, **kwargs)¶
Bases: niftynet.layer.convolution.ConvLayer
Channel sparse convolutions perform convolutions over a subset of image channels and generate a subset of output channels. This enables spatial dropout without wasted computation.
layer_op(input_tensor, input_mask, output_mask)¶
Parameters:
- input_tensor: image to convolve with kernel.
- input_mask: 1-D Tensor with a binary mask of input channels to use. If this is None, all channels are used.
- output_mask: 1-D Tensor with a binary mask of output channels to generate. If this is None, all channels are used and the number of output channels is set at graph-creation time.
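The point of the masks is that the convolution only applies the sub-kernel connecting active input channels to active output channels, so dropped channels cost nothing. A NumPy sketch of the 1x1 case (an illustration under simplifying assumptions, not NiftyNet's implementation; the function name is hypothetical):

```python
import numpy as np

def channel_sparse_conv_1x1(x, weights, input_mask, output_mask):
    """Illustrative 1x1 channel-sparse convolution.

    x           : (batch, height, width, n_active_in) dense tensor holding
                  only the active input channels.
    weights     : (n_in_total, n_out_total) full 1x1 kernel.
    input_mask  : (n_in_total,) boolean mask of channels present in x.
    output_mask : (n_out_total,) boolean mask of channels to produce.

    Only the sub-kernel for active input/output channels is applied, so no
    computation is spent on dropped channels.
    """
    sub_w = weights[np.ix_(input_mask, output_mask)]  # (n_active_in, n_active_out)
    return np.einsum('bhwi,io->bhwo', x, sub_w)

# Example: 4 input channels with 2 active; 6 output channels with 3 active.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 5, 5, 2))
w = rng.standard_normal((4, 6))
in_mask = np.array([True, False, True, False])
out_mask = np.array([True, True, False, False, True, False])
y = channel_sparse_conv_1x1(x, w, in_mask, out_mask)
print(y.shape)  # (1, 5, 5, 3)
```

For a 1x1 kernel this is equivalent to zero-filling the dropped input channels, running the dense convolution, and slicing the active output channels, but without the wasted multiplications.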
class niftynet.layer.channel_sparse_convolution.ChannelSparseConvolutionalLayer(n_output_chns, kernel_size=3, stride=1, dilation=1, padding='SAME', with_bias=False, with_bn=True, acti_func=None, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, moving_decay=0.9, eps=1e-05, name='conv')¶
Bases: niftynet.layer.base_layer.TrainableLayer
This class defines a composite layer with optional components:
channel sparse convolution -> batchwise-spatial dropout -> batch_norm -> activation
The b_initializer and b_regularizer are applied to the ChannelSparseConvLayer. The w_initializer and w_regularizer are applied to the ChannelSparseConvLayer, the batch normalisation layer, and the activation layer (for ‘prelu’).
layer_op(input_tensor, input_mask=None, is_training=None, keep_prob=None)¶
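In this composite pipeline, batchwise-spatial dropout amounts to sampling a binary channel mask from keep_prob and passing it on, so the next sparse convolution skips the dropped channels entirely rather than multiplying by zeros. A hypothetical sketch of that mask sampling (NiftyNet builds the mask inside the TensorFlow graph; this is only the idea):

```python
import numpy as np

def sample_channel_mask(n_channels, keep_prob, rng):
    """Batchwise spatial dropout: drop whole channels, shared across the batch.

    Returns a boolean mask with each channel kept with probability keep_prob.
    At least one channel is always kept so downstream layers have input.
    """
    keep = rng.random(n_channels) < keep_prob
    keep[rng.integers(n_channels)] = True  # guarantee at least one channel
    return keep

rng = np.random.default_rng(42)
mask = sample_channel_mask(8, keep_prob=0.5, rng=rng)
```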
class niftynet.layer.channel_sparse_convolution.ChannelSparseDeconvLayer(*args, **kwargs)¶
Bases: niftynet.layer.deconvolution.DeconvLayer
Channel sparse convolutions perform convolutions over a subset of image channels and generate a subset of output channels. This enables spatial dropout without wasted computation.
layer_op(input_tensor, input_mask=None, output_mask=None)¶
Parameters:
- input_tensor: image to convolve with kernel.
- input_mask: 1-D Tensor with a binary mask of input channels to use. If this is None, all channels are used.
- output_mask: 1-D Tensor with a binary mask of output channels to generate. If this is None, all channels are used and the number of output channels is set at graph-creation time.