niftynet.layer.additive_upsample module

class AdditiveUpsampleLayer(new_size, n_splits, name='linear_additive_upsample')[source]

Bases: niftynet.layer.base_layer.Layer

Implementation of the bilinear (or trilinear) additive upsampling layer, described in the paper:

Wojna et al., The devil is in the decoder, https://arxiv.org/abs/1707.05847

In the paper, 2D images are upsampled by a factor of 2 with n_splits = 4.

__init__(new_size, n_splits, name='linear_additive_upsample')[source]
Parameters:
  • new_size – integer or a list of integers set the output 2D/3D spatial shape. If the parameter is an integer d, it’ll be expanded to (d, d) and (d, d, d) for 2D and 3D inputs respectively.
  • n_splits – integer, the output tensor will have C / n_splits channels, where C is the number of channels of the input. (n_splits must evenly divide C.)
  • name – (optional) name of the layer
layer_op(input_tensor)[source]

If the input has the shape batch, X, Y,[ Z,] Channels, the output shape will be batch, new_size_x, new_size_y,[ new_size_z,] Channels / n_splits.

Parameters: input_tensor – 2D/3D image tensor, with shape: batch, X, Y,[ Z,] Channels
Returns: linearly additively upsampled volumes
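The operation above can be sketched in plain NumPy. This is an illustrative stand-in, not the layer's actual implementation: nearest-neighbour repetition replaces the bilinear resize the real layer performs, and the function name is hypothetical.

```python
import numpy as np

def additive_upsample(x, scale=2, n_splits=4):
    """Sketch of linear additive upsampling (Wojna et al.).

    x: (batch, H, W, C) array; C must be divisible by n_splits.
    Nearest-neighbour repetition stands in for the bilinear resize
    used by the actual layer.
    """
    b, h, w, c = x.shape
    assert c % n_splits == 0, "n_splits must evenly divide C"
    # Upsample each spatial axis by the integer factor `scale`.
    up = x.repeat(scale, axis=1).repeat(scale, axis=2)
    # Split the channel axis into n_splits contiguous groups and sum
    # them, leaving C / n_splits output channels.
    up = up.reshape(b, h * scale, w * scale, n_splits, c // n_splits)
    return up.sum(axis=3)

x = np.ones((1, 4, 4, 8))
y = additive_upsample(x, scale=2, n_splits=4)
print(y.shape)  # (1, 8, 8, 2): spatial size doubled, channels 8 -> 2
```

With new_size equal to twice the input size and n_splits = 4, this reproduces the configuration used in the paper for 2D images.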
class ResidualUpsampleLayer(kernel_size=3, stride=2, n_splits=2, w_initializer=None, w_regularizer=None, acti_func='relu', name='residual_additive_upsample')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

Implementation of an upsampling layer with residual-like connections, described in the paper:

Wojna et al., The devil is in the decoder, https://arxiv.org/abs/1707.05847
layer_op(input_tensor, is_training=True)[source]

The output is an elementwise sum of the deconvolution and additive upsampling branches:

--(inputs)--o--deconvolution-------+--(outputs)--
            |                      |
            o--additive upsampling-o
Parameters:
  • input_tensor – 2D/3D image tensor, with shape: batch, X, Y,[ Z,] Channels
  • is_training – boolean, True during the training phase
Returns: an upsampled tensor with n_input_channels / n_splits feature channels.