niftynet.layer.residual_unit module

class ResidualUnit(n_output_chns=1, kernel_size=3, dilation=1, acti_func='relu', w_initializer=None, w_regularizer=None, moving_decay=0.9, eps=1e-05, type_string='bn_acti_conv', name='res-downsample')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

__init__(n_output_chns=1, kernel_size=3, dilation=1, acti_func='relu', w_initializer=None, w_regularizer=None, moving_decay=0.9, eps=1e-05, type_string='bn_acti_conv', name='res-downsample')[source]

Implementation of the residual unit presented in:

[1] He et al., Identity Mappings in Deep Residual Networks, ECCV 2016
[2] He et al., Deep Residual Learning for Image Recognition, CVPR 2016

The possible types of connections are:

'original': residual unit presented in [2]
'conv_bn_acti': ReLU before addition presented in [1]
'acti_conv_bn': ReLU-only pre-activation presented in [1]
'bn_acti_conv': full pre-activation presented in [1]

[1] recommends 'bn_acti_conv'.

Parameters:
  • n_output_chns – number of output feature channels; if this does not match the number of input channels, a 1x1 projection will be created.
  • kernel_size – spatial size of the convolution kernels in conv_0 and conv_1
  • dilation – dilation rate of the convolutions
  • acti_func – name of the activation function (e.g. 'relu')
  • w_initializer – initializer for the convolution weights
  • w_regularizer – regularizer for the convolution weights
  • moving_decay – decay of the moving averages used by the batch-norm layers
  • eps – epsilon added to the batch-norm variance for numerical stability
  • type_string – one of the connection types listed above
  • name – name scope of the layer
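A minimal usage sketch (the placeholder setup, tensor shape and variable names are illustrative assumptions; NiftyNet layers follow the TensorFlow 1.x graph workflow):

    import tensorflow as tf
    from niftynet.layer.residual_unit import ResidualUnit

    # illustrative 3-D image batch: (batch, x, y, z, channels)
    images = tf.placeholder(tf.float32, shape=[2, 32, 32, 32, 8])

    # full pre-activation unit, as recommended in [1];
    # n_output_chns differs from the 8 input channels, so a 1x1 projection
    # is created for the shortcut
    res_unit = ResidualUnit(n_output_chns=16,
                            kernel_size=3,
                            acti_func='relu',
                            type_string='bn_acti_conv',
                            name='res_block')

    # TrainableLayer instances are callable; calling one invokes layer_op
    outputs = res_unit(images, is_training=True)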
layer_op(inputs, is_training=True)[source]

The general connection pattern is:

(inputs)--o-conv_0--conv_1-+-- (outputs)
          |                |
          o----------------o

The conv_0 and conv_1 layers are specified by type_string.
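As an illustration of the full pre-activation variant ('bn_acti_conv'), the following sketch reproduces the connection pattern above using tf.keras layers rather than NiftyNet's own layer classes; the function name and shapes are assumptions, not NiftyNet's API:

    import tensorflow as tf

    def bn_acti_conv_block(inputs, n_output_chns, kernel_size=3, training=True):
        """Sketch of one residual unit: two BN -> ReLU -> Conv stages
        (conv_0 and conv_1 in the diagram) plus a shortcut that is projected
        with a 1x1x1 convolution when the channel counts differ.
        Not NiftyNet's implementation."""
        shortcut = inputs
        x = inputs
        for _ in range(2):  # conv_0 and conv_1
            x = tf.keras.layers.BatchNormalization()(x, training=training)
            x = tf.keras.layers.Activation('relu')(x)
            x = tf.keras.layers.Conv3D(n_output_chns, kernel_size, padding='same')(x)
        if shortcut.shape[-1] != n_output_chns:
            shortcut = tf.keras.layers.Conv3D(n_output_chns, 1, padding='same')(shortcut)
        return x + shortcut  # the '+' node in the diagram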