niftynet.network.holistic_net module

class HolisticNet(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='elu', name='HolisticNet')[source]

Bases: niftynet.network.base_net.BaseNet

### Description

Implementation of HolisticNet, as detailed in Fidon, L. et al. (2017) Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks. MICCAI 2017 (BrainLes).

### Diagram Blocks

[CONV] - 3x3x3 convolutional layer in the form: Activation(Convolution(X))

where X = input tensor or output of the previous layer,

and Activation is a function which includes:

  1. Batch-Norm
  2. Activation Function (ELU, ReLU, PReLU, Sigmoid, Tanh, etc.)
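For illustration only, a [CONV] block along these lines could be sketched in TensorFlow 1.x as follows (a hand-written approximation, not NiftyNet's internal implementation; the helper name `conv_block` is made up here):

```python
import tensorflow as tf

def conv_block(x, n_features, is_training):
    """Sketch of [CONV]: Activation(Convolution(X)), where Activation
    combines batch-norm and a nonlinearity (ELU in this sketch)."""
    x = tf.layers.conv3d(x, filters=n_features, kernel_size=3,
                         padding='same', use_bias=False)
    x = tf.layers.batch_normalization(x, training=is_training)
    return tf.nn.elu(x)
```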
[D-CONV(d)] - 3x3x3 convolutional layer with dilated convolutions, with blocks in pre-activation mode: D-Convolution(Activation(X)); see He et al., "Identity Mappings in Deep Residual Networks", ECCV '16.

  • dilation factor = d, e.g. D-CONV(2): dilated convolution with dilation factor 2
  • repeat factor = r, e.g. (2)[D-CONV(d)]: 2 dilated convolutional layers in a row, [D-CONV] -> [D-CONV]
  • { (2)[D-CONV(d)] }: 2 dilated convolutional layers within a residual block (see the sketch below)
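A rough pre-activation sketch of { (r)[D-CONV(d)] }, again illustrative rather than NiftyNet's actual code (`dilated_residual_block` is a hypothetical helper):

```python
import tensorflow as tf

def dilated_residual_block(x, n_features, d=2, r=2, is_training=True):
    """Sketch of { (r)[D-CONV(d)] }: r dilated convolutions in
    pre-activation mode (He et al., ECCV '16) inside a residual block.
    Assumes x already has n_features channels so the shortcut adds up."""
    shortcut = x
    for _ in range(r):
        # pre-activation: batch-norm + nonlinearity applied before the conv
        x = tf.layers.batch_normalization(x, training=is_training)
        x = tf.nn.elu(x)
        x = tf.layers.conv3d(x, filters=n_features, kernel_size=3,
                             dilation_rate=d, padding='same', use_bias=False)
    return x + shortcut  # residual connection closes the block
```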

[SCORE] - Batch-Norm + 3x3x3 Convolutional layer + Activation function + 1x1x1 Convolutional layer

[MERGE] - Channel-wise merging

### Diagram

MULTIMODAL INPUT --- [CONV]x3 --- [D-CONV(2)]x3 --- MaxPooling --- [CONV]x3 --- [D-CONV(2)]x3
                        |               |                             |               |
                     [SCORE]         [SCORE]                       [SCORE]         [SCORE]
                        |               |                             |               |
                        ---------------------------------------------------------------
                                                  |
                                             [MERGE] --> OUTPUT

### Constraints

- Input image size should be divisible by 8

### Comments

- The network returns only the merged output, so the loss will be applied only to this output (this differs from the referenced paper)

__init__(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='elu', name='HolisticNet')[source]
Parameters:
  • num_classes – int, number of output channels
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • b_initializer – bias initialisation for network
  • b_regularizer – bias regularisation for network
  • acti_func – activation function to use
  • name – layer name
layer_op(input_tensor, is_training=True, layer_id=-1, **unused_kwargs)[source]
Parameters:
  • input_tensor – tensor, input to the network
  • is_training – boolean, True if network is in training mode
  • layer_id – not in use
  • unused_kwargs – additional keyword arguments, not in use
Returns:

fused prediction from multiple scales
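A minimal usage sketch, assuming the usual NiftyNet pattern in which a network instance is called directly and the call dispatches to layer_op (shapes here are arbitrary examples):

```python
import tensorflow as tf
from niftynet.network.holistic_net import HolisticNet

# each spatial dimension must be divisible by 8 (see Constraints above)
images = tf.placeholder(tf.float32, shape=[2, 32, 32, 32, 1])

net = HolisticNet(num_classes=4, acti_func='elu')
# calling the instance invokes layer_op
fused_logits = net(images, is_training=True)
# fused_logits is the merged multi-scale prediction with num_classes channels
```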

class ScoreLayer(num_features=None, w_initializer=None, w_regularizer=None, num_classes=1, acti_func='elu', name='ScoreLayer')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

__init__(num_features=None, w_initializer=None, w_regularizer=None, num_classes=1, acti_func='elu', name='ScoreLayer')[source]
Parameters:
  • num_features – int, number of features
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • num_classes – int, number of prediction channels
  • acti_func – activation function to use
  • name – layer name
layer_op(input_tensor, is_training, layer_id=-1)[source]
Parameters:
  • input_tensor – tensor, input to the layer
  • is_training – boolean, True if network is in training mode
  • layer_id – not in use
Returns:

tensor with the number of channels set to num_classes
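For example, applying a ScoreLayer to an intermediate feature map (a sketch; the shapes and the direct-call pattern are assumptions, as above):

```python
import tensorflow as tf
from niftynet.network.holistic_net import ScoreLayer

features = tf.placeholder(tf.float32, shape=[2, 32, 32, 32, 70])
score = ScoreLayer(num_features=70, num_classes=4)
logits = score(features, is_training=True)  # last dim becomes num_classes
```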

class MergeLayer(func, w_initializer=None, w_regularizer=None, acti_func='elu', name='MergeLayer')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

__init__(func, w_initializer=None, w_regularizer=None, acti_func='elu', name='MergeLayer')[source]
Parameters:
  • func – type of merging layer (SUPPORTED_OPS: AVERAGE, WEIGHTED_AVERAGE, MAXOUT)
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • acti_func – activation function to use
  • name – layer name
layer_op(roots)[source]

Performs channel-wise merging of input tensors.

Parameters:
  • roots – tensors to be merged
Returns:

fused tensor
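A usage sketch, assuming func is passed as one of the SUPPORTED_OPS strings and that the layer is called directly like other NiftyNet layers:

```python
import tensorflow as tf
from niftynet.network.holistic_net import MergeLayer

# four score maps with identical shapes, e.g. from the diagram's branches
scores = [tf.placeholder(tf.float32, shape=[2, 32, 32, 32, 4])
          for _ in range(4)]
merge = MergeLayer(func='AVERAGE')  # or 'WEIGHTED_AVERAGE' / 'MAXOUT'
fused = merge(scores)               # channel-wise fusion of the branches
```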