niftynet.network.scalenet module

class ScaleNet(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='prelu', name='ScaleNet')[source]

Bases: niftynet.network.base_net.BaseNet

Implementation of ScaleNet:
Fidon et al., "Scalable multimodal convolutional networks for brain tumour segmentation", MICCAI '17

### Diagram

INPUT --> [BACKEND] ----> [MERGING] ----> [FRONTEND] ---> OUTPUT

[BACKEND] and [MERGING] are provided by the ScaleBlock below. [FRONTEND] can be any NiftyNet network (default: HighRes3DNet).

### Constraints

- Input image size should be divisible by 8
- More than one modality should be used
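A minimal usage sketch, assuming TensorFlow 1.x and NiftyNet's callable-layer convention (calling a layer instance invokes its layer_op); the 64³ spatial size and 4-modality input are illustrative values chosen to satisfy the constraints above:

```python
import tensorflow as tf
from niftynet.network.scalenet import ScaleNet

# Example input: batch of 1, 64x64x64 volume (64 is divisible by 8),
# 4 channels standing in for 4 imaging modalities (e.g. T1, T1c, T2, FLAIR).
images = tf.ones(shape=(1, 64, 64, 64, 4), dtype=tf.float32)

net = ScaleNet(num_classes=2, acti_func='prelu')

# Calling the instance runs layer_op inside the layer's variable scope.
logits = net(images, is_training=True)
```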

__init__(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='prelu', name='ScaleNet')[source]
Parameters:
  • num_classes – int, number of output channels
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • b_initializer – bias initialisation for network
  • b_regularizer – bias regularisation for network
  • acti_func – activation function to use
  • name – layer name
layer_op(images, is_training=True, layer_id=-1, **unused_kwargs)[source]
Parameters:
  • images – tensor, concatenation of multiple input modalities
  • is_training – boolean, True if network is in training mode
  • layer_id – not in use
  • unused_kwargs – additional keyword arguments, not in use
Returns: predicted tensor
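The images tensor stacks the modalities along the trailing channel axis; a minimal sketch (modality names and shapes are illustrative):

```python
import tensorflow as tf

# Two single-channel modality volumes (names are illustrative).
t1 = tf.ones(shape=(1, 64, 64, 64, 1))
flair = tf.ones(shape=(1, 64, 64, 64, 1))

# Concatenate along the channel axis to form the multimodal input.
images = tf.concat([t1, flair], axis=-1)  # shape: (1, 64, 64, 64, 2)
```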

class ScaleBlock(func, n_layers=1, w_initializer=None, w_regularizer=None, acti_func='relu', name='scaleblock')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

Implementation of the ScaleBlock described in Fidon et al., "Scalable multimodal convolutional networks for brain tumour segmentation", MICCAI '17

See Fig. 2(a) of the paper for diagram details (SN BackEnd).
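For intuition only, the sketch below shows the kind of cross-modality reduction the merging step performs; merge_modalities is a hypothetical helper written for this example, not NiftyNet's implementation:

```python
import tensorflow as tf

def merge_modalities(feature_maps, func='MAX'):
    """Illustrative cross-modality merge (not NiftyNet's implementation).

    feature_maps: list of tensors, one per modality, each of shape
    (batch, x, y, z, channels).
    """
    stacked = tf.stack(feature_maps, axis=-1)  # new trailing modality axis
    if func == 'MAX':
        return tf.reduce_max(stacked, axis=-1)
    if func == 'AVERAGE':
        return tf.reduce_mean(stacked, axis=-1)
    raise ValueError("func must be 'MAX' or 'AVERAGE'")
```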

__init__(func, n_layers=1, w_initializer=None, w_regularizer=None, acti_func='relu', name='scaleblock')[source]
Parameters:
  • func – merging function (SUPPORTED_OP: MAX, AVERAGE)
  • n_layers – int, number of layers
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • acti_func – activation function to use
  • name – layer name
layer_op(input_tensor, is_training)[source]
Parameters:
  • input_tensor – tensor, input to the network
  • is_training – boolean, True if network is in training mode
Returns: merged tensor after backend layers
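A hedged usage sketch of ScaleBlock, assuming the callable-layer convention noted above; the input shape and choice of merging function are illustrative:

```python
import tensorflow as tf
from niftynet.network.scalenet import ScaleBlock

# Two modalities concatenated along the channel axis (illustrative shape).
input_tensor = tf.ones(shape=(1, 32, 32, 32, 2))

block = ScaleBlock('AVERAGE', n_layers=1, acti_func='relu')
merged = block(input_tensor, is_training=True)
```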