# niftynet.network.highres3dnet module

class HighRes3DNet(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='prelu', name='HighRes3DNet')

Bases: niftynet.network.base_net.BaseNet

Implementation of HighRes3DNet:

Li et al., “On the compactness, efficiency, and representation of 3D convolutional networks: Brain parcellation as a pretext task”, IPMI ’17

### Building blocks

{ } - residual connection: see He et al., “Deep Residual Learning for Image Recognition”, CVPR ’16

[CONV] - convolutional layer in the form Activation(Convolution(X)), where X is the input tensor (or the output of the previous layer) and Activation applies, in order:

  1. batch normalisation
  2. an activation function (ReLU, PReLU, sigmoid, tanh, etc.)
  3. a dropout layer, sampling from a Bernoulli distribution when p < 1

[CONV*] - convolutional layer with no activation function

(r)[D-CONV(d)] - r dilated convolutional layers with dilation factor d, applied in pre-activation mode, D-Convolution(Activation(X)): see He et al., “Identity Mappings in Deep Residual Networks”, ECCV ’16

  • D-CONV(2): a dilated convolution with dilation factor 2
  • (2)[D-CONV(d)]: two dilated convolutional layers in a row, [D-CONV] -> [D-CONV]
  • { (2)[D-CONV(d)] }: two dilated convolutional layers within a residual block
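To make this notation concrete, here is a minimal TensorFlow/Keras sketch of the two layer types. The helper names (conv_block, dilated_conv_preact) and all hyperparameters are illustrative assumptions, not NiftyNet's internals:

```python
import tensorflow as tf

def conv_block(x, n_chns, training, keep_prob=1.0):
    # [CONV]: Activation(Convolution(X)), where Activation is
    # batch-norm -> activation function -> (optional) dropout.
    x = tf.keras.layers.Conv3D(n_chns, 3, padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x, training=training)
    x = tf.keras.layers.PReLU(shared_axes=[1, 2, 3])(x)
    if keep_prob < 1.0:  # dropout only applies when p < 1, per the legend above
        x = tf.keras.layers.Dropout(1.0 - keep_prob)(x, training=training)
    return x

def dilated_conv_preact(x, n_chns, d, training):
    # [D-CONV(d)]: pre-activation mode, D-Convolution(Activation(X)).
    x = tf.keras.layers.BatchNormalization()(x, training=training)
    x = tf.keras.layers.PReLU(shared_axes=[1, 2, 3])(x)
    x = tf.keras.layers.Conv3D(n_chns, 3, dilation_rate=d, padding='same')(x)
    return x
```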

### Diagram

INPUT -> [CONV] -> { (3)[D-CONV(1)] } -> { (3)[D-CONV(2)] } -> { (3)[D-CONV(4)] } -> [CONV*] -> Loss
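Reusing the two helpers sketched under Building blocks, the diagram reads as a pipeline along the following lines. The channel count of 16 is an assumption for illustration:

```python
def highres3dnet_sketch(x, num_classes, training=False):
    x = conv_block(x, 16, training)                      # [CONV]
    for d in (1, 2, 4):                                  # dilation factors 1, 2, 4
        shortcut = x
        for _ in range(3):                               # (3)[D-CONV(d)]
            x = dilated_conv_preact(x, 16, d, training)
        x = x + shortcut                                 # { }: residual connection
    # [CONV*]: final 1x1x1 convolution, no activation; one channel per class
    return tf.keras.layers.Conv3D(num_classes, 1, padding='same')(x)
```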

### Constraints

  • Input image size should be divisible by 8

__init__(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='prelu', name='HighRes3DNet')
Parameters:
  • num_classes – int, number of output channels
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • b_initializer – bias initialisation for network
  • b_regularizer – bias regularisation for network
  • acti_func – activation function to use
  • name – layer name
layer_op(images, is_training=True, layer_id=-1, **unused_kwargs)
Parameters:
  • images – tensor to input to the network; its spatial size must be divisible by 8
  • is_training – boolean, True if network is in training mode
  • layer_id – int, index of the layer to return as output
  • unused_kwargs – additional keyword arguments, ignored
Returns:

output of the layer indicated by layer_id
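A minimal usage sketch, assuming a working NiftyNet installation on TensorFlow 1.x; the 64-voxel spatial size is arbitrary but satisfies the divisible-by-8 constraint:

```python
import tensorflow as tf
from niftynet.network.highres3dnet import HighRes3DNet

net = HighRes3DNet(num_classes=4, acti_func='prelu')

# One 64x64x64 single-channel volume (64 is divisible by 8).
images = tf.placeholder(tf.float32, [1, 64, 64, 64, 1])

# Calling the network invokes layer_op; the output has num_classes
# channels and the same spatial size as the input.
logits = net(images, is_training=True)
```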

class HighResBlock(n_output_chns, kernels=(3, 3), acti_func='relu', w_initializer=None, w_regularizer=None, with_res=True, name='HighResBlock')

Bases: niftynet.layer.base_layer.TrainableLayer

This class defines a high-resolution block with residual connections.

kernels

  • specify the kernel size of each convolutional layer
  • e.g. kernels=(5, 5, 5) indicates three conv layers, each with kernel_size 5

with_res

  • whether to add a residual connection bypassing the conv layers
__init__(n_output_chns, kernels=(3, 3), acti_func='relu', w_initializer=None, w_regularizer=None, with_res=True, name='HighResBlock')
Parameters:
  • n_output_chns – int, number of output channels
  • kernels – list of layer kernel sizes
  • acti_func – activation function to use
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • with_res – boolean, set to True to add a residual connection bypassing the conv layers
  • name – layer name
layer_op(input_tensor, is_training)
Parameters:
  • input_tensor – tensor, input to the block
  • is_training – boolean, True if network is in training mode
Returns:

tensor, output of the residual block
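
A corresponding sketch for using HighResBlock on its own, again assuming a TensorFlow 1.x NiftyNet environment; the shapes are illustrative:

```python
import tensorflow as tf
from niftynet.network.highres3dnet import HighResBlock

# Two 3x3x3 conv layers inside a residual block.
block = HighResBlock(n_output_chns=16, kernels=(3, 3), with_res=True)

# Input channels match n_output_chns so the identity shortcut adds cleanly.
x = tf.placeholder(tf.float32, [1, 32, 32, 32, 16])
out = block(x, is_training=True)  # calls layer_op; same shape as x
```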