niftynet.network.highres3dnet module

class HighRes3DNet(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='prelu', name='HighRes3DNet')[source]

Bases: niftynet.network.base_net.BaseNet

Implementation of HighRes3DNet:

Li et al., “On the compactness, efficiency, and representation of 3D convolutional networks: Brain parcellation as a pretext task”, IPMI ‘17

### Building blocks

{ } - Residual connections: see He et al., “Deep residual learning for image recognition”, CVPR ‘16
[CONV] - Convolutional layer in the form: Activation(Convolution(X))

where X = the input tensor or the output of the previous layer

and Activation is a composite function which includes:

  1. Batch-Norm
  2. Activation function (ReLU, PReLU, Sigmoid, Tanh, etc.)
  3. Drop-out layer, applied by sampling random variables from a Bernoulli distribution if p < 1

[CONV*] - Convolutional layer with no activation function
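
As a rough illustration only (not NiftyNet's own layer implementation), the [CONV] and [CONV*] building blocks could be sketched with `tf.keras` layers as follows; the function names, defaults, and channel counts are hypothetical:

```python
import tensorflow as tf

def conv_block(x, n_chns, kernel_size=3, keep_prob=1.0):
    # [CONV]: Activation(Convolution(X)), where Activation is
    # batch-norm -> non-linearity -> (optional) dropout
    x = tf.keras.layers.Conv3D(n_chns, kernel_size, padding='same', use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.PReLU(shared_axes=[1, 2, 3])(x)
    if keep_prob < 1.0:
        # dropout: each unit is kept with probability keep_prob (Bernoulli sampling)
        x = tf.keras.layers.Dropout(rate=1.0 - keep_prob)(x)
    return x

def conv_block_no_acti(x, n_chns, kernel_size=1):
    # [CONV*]: convolution only, no batch-norm / activation / dropout
    return tf.keras.layers.Conv3D(n_chns, kernel_size, padding='same')(x)
```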

(r)[D-CONV(d)] - Convolutional layer with dilated convolutions, with blocks in pre-activation mode: D-Convolution(Activation(X)); see He et al., “Identity Mappings in Deep Residual Networks”, ECCV ‘16

  • dilation factor = d, e.g. D-CONV(2) : dilated convolution with dilation factor 2
  • repeat factor = r, e.g. (2)[D-CONV(d)] : 2 dilated convolutional layers in a row, [D-CONV] -> [D-CONV]
  • { (2)[D-CONV(d)] } : 2 dilated convolutional layers within a residual block
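
As an illustrative sketch (again not NiftyNet's code), a pre-activation dilated residual block of this form could look like the following in `tf.keras`; `highres_block` and its defaults are assumptions:

```python
import tensorflow as tf

def highres_block(x, n_chns, dilation=1, repeat=2):
    # { (r)[D-CONV(d)] }: r dilated convolutions in pre-activation mode,
    # wrapped in a residual (identity) connection
    shortcut = x
    for _ in range(repeat):
        # pre-activation: batch-norm and non-linearity applied before the convolution
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.PReLU(shared_axes=[1, 2, 3])(x)
        x = tf.keras.layers.Conv3D(n_chns, 3, padding='same',
                                   dilation_rate=dilation, use_bias=False)(x)
    # residual connection: element-wise sum with the block input
    # (assumes the input already has n_chns channels)
    return tf.keras.layers.Add()([shortcut, x])
```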

### Diagram

INPUT --> [CONV] --> { (3)[D-CONV(1)] } --> { (3)[D-CONV(2)] } --> { (3)[D-CONV(4)] } --> [CONV*] --> Loss
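
Reading the diagram left to right, and reusing the hypothetical `conv_block`, `highres_block`, and `conv_block_no_acti` sketches above, the forward pass could be composed roughly as follows; the channel widths are illustrative, not the network's actual configuration:

```python
def highres3dnet_forward(x, num_classes):
    # INPUT --> [CONV] --> {(3)[D-CONV(1)]} --> {(3)[D-CONV(2)]} --> {(3)[D-CONV(4)]} --> [CONV*]
    x = conv_block(x, 16)                       # initial [CONV]
    for _ in range(3):
        x = highres_block(x, 16, dilation=1)    # three residual blocks, dilation 1
    for _ in range(3):
        x = highres_block(x, 16, dilation=2)    # three residual blocks, dilation 2
    for _ in range(3):
        x = highres_block(x, 16, dilation=4)    # three residual blocks, dilation 4
    return conv_block_no_acti(x, num_classes)   # final [CONV*] producing per-class scores
```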

layer_op(images, is_training=True, layer_id=-1, **unused_kwargs)[source]
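
A minimal usage sketch of the class itself, assuming a TensorFlow 1.x environment in which NiftyNet runs and that, as a NiftyNet TrainableLayer, the network instance can be called directly to invoke `layer_op`; the input is a 5-D tensor of shape batch x X x Y x Z x channels:

```python
import tensorflow as tf
from niftynet.network.highres3dnet import HighRes3DNet

net = HighRes3DNet(num_classes=4, acti_func='prelu')
images = tf.ones(shape=[1, 32, 32, 32, 1], dtype=tf.float32)  # toy 3-D volume
logits = net(images, is_training=True)  # calls layer_op under the hood
print(logits.shape)  # expected: (1, 32, 32, 32, 4), one score map per class
```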
class HighResBlock(n_output_chns, kernels=(3, 3), acti_func='relu', w_initializer=None, w_regularizer=None, with_res=True, name='HighResBlock')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

This class defines a high-resolution block with residual connections.

kernels

  • specify kernel sizes of each convolutional layer
  • e.g.: kernels=(5, 5, 5) indicates three conv layers of kernel_size 5

with_res

  • whether to add residual connections to bypass the conv layers
layer_op(input_tensor, is_training)[source]
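
A similar hedged sketch for HighResBlock, assuming the block can be called directly to run `layer_op` and that the input channel count matches `n_output_chns` so the residual sum is well defined:

```python
import tensorflow as tf
from niftynet.network.highres3dnet import HighResBlock

# two 3x3x3 convolutions inside one residual block, 8 output channels
block = HighResBlock(n_output_chns=8, kernels=(3, 3), acti_func='relu', with_res=True)
features = tf.ones(shape=[1, 16, 16, 16, 8], dtype=tf.float32)
output = block(features, is_training=True)  # same spatial shape, 8 channels
```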