class UNet3D(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='leakyrelu', name='NoNewNet')

Bases: niftynet.layer.base_layer.TrainableLayer

Implementation of No New-Net

Isensee et al., “No New-Net”, MICCAI BrainLesion Workshop 2018.

The major changes between this and our standard 3D U-Net:

  • input size == output size: padded convolutions are used
  • leaky ReLU as the non-linearity
  • reduced number of filters before upsampling
  • instance normalization (not batch normalization)
  • fits 128x128x128 volumes with a batch size of 2 on one TitanX GPU for training
  • no learned upsampling: linear resizing
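Instance normalization, one of the changes listed above, normalizes each (sample, channel) slice over its spatial axes only, so the statistics never mix samples across the batch. A minimal NumPy sketch of the idea (the function name `instance_norm` and the NDHWC layout are assumptions for illustration; the actual layer is NiftyNet's TensorFlow implementation):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) slice over its spatial axes only.

    x: array of shape (batch, D, H, W, channels) (NDHWC layout assumed).
    Unlike batch normalization, no statistics are shared across samples,
    so the layer behaves identically at batch size 1.
    """
    axes = (1, 2, 3)  # spatial dimensions only
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(2, 4, 4, 4, 3)
y = instance_norm(x)
# each (sample, channel) slice of y now has ~zero mean and ~unit variance
```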

layer_op(thru_tensor, is_training=True, **unused_kwargs)
  • thru_tensor – the input tensor; it is modified in place as it flows through the network
  • is_training – whether the network is run in training mode
  • unused_kwargs – additional keyword arguments; ignored
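Because the network uses padded ("same") convolutions, the spatial shape of thru_tensor is preserved from input to output. A naive single-channel NumPy sketch of such a shape-preserving 3D convolution (the helper `conv3d_same` is hypothetical, not part of NiftyNet, and computes a cross-correlation for simplicity):

```python
import numpy as np

def conv3d_same(vol, kernel):
    # Zero-pad the volume by half the kernel size on each side so the
    # output has exactly the same shape as the input ("same" padding).
    kd, kh, kw = kernel.shape
    pad = [(k // 2, k // 2) for k in kernel.shape]
    padded = np.pad(vol, pad, mode='constant')
    out = np.zeros_like(vol, dtype=float)
    for i in range(vol.shape[0]):
        for j in range(vol.shape[1]):
            for k in range(vol.shape[2]):
                out[i, j, k] = np.sum(padded[i:i + kd, j:j + kh, k:k + kw] * kernel)
    return out

vol = np.random.randn(8, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0  # 3x3x3 box filter
smoothed = conv3d_same(vol, kernel)
# smoothed.shape == vol.shape: no spatial shrinkage, unlike valid convs
```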

class UNetBlock(func, n_chns, kernels, w_initializer=None, w_regularizer=None, with_downsample_branch=False, acti_func='leakyrelu', name='UNet_block')

Bases: niftynet.layer.base_layer.TrainableLayer

layer_op(thru_tensor, is_training)
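The decoder path of this network contains no learned upsampling: feature maps are enlarged by linear resizing. A NumPy sketch of axis-wise linear resizing (the helper `resize_linear` is hypothetical; the real layer uses TensorFlow resizing ops):

```python
import numpy as np

def resize_linear(vol, factor=2):
    """Enlarge every axis of vol by `factor` using linear interpolation.

    This mimics "no learned upsampling: linear resizing" — no transposed
    convolutions or trainable weights are involved.
    """
    for axis in range(vol.ndim):
        n = vol.shape[axis]
        old = np.arange(n)
        new = np.linspace(0, n - 1, n * factor)
        # interpolate along one axis at a time
        vol = np.apply_along_axis(lambda v: np.interp(new, old, v), axis, vol)
    return vol

up = resize_linear(np.random.randn(4, 4, 4))
# up.shape == (8, 8, 8): each axis doubled by interpolation
```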