niftynet.network.no_new_net module
class UNet3D(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='leakyrelu', name='NoNewNet')[source]

   Bases: niftynet.layer.base_layer.TrainableLayer
   Implementation of No New-Net:

   Isensee et al., "No New-Net", MICCAI BrainLesion Workshop 2018.

   The major changes between this and the standard 3D U-Net:

   * input size == output size: padded convolutions are used
   * leaky ReLU as the non-linearity
   * reduced number of filters before upsampling
   * instance normalization (not batch normalization)
   * fits a 128x128x128 volume with batch size 2 on one TitanX GPU for training
   * no learned upsampling: linear resizing
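   The first property above (input size == output size) follows from the standard convolution arithmetic. A minimal sketch, not taken from the NiftyNet source, illustrating why "same" padding preserves the spatial size while unpadded ("valid") convolutions shrink it:

   ```python
   # Sketch (hypothetical helper, not part of NiftyNet): spatial output
   # size of a convolution along one dimension.
   def conv_output_size(in_size, kernel, stride=1, pad=0):
       return (in_size + 2 * pad - kernel) // stride + 1

   # With kernel k = 3 and padding p = (k - 1) // 2 = 1, size is preserved,
   # so the network's output segmentation matches the input volume:
   for size in (128, 96, 64):
       assert conv_output_size(size, kernel=3, pad=1) == size

   # Without padding, each 3x3x3 convolution shrinks every axis by 2,
   # which is why the original 3D U-Net produces a cropped output:
   print(conv_output_size(128, kernel=3, pad=0))  # -> 126
   ```

   The same arithmetic applies independently to each of the three spatial axes of a 3D volume.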