niftynet.network.vnet module
class VNet(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='VNet')

Bases: niftynet.network.base_net.BaseNet
### Description
Implementation of V-Net:
Milletari et al., "V-Net: Fully convolutional neural networks for volumetric medical image segmentation", 3DV '16
### Building Blocks
- (n)[dBLOCK]: downsampling VNet block with n conv layers (kernel size = 5, with residual connections, activation = relu as default) followed by a downsampling conv layer (kernel size = 2, stride = 2) + final activation
- (n)[uBLOCK]: upsampling VNet block with n conv layers (kernel size = 5, with residual connections, activation = relu as default) followed by a deconv layer (kernel size = 2, stride = 2) + final activation
- (n)[sBLOCK]: VNet block with n conv layers (kernel size = 5, with residual connections, activation = relu as default) followed by a 1x1x1 conv layer (kernel size = 1, stride = 1) + final activation
### Diagram
INPUT --> (1)[dBLOCK] - - - - - - - - - - - - - - - - (1)[sBLOCK] --> OUTPUT
              (2)[dBLOCK] - - - - - - - - - - - - (2)[uBLOCK]
                  (3)[dBLOCK] - - - - - - - (3)[uBLOCK]
                      (3)[dBLOCK] - - - (3)[uBLOCK]
                            ----(3)[uBLOCK]----
### Constraints
- Input size should be divisible by 8
- Input should be either 2D or 3D
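Both constraints can be checked up front before feeding data to the network. A minimal pure-Python sketch (the helper function is hypothetical, not part of the NiftyNet API):

```python
def validate_vnet_input(shape):
    """Check the two documented constraints on a V-Net input:
    spatial rank 2 or 3, and every spatial dimension divisible by 8,
    so the stride-2 downsampling/upsampling path round-trips cleanly."""
    if len(shape) not in (2, 3):
        raise ValueError("input must be 2D or 3D, got rank %d" % len(shape))
    for dim in shape:
        if dim % 8 != 0:
            raise ValueError("spatial dim %d is not divisible by 8" % dim)
    return True
```

For example, a (64, 64, 64) volume passes, while (64, 65, 64) or a rank-4 shape raises a ValueError.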
__init__(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='VNet')

Parameters:
- num_classes – int, number of output channels
- w_initializer – weight initialisation for the network
- w_regularizer – weight regularisation for the network
- b_initializer – bias initialisation for the network
- b_regularizer – bias regularisation for the network
- acti_func – activation function to use
- name – layer name
layer_op(images, is_training=True, layer_id=-1, **unused_kwargs)

Parameters:
- images – tensor input to the network; spatial size has to be divisible by 8
- is_training – boolean, True if the network is in training mode
- layer_id – not in use
- unused_kwargs – other conditional arguments, not in use

Returns: tensor, network output
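The V-shaped data flow that layer_op builds (per the diagram above) can be sketched schematically, with string labels standing in for tensors; the block names follow the diagram, but the function itself is illustrative, not the NiftyNet implementation:

```python
def vnet_wiring(x="IN"):
    """Schematic of the V-shaped data flow: four downsampling stages,
    a bottom upsampling block, then decoder stages that each combine
    the upsampled features with the same-depth encoder output
    (the dashed skip connections in the diagram)."""
    skips = []
    for n in (1, 2, 3, 3):                 # (n)[dBLOCK] encoder stages
        x = "dBLOCK%d(%s)" % (n, x)
        skips.append(x)                    # forwarded to the decoder
    x = "uBLOCK3(%s)" % x                  # bottom of the V
    for n, skip in zip((3, 3, 2), skips[:0:-1]):
        x = "uBLOCK%d(concat(%s, %s))" % (n, skip, x)
    return "sBLOCK1(concat(%s, %s))" % (skips[0], x)
```

Printing vnet_wiring() shows that the outermost operation is the (1)[sBLOCK] combining the first encoder stage's features with the fully decoded tensor, matching the top row of the diagram.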
class VNetBlock(func, n_conv, n_feature_chns, n_output_chns, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='vnet_block')

Bases: niftynet.layer.base_layer.TrainableLayer
__init__(func, n_conv, n_feature_chns, n_output_chns, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='vnet_block')

Parameters:
- func – string, defines the final block operation (downsampling, upsampling, or same)
- n_conv – int, number of conv layers to apply
- n_feature_chns – int, number of feature channels (output channels) for each conv layer
- n_output_chns – int, number of output channels of the final block operation (func)
- w_initializer – weight initialisation of the convolutional layers
- w_regularizer – weight regularisation of the convolutional layers
- b_initializer – bias initialisation of the convolutional layers
- b_regularizer – bias regularisation of the convolutional layers
- acti_func – activation function to use
- name – layer name
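The pattern these parameters describe — n_conv convolutions with a residual connection, then the final func operation — can be sketched with placeholder callables standing in for the conv/deconv layers. This is an illustrative NumPy sketch of the block structure, not the TrainableLayer implementation, and it assumes the residual sum is applied before the final operation:

```python
import numpy as np

def vnet_block(x, n_conv, conv, final_op, acti=np.tanh):
    """Schematic VNetBlock: n_conv conv layers (each followed by the
    activation), a residual sum with the block input, then the final
    operation (func: downsample, upsample, or 1x1x1 conv) + activation.
    `conv` and `final_op` are caller-supplied placeholder callables."""
    features = x
    for _ in range(n_conv):
        features = acti(conv(features))
    features = features + x            # residual connection
    return acti(final_op(features))
```

With identity placeholders for conv and final_op, the output keeps the input's shape, which is the invariant the residual connection relies on.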