niftynet.network.dense_vnet module

class DenseVNet(num_classes, hyperparams={}, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='DenseVNet')[source]

Bases: niftynet.network.base_net.BaseNet

### Description

Implementation of Dense-V-Net:

Gibson et al., “Automatic multi-organ segmentation on abdominal CT with dense V-networks”, IEEE Transactions on Medical Imaging, 2018.

### Diagram

DFS = Dense Feature Stack Block

  • The initial image is first downsampled to a given size.
  • Each DFS+SD (skip-and-downsample) block outputs a skip link and a downsampled output.
  • All outputs are upscaled to the initial downsampled size.
  • If an initial prior is given, it is added to the output prediction.
Input
  |
--[ DFS ]--------------------[ Conv ]------------[ Conv ]------[+]-->
     |                                              |            |
     ---[ DFS ]--------------[ Conv ]---------------             |
            |                                       |            |
            ---[ DFS ]-------[ Conv ]---------------             |
                                                    [ Prior ]----

The DenseFeatureStackBlockWithSkipAndDownsample layer implements [ DFS + Conv + Downsampling ] in a single module and outputs two elements:

  • Skip layer: [ DFS + Conv ]
  • Downsampled output: [ DFS + Down ]

### Constraints

  • Input size has to be divisible by 2 * dilation_rates

__init__(num_classes, hyperparams={}, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='DenseVNet')[source]
Parameters:
  • num_classes – int, number of output channels (one per segmentation class)
  • hyperparams – dictionary, network hyperparameters
  • w_initializer – weight initialisation for network
  • w_regularizer – weight regularisation for network
  • b_initializer – bias initialisation for network
  • b_regularizer – bias regularisation for network
  • acti_func – activation function to use
  • name – layer name
create_network()[source]
layer_op(input_tensor, is_training=True, layer_id=-1, keep_prob=0.5, **unused_kwargs)[source]
Parameters:
  • input_tensor – tensor to input to the network, size has to be divisible by 2*dilation_rates
  • is_training – boolean, True if network is in training mode
  • layer_id – not in use
  • keep_prob – float, probability of keeping each node during drop-out
  • unused_kwargs – additional keyword arguments (not used)
Returns: network prediction
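For orientation, a minimal usage sketch (all shapes and values below are illustrative assumptions, not library defaults): the network object is callable and dispatches to layer_op.

```python
import tensorflow as tf
from niftynet.network.dense_vnet import DenseVNet

# Hypothetical 5-D input batch [batch, x, y, z, channels]; the spatial
# size 72 is an assumed value chosen to satisfy the divisibility
# constraint above for a typical dilation setup.
images = tf.placeholder(tf.float32, shape=[1, 72, 72, 72, 1])

net = DenseVNet(num_classes=9)  # e.g. 8 organ labels + background (assumed)
logits = net(images, is_training=True, keep_prob=0.5)
```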

class SpatialPriorBlock(prior_shape, output_shape, name='spatial_prior_block')[source]

Bases: niftynet.layer.base_layer.TrainableLayer

__init__(prior_shape, output_shape, name='spatial_prior_block')[source]
Parameters:
  • prior_shape – shape of spatial prior
  • output_shape – target shape for resampling
  • name – layer name
layer_op()[source]
Returns: spatial prior resampled to the target shape
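To illustrate what the resampling step computes (this is not the library's implementation, which operates on TF tensors inside the graph), a prior volume can be linearly interpolated to the target shape; scipy.ndimage.zoom is used here purely for demonstration:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_prior(prior, output_shape):
    # Per-axis zoom factors mapping prior.shape onto output_shape
    factors = [t / float(s) for t, s in zip(output_shape, prior.shape)]
    return zoom(prior, factors, order=1)  # order=1: linear interpolation

prior = np.random.rand(12, 12, 12)               # toy spatial prior
resampled = resample_prior(prior, (24, 24, 24))  # -> shape (24, 24, 24)
```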
class DenseFeatureStackBlock(n_dense_channels, kernel_size, dilation_rates, use_bdo, name='dense_feature_stack_block', **kwargs)[source]

Bases: niftynet.layer.base_layer.TrainableLayer

Dense Feature Stack Block

  • The stack is initialised with the input from the preceding layers.
  • The output of each convolution layer is iteratively appended to the feature stack.
  • Each successive convolution is performed over all the previously stacked channels.

Diagram example:

feature_stack = [Input]
feature_stack = [feature_stack, conv(feature_stack)]
feature_stack = [feature_stack, conv(feature_stack)]
feature_stack = [feature_stack, conv(feature_stack)]
…
Output = [feature_stack, conv(feature_stack)]
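The same pattern as a hedged sketch in code, where convs stands in for the block's convolutional layers (an assumed name, not the actual NiftyNet internals):

```python
import tensorflow as tf

def dense_feature_stack(input_tensor, convs):
    """Illustrative dense stacking: each conv reads the whole stack so far."""
    stack = [input_tensor]
    for conv in convs:
        # every convolution sees the concatenation of all stacked features
        stack.append(conv(tf.concat(stack, axis=-1)))
    return stack
```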
__init__(n_dense_channels, kernel_size, dilation_rates, use_bdo, name='dense_feature_stack_block', **kwargs)[source]
Parameters:
  • n_dense_channels – int, number of dense channels in each block
  • kernel_size – kernel size for convolutional layers
  • dilation_rates – list of dilation rates, one for each convolutional layer in the block
  • use_bdo – boolean, set to True to use batch-wise drop-out
  • name – tensorflow scope name
  • kwargs – additional keyword arguments
create_block()[source]
Returns: dense feature stack block
layer_op(input_tensor, is_training=True, keep_prob=None)[source]
Parameters:
  • input_tensor – tf tensor, input to the DenseFeatureStackBlock
  • is_training – boolean, True if network is in training mode
  • keep_prob – float, probability of keeping each node during drop-out
Returns: feature stack
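A hypothetical instantiation of the block; parameter values and the input shape are chosen only for illustration:

```python
import tensorflow as tf
from niftynet.network.dense_vnet import DenseFeatureStackBlock

block = DenseFeatureStackBlock(
    n_dense_channels=4,         # assumed value
    kernel_size=3,
    dilation_rates=[1, 2, 4],   # assumed: one rate per conv layer
    use_bdo=False)

input_tensor = tf.placeholder(tf.float32, shape=[1, 24, 24, 24, 8])
feature_stack = block(input_tensor, is_training=True, keep_prob=0.5)
```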

class DenseFeatureStackBlockWithSkipAndDownsample(n_dense_channels, kernel_size, dilation_rates, n_seg_channels, n_down_channels, use_bdo, name='dense_feature_stack_block', **kwargs)[source]

Bases: niftynet.layer.base_layer.TrainableLayer

Dense Feature Stack with Skip Layer and Downsampling

  • Downsampling is done through strided convolution.
--[ DenseFeatureStackBlock ]---------[ Conv ]------- Skip layer
               |
               ------------------------------------- Downsampled output

See DenseFeatureStackBlock for more info.
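A hedged sketch of the two paths (layer choices here are assumptions; the actual block is built from NiftyNet's own layer classes): the skip path convolves the stacked features, while the downsample path uses a stride-2 convolution as noted above.

```python
import tensorflow as tf

def skip_and_downsample(feature_stack, n_seg_channels, n_down_channels):
    features = tf.concat(feature_stack, axis=-1)
    # Skip path: plain convolution producing the segmentation features
    skip = tf.layers.conv3d(features, n_seg_channels, kernel_size=3,
                            padding='same')
    # Downsample path: stride-2 convolution halving the spatial size
    down = tf.layers.conv3d(features, n_down_channels, kernel_size=3,
                            strides=2, padding='same')
    return skip, down
```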

__init__(n_dense_channels, kernel_size, dilation_rates, n_seg_channels, n_down_channels, use_bdo, name='dense_feature_stack_block', **kwargs)[source]
Parameters:
  • n_dense_channels – int, number of dense channels
  • kernel_size – kernel size for convolutional layers
  • dilation_rates – list of dilation rates, one for each convolutional layer in the block
  • n_seg_channels – int, number of segmentation channels
  • n_down_channels – int, number of output channels when downsampling
  • use_bdo – boolean, set to True to use batch-wise drop-out
  • name – layer name
  • kwargs – additional keyword arguments
create_block()[source]
Returns: Dense Feature Stack with Skip Layer and Downsampling block
layer_op(input_tensor, is_training=True, keep_prob=None)[source]
Parameters:
  • input_tensor – tf tensor, input to the DenseFeatureStackBlockWithSkipAndDownsample block
  • is_training – boolean, True if network is in training mode
  • keep_prob – float, probability of keeping each node during drop-out
Returns: feature stack after skip convolution, and feature stack after downsampling
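A hypothetical instantiation showing the two outputs; parameter values and the input shape are illustrative only:

```python
import tensorflow as tf
from niftynet.network.dense_vnet import (
    DenseFeatureStackBlockWithSkipAndDownsample)

block = DenseFeatureStackBlockWithSkipAndDownsample(
    n_dense_channels=4, kernel_size=3, dilation_rates=[1, 2, 4],
    n_seg_channels=12, n_down_channels=24, use_bdo=False)  # assumed values

input_tensor = tf.placeholder(tf.float32, shape=[1, 24, 24, 24, 8])
skip, down = block(input_tensor, is_training=True, keep_prob=0.5)
```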