niftynet.layer.loss_segmentation module¶
Loss functions for multiclass segmentation

class
LossFunction
(n_class, loss_type='Dice', softmax=True, loss_func_params=None, name='loss_function')[source]¶ Bases:
niftynet.layer.base_layer.Layer

layer_op
(prediction, ground_truth, weight_map=None)[source]¶ Compute loss from prediction and ground truth; the computed loss map is weighted by weight_map.
If prediction is a list of tensors, each element of the list will be compared against ground_truth and then weighted by weight_map (assuming the same ground truth and weight map across scales).
Parameters:  prediction – input will be reshaped into
(batch_size, N_voxels, num_classes)
 ground_truth – input will be reshaped into
(batch_size, N_voxels, ...)
 weight_map – input will be reshaped into
(batch_size, N_voxels, ...)
Returns: the loss


labels_to_one_hot
(ground_truth, num_classes=1)[source]¶ Converts ground truth labels to one-hot, sparse tensors. Used extensively in segmentation losses.
Parameters:  ground_truth – ground truth categorical labels (rank N)
 num_classes – A scalar defining the depth of the one hot dimension (see depth of tf.one_hot)
Returns: one-hot sparse tf tensor (rank N+1; new axis appended at the end)
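The conversion can be illustrated with a dense numpy sketch (the real function returns a sparse tf tensor; `labels_to_one_hot_np` here is a hypothetical illustration, not part of the module):

```python
import numpy as np

def labels_to_one_hot_np(ground_truth, num_classes):
    # Dense numpy analogue of the sparse conversion: a new one-hot
    # axis of size num_classes is appended at the end (rank N -> N+1).
    gt = np.asarray(ground_truth, dtype=np.int64)
    return np.eye(num_classes)[gt]

labels = np.array([[0, 1],
                   [2, 1]])          # rank 2 categorical labels
one_hot = labels_to_one_hot_np(labels, num_classes=3)
print(one_hot.shape)                 # (2, 2, 3)
```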

undecided_loss
(prediction, ground_truth, weight_map=None)[source]¶ Parameters:  prediction –
 ground_truth –
 weight_map –
Returns:

volume_enforcement
(prediction, ground_truth, weight_map=None, eps=0.001, hard=False)[source]¶ Computes a volume enforcement loss to ensure that the obtained volumes are close, and to avoid empty results when something is expected.
Parameters:  prediction –
 ground_truth – labels
 weight_map – potential weight map to apply
 eps – epsilon to use as regulariser
Returns:

volume_enforcement_fin
(prediction, ground_truth, weight_map=None, eps=0.001)[source]¶ Computes a volume enforcement loss to ensure that the obtained volumes are close, and to avoid empty results when something is expected.
Parameters:  prediction –
 ground_truth –
 weight_map –
 eps –
Returns:

generalised_dice_loss
(prediction, ground_truth, weight_map=None, type_weight='Square')[source]¶  Function to calculate the Generalised Dice Loss defined in
 Sudre, C. et al. (2017) Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. DLMIA 2017
Parameters:  prediction – the logits
 ground_truth – the segmentation ground truth
 weight_map –
 type_weight – type of weighting allowed between labels (choice between Square (square of inverse of volume), Simple (inverse of volume) and Uniform (no weighting))
Returns: the loss
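The three weighting options can be sketched in numpy (a hypothetical dense re-implementation for illustration, assuming the prediction has already been softmaxed and the ground truth one-hot encoded as (n_voxels, num_classes) arrays):

```python
import numpy as np

def generalised_dice_loss_np(probs, one_hot, type_weight='Square', eps=1e-8):
    ref_vol = one_hot.sum(axis=0)             # per-class reference volume
    seg_vol = probs.sum(axis=0)               # per-class predicted volume
    intersect = (probs * one_hot).sum(axis=0)
    if type_weight == 'Square':               # square of inverse of volume
        weights = 1.0 / (ref_vol ** 2 + eps)
    elif type_weight == 'Simple':             # inverse of volume
        weights = 1.0 / (ref_vol + eps)
    else:                                     # 'Uniform': no weighting
        weights = np.ones_like(ref_vol)
    numerator = 2.0 * (weights * intersect).sum()
    denominator = (weights * (ref_vol + seg_vol)).sum()
    return 1.0 - numerator / (denominator + eps)

# A perfect prediction gives a loss close to 0.
gt = np.eye(3)[np.array([0, 1, 1, 2])]
print(generalised_dice_loss_np(gt.copy(), gt))
```

The 'Square' weighting upweights small structures, which is what makes the loss robust to class imbalance.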

dice_plus_xent_loss
(prediction, ground_truth, weight_map=None)[source]¶ Function to calculate the loss used in https://arxiv.org/pdf/1809.10486.pdf, "No New-Net", Isensee et al (used to win the Medical Segmentation Decathlon).
It is the sum of the cross-entropy and the Dice loss.
Parameters:  prediction – the logits
 ground_truth – the segmentation ground truth
 weight_map –
Returns: the loss (cross_entropy + Dice)

sensitivity_specificity_loss
(prediction, ground_truth, weight_map=None, r=0.05)[source]¶ Function to calculate a multiple-ground_truth version of the sensitivity-specificity loss defined in “Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation”, Brosch et al, MICCAI 2015, https://link.springer.com/chapter/10.1007/978-3-319-24574-4_1
error = r * (specificity part) + (1 - r) * (sensitivity part)
Parameters:  prediction – the logits
 ground_truth – segmentation ground_truth.
 r – the ‘sensitivity ratio’ (authors suggest values from 0.01-0.10 will have similar effects)
Returns: the loss
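Under one plausible reading of the formulation above (squared error split between foreground and background voxels; a hypothetical numpy sketch assuming softmaxed predictions and one-hot ground truth, not the module's exact implementation):

```python
import numpy as np

def sensitivity_specificity_loss_np(probs, one_hot, r=0.05, eps=1e-8):
    sq_err = (one_hot - probs) ** 2
    # error over foreground voxels (missed positives hurt sensitivity)
    sens_part = (sq_err * one_hot).sum(axis=0) / (one_hot.sum(axis=0) + eps)
    # error over background voxels (false positives hurt specificity)
    bg = 1.0 - one_hot
    spec_part = (sq_err * bg).sum(axis=0) / (bg.sum(axis=0) + eps)
    # weighting follows the docstring: r * spec + (1 - r) * sens
    return (r * spec_part + (1.0 - r) * sens_part).sum()

gt = np.eye(2)[np.array([0, 1, 1])]
print(sensitivity_specificity_loss_np(gt.copy(), gt))  # perfect prediction
```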

cross_entropy
(prediction, ground_truth, weight_map=None)[source]¶ Function to calculate the cross-entropy loss function
Parameters:  prediction – the logits (before softmax)
 ground_truth – the segmentation ground truth
 weight_map –
Returns: the cross-entropy loss
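The computation from logits can be sketched as (hypothetical numpy illustration; the real function works on tf tensors):

```python
import numpy as np

def cross_entropy_np(logits, labels):
    # stable log-softmax over the class axis
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # pick the log-probability of the true class at each voxel
    n_voxels = labels.shape[0]
    return -log_probs[np.arange(n_voxels), labels].mean()

logits = np.array([[4.0, 0.0],
                   [0.0, 4.0]])
labels = np.array([0, 1])
print(cross_entropy_np(logits, labels))  # small: logits agree with labels
```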

wasserstein_disagreement_map
(prediction, ground_truth, weight_map=None, M=None)[source]¶ Function to calculate the pixel-wise Wasserstein distance between the flattened prediction and the flattened labels (ground_truth) with respect to the distance matrix on the label space M.
Parameters:  prediction – the logits after softmax
 ground_truth – segmentation ground_truth
 M – distance matrix on the label space
Returns: the pixel-wise distance map (wass_dis_map)
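When the ground truth at a voxel is a single label l (a Dirac distribution), the Wasserstein distance to the predicted distribution reduces to the M-weighted expectation sum_c M[l, c] * p_c. A hypothetical numpy sketch of that case:

```python
import numpy as np

def wasserstein_disagreement_map_np(probs, labels, M):
    # probs: (n_voxels, num_classes) softmaxed prediction
    # labels: (n_voxels,) integer ground-truth labels
    # M: (num_classes, num_classes) distance matrix on the label space
    return (M[labels] * probs).sum(axis=-1)

# With M = 1 - I (all labels equally distant), the map reduces to
# 1 - probability assigned to the true label.
M = 1.0 - np.eye(3)
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1]])
labels = np.array([0, 1])
print(wasserstein_disagreement_map_np(probs, labels, M))
```

A non-trivial M encodes which label confusions are more costly, which is what distinguishes this loss from plain Dice.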

generalised_wasserstein_dice_loss
(prediction, ground_truth, weight_map=None)[source]¶ Function to calculate the Generalised Wasserstein Dice Loss defined in
Fidon, L. et al. (2017) Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks. MICCAI 2017 (BrainLes)
Parameters:  prediction – the logits
 ground_truth – the segmentation ground_truth
 weight_map –
Returns: the loss

dice
(prediction, ground_truth, weight_map=None)[source]¶ Function to calculate the Dice loss with the definition given in
Milletari, F., Navab, N., & Ahmadi, S. A. (2016) V-Net: Fully convolutional neural networks for volumetric medical image segmentation. 3DV 2016, using a square in the denominator
Parameters:  prediction – the logits
 ground_truth – the segmentation ground_truth
 weight_map –
Returns: the loss
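The squared-denominator form can be sketched as (hypothetical numpy illustration, assuming softmaxed predictions and one-hot ground truth):

```python
import numpy as np

def dice_np(probs, one_hot, eps=1e-8):
    # V-Net soft Dice: squared terms in the denominator
    intersect = (probs * one_hot).sum(axis=0)
    denom = (probs ** 2).sum(axis=0) + (one_hot ** 2).sum(axis=0)
    return 1.0 - (2.0 * intersect / (denom + eps)).mean()

gt = np.eye(2)[np.array([0, 1, 1])]
print(dice_np(gt.copy(), gt))  # perfect prediction -> ~0.0
```

dice_nosquare below differs only in using plain sums, probs.sum(axis=0) + one_hot.sum(axis=0), in the denominator.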

dice_nosquare
(prediction, ground_truth, weight_map=None)[source]¶ Function to calculate the classical dice loss
Parameters:  prediction – the logits
 ground_truth – the segmentation ground_truth
 weight_map –
Returns: the loss

tversky
(prediction, ground_truth, weight_map=None, alpha=0.5, beta=0.5)[source]¶ Function to calculate the Tversky loss for imbalanced data
Sadegh et al. (2017)
Tversky loss function for image segmentation using 3D fully convolutional deep networks
Parameters:  prediction – the logits
 ground_truth – the segmentation ground_truth
 alpha – weight of false positives
 beta – weight of false negatives
 weight_map –
Returns: the loss
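The Tversky index generalises Dice via soft true/false positive/negative counts; a hypothetical numpy sketch (alpha = beta = 0.5 recovers the Dice score):

```python
import numpy as np

def tversky_np(probs, one_hot, alpha=0.5, beta=0.5, eps=1e-8):
    tp = (probs * one_hot).sum(axis=0)          # soft true positives
    fp = (probs * (1.0 - one_hot)).sum(axis=0)  # soft false positives
    fn = ((1.0 - probs) * one_hot).sum(axis=0)  # soft false negatives
    index = tp / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - index.mean()

gt = np.eye(2)[np.array([0, 1, 1])]
probs = np.array([[0.6, 0.4],
                  [0.3, 0.7],
                  [0.4, 0.6]])
# alpha > beta penalises false positives more than false negatives
print(tversky_np(probs, gt, alpha=0.7, beta=0.3))
```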

dice_dense
(prediction, ground_truth, weight_map=None)[source]¶ Computes mean-class Dice similarity.
Parameters:  prediction – last dimension should have
num_classes
 ground_truth – segmentation ground truth (encoded as a binary matrix)
last dimension should be
num_classes
 weight_map –
Returns: 1.0 - mean(Dice similarity per class)

dice_dense_nosquare
(prediction, ground_truth, weight_map=None)[source]¶ Computes mean-class Dice similarity with no square terms in the denominator
Parameters:  prediction – last dimension should have
num_classes
 ground_truth – segmentation ground truth (encoded as a binary matrix)
last dimension should be
num_classes
 weight_map –
Returns: 1.0 - mean(Dice similarity per class)