niftynet.layer.loss_segmentation module
Loss functions for multi-class segmentation
class LossFunction(n_class, loss_type='Dice', loss_func_params=None, name='loss_function')
Bases: niftynet.layer.base_layer.Layer
layer_op(prediction, ground_truth=None, weight_map=None, var_scope=None)
Compute the loss from prediction and ground truth; the computed loss map is weighted by weight_map.
If prediction is a list of tensors, each element of the list is compared against ground_truth and weighted by weight_map.
Parameters: - prediction – input will be reshaped into (N, num_classes)
- ground_truth – input will be reshaped into (N,)
- weight_map – input will be reshaped into (N,)
- var_scope –
Returns:
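The reshape contract above (prediction to (N, num_classes), ground truth and weight map to (N,)) can be illustrated with a small NumPy sketch. The helper name is hypothetical and this is not NiftyNet's code, only the documented shape convention:

```python
import numpy as np

# Hypothetical helper illustrating the documented reshape contract:
# prediction -> (N, num_classes), ground_truth -> (N,), weight_map -> (N,).
def flatten_for_loss(prediction, ground_truth, weight_map=None):
    num_classes = prediction.shape[-1]
    prediction = prediction.reshape(-1, num_classes)   # (N, num_classes)
    ground_truth = ground_truth.reshape(-1)            # (N,)
    if weight_map is not None:
        weight_map = weight_map.reshape(-1)            # (N,)
    return prediction, ground_truth, weight_map

# A 2x2 image with 3 classes flattens to N = 4 voxels.
pred = np.zeros((2, 2, 3))
gt = np.array([[0, 1], [2, 1]])
p, g, w = flatten_for_loss(pred, gt)
```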
generalised_dice_loss(prediction, ground_truth, weight_map=None, type_weight='Square')
Function to calculate the Generalised Dice Loss defined in
Sudre, C. et al. (2017) Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. DLMIA 2017
Parameters: - prediction – the logits (before softmax)
- ground_truth – the segmentation ground truth
- weight_map –
- type_weight – type of weighting allowed between labels (choice between Square (square of inverse of volume), Simple (inverse of volume) and Uniform (no weighting))
Returns: the loss
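The Generalised Dice formula can be sketched in NumPy. Note one simplifying assumption: the sketch takes softmax probabilities directly, whereas the module's function takes logits; the epsilon handling is also illustrative, not NiftyNet's:

```python
import numpy as np

def generalised_dice_loss(prediction, ground_truth, type_weight='Square'):
    """Generalised Dice Loss (Sudre et al. 2017), NumPy sketch.

    prediction : (N, C) softmax probabilities (assumption: the module
    takes logits and applies softmax internally).
    ground_truth : (N,) integer labels.
    """
    n, c = prediction.shape
    one_hot = np.eye(c)[ground_truth]              # (N, C)
    ref_vol = one_hot.sum(axis=0)                  # per-class volume
    if type_weight == 'Square':
        weights = 1.0 / np.maximum(ref_vol, 1) ** 2
    elif type_weight == 'Simple':
        weights = 1.0 / np.maximum(ref_vol, 1)
    else:                                          # 'Uniform'
        weights = np.ones(c)
    intersect = (one_hot * prediction).sum(axis=0)
    seg_vol = prediction.sum(axis=0)
    score = 2.0 * (weights * intersect).sum() / (weights * (ref_vol + seg_vol)).sum()
    return 1.0 - score
```

The class weights rebalance the overlap term so that small structures are not dominated by large ones; 'Square' penalises large classes most strongly.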
sensitivity_specificity_loss(prediction, ground_truth, weight_map=None, r=0.05)
Function to calculate a multiple-ground_truth version of the sensitivity-specificity loss defined in “Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation”, Brosch et al., MICCAI 2015, https://link.springer.com/chapter/10.1007/978-3-319-24574-4_1
The error is the sum of r * (specificity part) and (1 - r) * (sensitivity part).
Parameters: - prediction – the logits (before softmax).
- ground_truth – segmentation ground_truth.
- r – the ‘sensitivity ratio’ (authors suggest values from 0.01-0.10 will have similar effects)
Returns: the loss
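A NumPy sketch of the combination described above. Assumptions to note: it takes softmax probabilities rather than logits, the epsilon is illustrative, and the r-weighting follows this docstring's statement (r times the specificity part plus 1 - r times the sensitivity part); consult the paper or source for the authoritative weighting:

```python
import numpy as np

def sensitivity_specificity_loss(prediction, ground_truth, r=0.05):
    """Sensitivity-specificity loss (Brosch et al. 2015), NumPy sketch.

    prediction : (N, C) softmax probabilities; ground_truth : (N,) labels.
    Per the docstring above: error = r * specificity + (1 - r) * sensitivity.
    """
    n, c = prediction.shape
    one_hot = np.eye(c)[ground_truth]
    sq_err = (one_hot - prediction) ** 2
    eps = 1e-6
    # sensitivity part: squared error on voxels belonging to each class
    sens = (sq_err * one_hot).sum(axis=0) / (one_hot.sum(axis=0) + eps)
    # specificity part: squared error on the remaining voxels
    spec = (sq_err * (1 - one_hot)).sum(axis=0) / ((1 - one_hot).sum(axis=0) + eps)
    return (r * spec + (1 - r) * sens).sum()
```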
cross_entropy(prediction, ground_truth, weight_map=None)
Function to calculate the cross-entropy loss
Parameters: - prediction – the logits (before softmax)
- ground_truth – the segmentation ground truth
- weight_map –
Returns: the cross-entropy loss
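The cross-entropy from logits can be sketched in NumPy with a numerically stable log-softmax. The weighted-mean reduction is an assumption for illustration, not necessarily the module's reduction:

```python
import numpy as np

def cross_entropy(logits, ground_truth, weight_map=None):
    """Multi-class cross-entropy from logits, NumPy sketch.

    logits : (N, C); ground_truth : (N,) integer labels.
    """
    # numerically stable log-softmax: shift by the row-wise max first
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # negative log-likelihood of the true class at each voxel
    nll = -log_softmax[np.arange(len(ground_truth)), ground_truth]
    if weight_map is not None:
        return (nll * weight_map).sum() / weight_map.sum()  # assumed reduction
    return nll.mean()
```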
wasserstein_disagreement_map(prediction, ground_truth, M)
Function to calculate the pixel-wise Wasserstein distance between the flattened prediction (pred_proba) and the flattened labels (ground_truth) with respect to the distance matrix M on the label space.
Parameters: - prediction – the logits after softmax
- ground_truth – segmentation ground_truth
- M – distance matrix on the label space
Returns: the pixelwise distance map (wass_dis_map)
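For crisp (one-hot) ground truth, the per-pixel optimal-transport cost reduces to a bilinear form in the label distributions, which a NumPy einsum expresses directly. This is a sketch of the formula, assuming both inputs are (N, C) probability maps, not the module's TensorFlow code:

```python
import numpy as np

def wasserstein_disagreement_map(prediction, ground_truth, M):
    """Pixel-wise Wasserstein distance, NumPy sketch.

    prediction   : (N, C) softmax probabilities.
    ground_truth : (N, C) one-hot labels.
    M            : (C, C) distance matrix on the label space.
    With one-hot ground truth the cost at pixel i reduces to
    sum_l sum_k M[l, k] * prediction[i, l] * ground_truth[i, k].
    """
    # contract over both label indices, leaving one distance per pixel
    return np.einsum('il,lk,ik->i', prediction, M, ground_truth)
```

With M = 1 - I (all label pairs equidistant) the map reduces to the ordinary per-pixel disagreement 1 - sum_l p_il g_il.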
generalised_wasserstein_dice_loss(prediction, ground_truth, weight_map=None)
Function to calculate the Generalised Wasserstein Dice Loss defined in
Fidon, L. et al. (2017) Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks. MICCAI 2017 (BrainLes)
Parameters: - prediction – the logits (before softmax)
- ground_truth – the segmentation ground_truth
- weight_map –
Returns: the loss
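One hedged reading of the loss, built on the disagreement map above: when M has entries in [0, 1] and the ground truth is one-hot, 1 minus the per-pixel Wasserstein distance acts as a soft true-positive score, and the loss takes a Dice-like ratio of those quantities. This is a sketch of that reading only; the paper's generalised true-positive term and the NiftyNet implementation may differ in detail (the sketch also takes M and probabilities explicitly rather than the signature above):

```python
import numpy as np

def generalised_wasserstein_dice_loss(prediction, ground_truth, M):
    """Hedged sketch of the Generalised Wasserstein Dice Loss
    (Fidon et al. 2017), assuming M has entries in [0, 1] and
    one-hot (N, C) ground_truth; not the NiftyNet implementation.
    """
    # pixel-wise Wasserstein disagreement, as in wasserstein_disagreement_map
    wass = np.einsum('il,lk,ik->i', prediction, M, ground_truth)
    true_pos = (1.0 - wass).sum()   # soft true-positive count (assumption)
    return 1.0 - 2.0 * true_pos / (2.0 * true_pos + wass.sum())
```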
dice_nosquare(prediction, ground_truth, weight_map=None)
Function to calculate the classical dice loss
Parameters: - prediction – the logits (before softmax)
- ground_truth – the segmentation ground_truth
- weight_map –
Returns: the loss
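The classical soft Dice, with plain (unsquared) sums in the denominator, sketched in NumPy. As with the other sketches, it takes softmax probabilities rather than logits, and the epsilon is an assumed smoothing term:

```python
import numpy as np

def dice_nosquare(prediction, ground_truth):
    """Classical soft Dice loss (unsquared denominator), NumPy sketch.

    prediction : (N, C) softmax probabilities; ground_truth : (N,) labels.
    """
    n, c = prediction.shape
    one_hot = np.eye(c)[ground_truth]
    eps = 1e-6  # assumed smoothing to avoid division by zero
    dice_per_class = (2.0 * (prediction * one_hot).sum(axis=0) + eps) / \
                     (prediction.sum(axis=0) + one_hot.sum(axis=0) + eps)
    return 1.0 - dice_per_class.mean()
```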
dice(prediction, ground_truth, weight_map=None)
Function to calculate the dice loss, using a square in the denominator, with the definition given in
Milletari, F., Navab, N., & Ahmadi, S. A. (2016) V-net: Fully convolutional neural networks for volumetric medical image segmentation. 3DV 2016
Parameters: - prediction – the logits (before softmax)
- ground_truth – the segmentation ground_truth
- weight_map –
Returns: the loss
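The V-Net variant differs from dice_nosquare only in the denominator, which sums squared probabilities and squared labels. A NumPy sketch under the same assumptions (softmax probabilities in, illustrative epsilon):

```python
import numpy as np

def dice(prediction, ground_truth):
    """Soft Dice loss with squared denominator terms
    (Milletari et al. 2016), NumPy sketch.

    prediction : (N, C) softmax probabilities; ground_truth : (N,) labels.
    """
    n, c = prediction.shape
    one_hot = np.eye(c)[ground_truth]
    eps = 1e-6  # assumed smoothing to avoid division by zero
    # denominator uses p^2 and g^2, per the V-Net definition
    dice_per_class = (2.0 * (prediction * one_hot).sum(axis=0) + eps) / \
                     ((prediction ** 2).sum(axis=0) + (one_hot ** 2).sum(axis=0) + eps)
    return 1.0 - dice_per_class.mean()
```

For one-hot predictions the squared and unsquared sums coincide, so dice and dice_nosquare agree at crisp segmentations; they differ on soft probabilities, where squaring shrinks the denominator.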