niftynet.layer.loss_regression module¶
Loss functions for regression

class LossFunction(loss_type='L2Loss', loss_func_params=None, name='loss_function')[source]¶
Bases: niftynet.layer.base_layer.Layer

layer_op(prediction, ground_truth=None, weight_map=None)[source]¶
Compute the loss from prediction and ground_truth; the computed loss map is weighted by weight_map. If prediction is a list of tensors, each element of the list is compared against ground_truth and weighted by weight_map.
Parameters:  prediction – input will be reshaped into (batch_size, N_voxels, num_classes)
 ground_truth – input will be reshaped into (batch_size, N_voxels)
 weight_map – input will be reshaped into (batch_size, N_voxels)
Returns: the computed (weighted) loss
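As a plain-Python sketch of the list-handling behaviour described above (an assumption for illustration only — the real `layer_op` operates on TensorFlow tensors and reshapes them as documented):

```python
# Hypothetical sketch: accept a single prediction or a list of predictions,
# compare each against the ground truth, and average the per-element losses.
def l2(prediction, ground_truth):
    # sum of squared differences, halved (matching l2_loss below)
    return sum((p - g) ** 2 for p, g in zip(prediction, ground_truth)) / 2.0

def layer_op(prediction, ground_truth, loss_fn=l2):
    if prediction and isinstance(prediction[0], (list, tuple)):
        # list of predictions: average the loss over the list elements
        losses = [loss_fn(p, ground_truth) for p in prediction]
        return sum(losses) / len(losses)
    return loss_fn(prediction, ground_truth)
```

The voxel-wise weighting by `weight_map` is omitted here; the per-loss sketches below show one plausible weighting scheme.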


l1_loss(prediction, ground_truth, weight_map=None)[source]¶
Parameters:  prediction – the current prediction of the ground truth.
 ground_truth – the measurement you are approximating with regression.
 weight_map – a weight map for the cost function.
Returns: mean of the l1 loss across all voxels.
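A minimal plain-Python sketch of this loss; the weighted-mean form is an assumption about how `weight_map` is applied (the real code uses TensorFlow reductions):

```python
def l1_loss(prediction, ground_truth, weight_map=None):
    # mean absolute difference across voxels, optionally weighted per voxel
    residuals = [abs(p - g) for p, g in zip(prediction, ground_truth)]
    if weight_map is not None:
        # weighted mean: sum(w_i * |r_i|) / sum(w_i)
        return sum(r * w for r, w in zip(residuals, weight_map)) / sum(weight_map)
    return sum(residuals) / len(residuals)
```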

l2_loss(prediction, ground_truth, weight_map=None)[source]¶
Parameters:  prediction – the current prediction of the ground truth.
 ground_truth – the measurement you are approximating with regression.
 weight_map – a weight map for the cost function.
Returns: sum(differences squared) / 2. Note: no square root.
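A sketch of the return value as documented — half the sum of squared differences, with no mean and no square root (the per-voxel weighting is an assumption):

```python
def l2_loss(prediction, ground_truth, weight_map=None):
    # half the (optionally weighted) sum of squared differences
    squares = [(p - g) ** 2 for p, g in zip(prediction, ground_truth)]
    if weight_map is not None:
        squares = [s * w for s, w in zip(squares, weight_map)]
    return sum(squares) / 2.0
```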

rmse_loss(prediction, ground_truth, weight_map=None)[source]¶
Parameters:  prediction – the current prediction of the ground truth.
 ground_truth – the measurement you are approximating with regression.
 weight_map – a weight map for the cost function.
Returns: sqrt(mean(differences squared))
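A sketch of `sqrt(mean(differences squared))`; the weighted-mean form under `weight_map` is an assumption:

```python
import math

def rmse_loss(prediction, ground_truth, weight_map=None):
    # square root of the (optionally weighted) mean squared difference
    squares = [(p - g) ** 2 for p, g in zip(prediction, ground_truth)]
    if weight_map is not None:
        return math.sqrt(sum(s * w for s, w in zip(squares, weight_map))
                         / sum(weight_map))
    return math.sqrt(sum(squares) / len(squares))
```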

mae_loss(prediction, ground_truth, weight_map=None)[source]¶
Parameters:  prediction – the current prediction of the ground truth.
 ground_truth – the measurement you are approximating with regression.
 weight_map – a weight map for the cost function.
Returns: mean(abs(ground_truth - prediction))

huber_loss(prediction, ground_truth, delta=1.0, weight_map=None)[source]¶
The Huber loss is a smooth piecewise loss function that is quadratic for |x| <= delta, and linear for |x| > delta. See https://en.wikipedia.org/wiki/Huber_loss .
Parameters:  prediction – the current prediction of the ground truth.
 ground_truth – the measurement you are approximating with regression.
 delta – the point at which the quadratic-to-linear transition happens.
 weight_map – a weight map for the cost function.
Returns:
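The piecewise definition above, applied to a single residual x (the standard Huber form from the linked reference; how NiftyNet reduces and weights the per-voxel values is not shown here):

```python
def huber(x, delta=1.0):
    # quadratic for |x| <= delta, linear beyond, joined smoothly at |x| = delta
    ax = abs(x)
    if ax <= delta:
        return 0.5 * x * x
    return delta * (ax - 0.5 * delta)
```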

smooth_l1_loss(prediction, ground_truth, weight_map=None, value_thresh=0.5)[source]¶
Similarly to the Huber loss, the residuals are squared below a threshold value. In addition, they are square-rooted above the inverse of this threshold.
Parameters:  prediction – the current prediction of the ground truth.
 ground_truth – the measurement you are approximating with regression.
 weight_map – a weight map for the cost function.
Returns: mean of the smooth l1 loss across all voxels.
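For comparison, the classic smooth-L1 form (as used in Fast R-CNN) applied to a single residual — note this is not the NiftyNet variant, which as described above additionally square-roots very large residuals relative to value_thresh:

```python
def smooth_l1(x):
    # classic smooth L1: quadratic for |x| < 1, linear (offset by 0.5) beyond
    ax = abs(x)
    if ax < 1.0:
        return 0.5 * x * x
    return ax - 0.5
```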

cosine_loss(prediction, ground_truth, weight_map=None, to_complete=True)[source]¶
Cosine loss between the prediction and ground_truth vectors. The predicted and target vectors should be unit vectors.
Parameters:  prediction –
 ground_truth –
 weight_map –
 to_complete – if the unit vector is to be completed
Returns:
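A minimal sketch of a cosine loss between two vectors, assuming (as the docstring requires) that both are already unit vectors, so the dot product equals the cosine of the angle between them. The `1 - cos` form and the handling of `weight_map`/`to_complete` are assumptions, not the confirmed NiftyNet behaviour:

```python
def cosine_loss(prediction, ground_truth):
    # for unit vectors, dot(p, g) == cos(angle between p and g);
    # loss is 0 for identical directions, 1 for orthogonal, 2 for opposite
    dot = sum(p * g for p, g in zip(prediction, ground_truth))
    return 1.0 - dot
```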