niftynet.evaluation.segmentation_evaluations module

This module defines built-in evaluation functions for segmentation applications.

Segmentations can be evaluated at several scales:

- ‘foreground’: metrics computed once for the foreground label
- ‘label’: metrics computed once for each label (including background)
- ‘cc’: metrics computed once for each connected component set
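The scale is typically chosen in the application configuration; a minimal sketch, assuming a NiftyNet-style ini file with an evaluation_units option (option names may differ between versions):

    [SEGMENTATION]
    # one of: foreground, label, cc
    evaluation_units = label

    [EVALUATION]
    evaluations = dice,jaccard,hausdorff_distance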

Connected component sets are formed from one or more connected components in the reference segmentation and one or more connected components in the inferred segmentation. These sets are defined by a cc_func. Currently this is hard-coded to be union_of_seg_for_each_ref_cc, which pairs each connected component in the reference segmentation with the union of all connected components in the inferred segmentation that overlap it. This will eventually become a factory option supporting different cc set definitions.
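For illustration, the blobs arguments that such a cc_func consumes are the (numbered_cc_array, number_of_ccs) tuples returned by scipy.ndimage.label; a minimal usage sketch (the toy arrays are illustrative):

    import numpy as np
    from scipy import ndimage
    from niftynet.evaluation.segmentation_evaluations import \
        union_of_seg_for_each_ref_cc

    seg = np.zeros((10, 10), dtype=bool)  # binary inferred segmentation
    ref = np.zeros((10, 10), dtype=bool)  # binary reference segmentation
    seg[2:4, 2:4] = True
    ref[3:6, 3:6] = True

    blobs_seg = ndimage.label(seg)  # (numbered_cc_array, number_of_ccs)
    blobs_ref = ndimage.label(ref)
    cc_sets = union_of_seg_for_each_ref_cc(blobs_seg, blobs_ref)
    # e.g. {1: ([1], [1])}: reference cc 1 overlaps inferred cc 1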

Overlap and distance measures can be computed at each of these levels by deriving from PerComponentEvaluation, which handles the logic of identifying which comparisons need to be done for each scale.

Overlap and distance measures are computed by two convenience functions (compute_many_overlap_metrics and compute_many_distance_metrics) and wrapped by Evaluation classes.

class PerComponentEvaluation(reader, app_param, eval_param)[source]

Bases: niftynet.evaluation.base_evaluations.CachedSubanalysisEvaluation

This class represents evaluations performed on binary segmentation components computed per label or per connected component. It encodes the generation of evaluation tasks. Derived classes should define the metric_name constant and the function metric_from_binarized().

subanalyses(subject_id, data)[source]
layer_op(subject_id, data, task)[source]
metric_dict_from_binarized(seg, ref)[source]

Computes a metric from a binarized mask

:param seg: numpy array with binary mask from inferred segmentation
:param ref: numpy array with binary mask from reference segmentation
:return: a dictionary of metric_name:metric_value
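A hypothetical subclass sketch (the com_offset class and its metric are illustrative, not part of NiftyNet):

    import numpy as np
    from niftynet.evaluation.segmentation_evaluations import \
        PerComponentEvaluation

    class com_offset(PerComponentEvaluation):
        # distance between the centres of mass of the two binary masks
        def metric_dict_from_binarized(self, seg, ref):
            seg_com = np.argwhere(seg).mean(axis=0)
            ref_com = np.argwhere(ref).mean(axis=0)
            return {'com_offset': float(np.linalg.norm(seg_com - ref_com))}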

class PerComponentScalarEvaluation(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.PerComponentEvaluation

This class simplifies the implementation when the metric returns a single scalar that is reported under the same name as the class.

metric_dict_from_binarized(seg, ref)[source]

Wrap the computed metric in a dictionary for the parent class

metric_from_binarized(seg, ref)[source]

Compute scalar metric value

:param seg: numpy array with binary mask from inferred segmentation
:param ref: numpy array with binary mask from reference segmentation
:return: scalar metric value

get_aggregations()[source]
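With this class a new scalar metric only needs metric_from_binarized, since the class name doubles as the metric name; a hypothetical sketch (volume_ratio is illustrative, not part of NiftyNet):

    import numpy as np
    from niftynet.evaluation.segmentation_evaluations import \
        PerComponentScalarEvaluation

    class volume_ratio(PerComponentScalarEvaluation):
        # ratio of inferred to reference component volume
        def metric_from_binarized(self, seg, ref):
            return np.count_nonzero(seg) / float(np.count_nonzero(ref))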
class BuiltinOverlapEvaluation(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.PerComponentScalarEvaluation

Wrapper class to encode many similar overlap metrics that can be computed from a confusion matrix. Metrics computed in compute_many_overlap_metrics can be wrapped by overriding self.metric_name.

metric_from_binarized(seg, ref)[source]

Computes a metric from a binarized mask by computing a confusion matrix and then delegating the metric computation

:param seg: numpy array with binary mask from inferred segmentation
:param ref: numpy array with binary mask from reference segmentation
:return: scalar metric value

metric_from_confusion_matrix(confusion_matrix)[source]

Compute metrics from a 2x2 confusion matrix

:param confusion_matrix: 2x2 numpy array
:return: scalar metric value
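A minimal sketch of how these metrics fall out of the confusion matrix; the [[tn, fp], [fn, tp]] layout used here is an assumption for illustration (check the source for the actual convention):

    import numpy as np

    def confusion_matrix(seg, ref):
        seg, ref = seg.astype(bool), ref.astype(bool)
        tn = np.count_nonzero(~ref & ~seg)
        fp = np.count_nonzero(~ref & seg)
        fn = np.count_nonzero(ref & ~seg)
        tp = np.count_nonzero(ref & seg)
        return np.array([[tn, fp], [fn, tp]])

    # e.g. sensitivity = tp / (tp + fn), dice = 2*tp / (2*tp + fp + fn)
    def dice_from_matrix(M):
        tp, fp, fn = M[1, 1], M[0, 1], M[1, 0]
        return 2.0 * tp / (2.0 * tp + fp + fn)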

class n_pos_ref(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class n_neg_ref(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class n_pos_seg(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class n_neg_seg(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class fp(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class fn(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class tp(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class tn(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class n_intersection(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class n_union(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class specificity(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class sensitivity(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class accuracy(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class false_positive_rate(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class positive_predictive_values(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class negative_predictive_values(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class dice(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
Dice

alias of dice

class jaccard(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
intersection_over_union

alias of jaccard

Jaccard

alias of jaccard

class informedness(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class markedness(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class vol_diff(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.BuiltinOverlapEvaluation

metric_from_confusion_matrix(M)[source]
class average_distance(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.PerComponentScalarEvaluation

metric_from_binarized(seg, ref)[source]
class hausdorff_distance(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.PerComponentScalarEvaluation

metric_from_binarized(seg, ref)[source]
class hausdorff95_distance(*args, **kwargs)[source]

Bases: niftynet.evaluation.segmentation_evaluations.PerComponentScalarEvaluation

metric_from_binarized(seg, ref)[source]
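The three distance evaluations above summarise the same set of symmetric border distances in different ways; an illustrative sketch using scipy's Euclidean distance transform (NiftyNet's own border handling may differ):

    import numpy as np
    from scipy import ndimage

    def surface_distances(seg, ref):
        seg, ref = seg.astype(bool), ref.astype(bool)
        seg_border = seg ^ ndimage.binary_erosion(seg)
        ref_border = ref ^ ndimage.binary_erosion(ref)
        dt_to_seg = ndimage.distance_transform_edt(~seg_border)
        dt_to_ref = ndimage.distance_transform_edt(~ref_border)
        # ref-border -> seg-border distances, and vice versa
        return np.concatenate([dt_to_seg[ref_border], dt_to_ref[seg_border]])

    # d = surface_distances(seg, ref)
    # average_distance     ~ d.mean()
    # hausdorff_distance   ~ d.max()
    # hausdorff95_distance ~ np.percentile(d, 95)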
union_of_seg_for_each_ref_cc(blobs_seg, blobs_ref)[source]

Constructs connected component sets to compute metrics for. Each reference connected component is paired with the union of inferred segmentation connected components with any overlap

:param blobs_seg: tuple (numbered_cc_array, number_of_ccs)
:param blobs_ref: tuple (numbered_cc_array, number_of_ccs)
:return: dictionary {label: (ref_label_list, seg_label_list)}
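To make the return structure concrete, a hedged re-implementation of the pairing logic (a sketch, not the NiftyNet source):

    import numpy as np

    def union_of_seg_for_each_ref_cc_sketch(blobs_seg, blobs_ref):
        seg_labels, _ = blobs_seg
        ref_labels, n_ref = blobs_ref
        cc_sets = {}
        for ref_id in range(1, n_ref + 1):
            overlapping = np.unique(seg_labels[ref_labels == ref_id])
            seg_ids = [int(s) for s in overlapping if s > 0]  # drop background
            cc_sets[ref_id] = ([ref_id], seg_ids)
        return cc_sets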