niftynet.evaluation.base_evaluator module

This module defines base classes for Evaluator classes, which define the logic for iterating through the subjects and the requested metrics needed for evaluation.

class BaseEvaluator(reader, app_param, eval_param)[source]

Bases: object

The base evaluator defines a simple evaluation loop that iterates through subjects and computes each metric in sequence.

Sub-classes should overload default_evaluation_list with application-specific metrics, as sketched below. If a particular ordering of computations per subject is needed, sub-classes can override the evaluate_next method; if a particular ordering of subjects is needed, sub-classes can override the evaluate method.
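
For example, a minimal sub-class might look like the following sketch; the metric names are illustrative placeholders, and the actual EvaluationFactory strings depend on the application:

    from niftynet.evaluation.base_evaluator import BaseEvaluator

    class SegmentationEvaluator(BaseEvaluator):
        def default_evaluation_list(self):
            # EvaluationFactory strings naming the metrics to compute;
            # 'dice' and 'jaccard' are placeholder names for illustration
            return ['dice', 'jaccard']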

evaluate()[source]

This method loops through all subjects and computes the metrics for each subject.

Returns: a dictionary of pandas.DataFrame objects
evaluate_from_generator(generator)[source]

This method loops through all subjects yielded by the generator and computes the metrics for each subject.

Returns: a dictionary of pandas.DataFrame objects
evaluate_next(subject_id, data, interp_orders)[source]

Computes metrics for one subject.

Parameters:
  • subject_id
  • data – data dictionary passed to each evaluation
  • interp_orders – metadata for the data dictionary [currently not used]
Returns: a list of pandas.DataFrame objects

aggregate(dataframes)[source]

Apply all of the aggregations requested by the evaluations.

Parameters: dataframes – a list of pandas.DataFrame objects
Returns: a dictionary of pandas.DataFrame objects after aggregation
default_evaluation_list()[source]
Returns: a list of EvaluationFactory strings defining the evaluations to compute if no evaluations are specified in the configuration

class CachedSubanalysisEvaluator(reader, app_param, eval_param)[source]

Bases: niftynet.evaluation.base_evaluator.BaseEvaluator

This evaluator sequences evaluations in a way that is friendly for caching intermediate computations. Each evaluation defines sub-analyses to run; all metrics within one sub-analysis are run at the same time, and the cache is then cleared (a conceptual sketch follows).
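
The sequencing idea can be sketched in plain Python (a conceptual illustration, not the NiftyNet API): computations are grouped by the sub-analysis they belong to, each group is run with a shared cache, and the cache is discarded between groups.

    from collections import defaultdict

    def run_in_subanalysis_groups(computations):
        # computations: iterable of (subanalysis_key, callable) pairs,
        # where each callable accepts a cache dict of shared intermediates
        groups = defaultdict(list)
        for key, compute in computations:
            groups[key].append(compute)
        results = []
        for batch in groups.values():
            cache = {}  # intermediates shared within one sub-analysis
            results.extend(compute(cache) for compute in batch)
            cache.clear()  # drop cached intermediates before the next group
        return results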

evaluate_next(subject_id, data, interp_orders)[source]

Computes metrics for one subject. Instead of iterating through the metrics in order, this method first identifies the sub-analyses that should be run together (for caching reasons), then iterates through the sub-analyses in sequence, computing the metrics of each sub-analysis together.

Parameters:
  • subject_id
  • data – data dictionary passed to each evaluation
  • interp_orders – metadata for the data dictionary [currently not used]
Returns: a list of pandas.DataFrame objects

class DataFrameAggregator(group_by, func)[source]

Bases: object

This class defines a simple aggregator that operates on groups of entries in a pandas DataFrame.

func should accept a DataFrame and return a list of DataFrames with appropriate indices, as illustrated in the sketch after the constructor parameters below.

__init__(group_by, func)[source]
Parameters:
  • group_by – level at which original metric was computed, e.g. (‘subject_id’, ‘label’)
  • func – function (dataframe => list of dataframes) to aggregate the collected metrics
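
A hedged usage sketch of this contract; the metric table layout assumed here (a MultiIndex over subject_id and label) is for illustration only:

    import pandas as pd
    from niftynet.evaluation.base_evaluator import DataFrameAggregator

    def mean_per_label(df):
        # df: metric table indexed by ('subject_id', 'label');
        # collapse the subject_id level, leaving one mean row per label
        return [df.groupby(level='label').mean()]

    aggregator = DataFrameAggregator(group_by=('subject_id', 'label'),
                                     func=mean_per_label)
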
class ScalarAggregator(key, group_by, new_group_by, func, name)[source]

Bases: niftynet.evaluation.base_evaluator.DataFrameAggregator

This class defines a simple aggregator that groups metrics and applies an aggregating function. Grouping is determined by the set difference between the original group_by term and a subset new_group_by term (see the usage sketch below).

__init__(key, group_by, new_group_by, func, name)[source]
Parameters:
  • key – metric heading name with values to aggregate
  • group_by – level at which original metric was computed, e.g. (‘subject_id’, ‘label’)
  • new_group_by – level at which metric after aggregation is computed, e.g. (‘label’)
  • func – function (iterable=>scalar) to aggregate the collected values, e.g. np.mean
  • name – new heading name for the aggregated metric
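
A hedged usage sketch (the 'dice' and 'mean_dice' headings are illustrative): collapsing a per-subject, per-label metric into a per-label mean drops subject_id from the grouping.

    import numpy as np
    from niftynet.evaluation.base_evaluator import ScalarAggregator

    mean_dice = ScalarAggregator(key='dice',
                                 group_by=('subject_id', 'label'),
                                 new_group_by=('label',),
                                 func=np.mean,
                                 name='mean_dice')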

scalar_wrapper_(pdf)[source]

For each unique value of pdf.loc[:, new_group_by], aggregate the values using self.func.
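
The grouping semantics can be illustrated with plain pandas (a standalone toy example, not the NiftyNet class):

    import numpy as np
    import pandas as pd

    pdf = pd.DataFrame({'subject_id': ['s1', 's1', 's2', 's2'],
                        'label': [1, 2, 1, 2],
                        'dice': [0.90, 0.80, 0.86, 0.78]})
    # dropping subject_id from the grouping leaves one aggregated
    # value per label, computed with func (here np.mean)
    print(pdf.groupby('label')['dice'].apply(np.mean))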