niftynet.application.base_application module

Interface of NiftyNet application

class BaseApplication

    Bases: object

    BaseApplication represents an interface. Each application type_str
    should support the standard training and inference drivers.
    REQUIRED_CONFIG_SECTION = None
    SUPPORTED_PHASES = {'evaluation', 'inference', 'training'}
    is_validation = None
    readers = None
    sampler = None
    net = None
    optimiser = None
    gradient_op = None
    output_decoder = None
    outputs_collector = None
    gradients_collector = None
    total_loss = None
    patience = None
    performance_history = []
    mode = None
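The overall interface pattern can be illustrated with a minimal stand-alone sketch. The class and method names below mirror the ones documented here, but the bodies are toy placeholders and nothing is imported from NiftyNet:

```python
# Minimal stand-alone sketch of the BaseApplication interface pattern.
# Method names mirror the documented interface; bodies are illustrative only.

class BaseApplication(object):
    REQUIRED_CONFIG_SECTION = None
    SUPPORTED_PHASES = {'evaluation', 'inference', 'training'}

    def initialise_dataset_loader(self, data_param=None, task_param=None,
                                  data_partitioner=None):
        raise NotImplementedError

    def initialise_sampler(self):
        raise NotImplementedError

    def initialise_network(self):
        raise NotImplementedError

    def connect_data_and_network(self, outputs_collector=None,
                                 gradients_collector=None):
        raise NotImplementedError


class ToyApplication(BaseApplication):
    # A toy concrete application: each method just records that it ran.
    REQUIRED_CONFIG_SECTION = 'TOY'

    def __init__(self):
        self.calls = []

    def initialise_dataset_loader(self, data_param=None, task_param=None,
                                  data_partitioner=None):
        self.calls.append('loader')     # would set self.readers

    def initialise_sampler(self):
        self.calls.append('sampler')    # would set self.sampler

    def initialise_network(self):
        self.calls.append('network')    # would set self.net

    def connect_data_and_network(self, outputs_collector=None,
                                 gradients_collector=None):
        self.calls.append('connect')    # would add tensors to the graph


app = ToyApplication()
app.initialise_dataset_loader()
app.initialise_sampler()
app.initialise_network()
app.connect_data_and_network()
print(app.calls)  # ['loader', 'sampler', 'network', 'connect']
```

A driver would call these four methods in exactly this order before entering its training or inference loop.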
    initialise_dataset_loader(data_param=None, task_param=None, data_partitioner=None)

        This function initialises self.readers.

        Parameters:
            data_param – input modality specifications
            task_param – contains task keywords for grouping data_param
            data_partitioner – specifies train/valid/infer splitting if needed
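As a rough illustration of the three argument roles only, the field names and the toy partitioner below are invented for this sketch and are not NiftyNet's actual configuration schema:

```python
# Hypothetical illustration of the argument roles; these field names are
# invented for the sketch and are NOT NiftyNet's real configuration schema.

data_param = {                      # one entry per input modality
    'T1':    {'path_to_search': './data/T1'},
    'label': {'path_to_search': './data/labels'},
}

task_param = {                      # task keywords grouping data_param entries
    'image': ('T1',),
    'label': ('label',),
}

def data_partitioner(names, validation_fraction=0.2):
    # toy train/valid split over subject names
    n_valid = int(len(names) * validation_fraction)
    return {'validation': names[:n_valid], 'training': names[n_valid:]}

split = data_partitioner(['s1', 's2', 's3', 's4', 's5'])
print(split['training'])  # ['s2', 's3', 's4', 's5']
```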
    initialise_sampler()

        Samplers take self.reader as input and generate sequences of
        ImageWindow that will be fed to the networks.

        This function sets self.sampler.
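The sampler idea can be sketched with a toy generator that slides fixed-size windows over an image-like sequence. Here the "reader" is just a list and each window is a plain slice, not the NiftyNet ImageWindow class:

```python
# Toy sketch of a window sampler: it takes a "reader" (here just a list)
# and yields fixed-size windows, the way a NiftyNet sampler yields
# ImageWindow instances to feed the network.

def window_sampler(reader, window_size, step=1):
    for start in range(0, len(reader) - window_size + 1, step):
        yield reader[start:start + window_size]

reader = [0, 1, 2, 3, 4, 5]
windows = list(window_sampler(reader, window_size=3, step=2))
print(windows)  # [[0, 1, 2], [2, 3, 4]]
```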
    initialise_network()

        This function creates an instance of the network and sets self.net.

        Returns: None
    connect_data_and_network(outputs_collector=None, gradients_collector=None)

        Adds sampler output tensors and network tensors to the graph.

        Parameters:
            outputs_collector –
            gradients_collector –
    interpret_output(batch_output)

        Implement output interpretations, e.g. saving output windows to a
        hard-drive cache.

        Parameters:
            batch_output – outputs from running the tf graph

        Returns: True indicates the driver should continue the loop;
        False indicates the driver should stop.
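The boolean contract can be sketched with a toy driver loop; the names below are illustrative, not the actual application_driver code:

```python
# Toy driver loop honouring the interpret_output contract:
# True -> keep iterating, False -> stop.

class ToyApp:
    def __init__(self, max_batches):
        self.max_batches = max_batches
        self.seen = 0

    def interpret_output(self, batch_output):
        self.seen += 1
        return self.seen < self.max_batches  # False stops the driver

def run_driver(app, batches):
    for batch in batches:
        batch_output = {'window': batch}   # stands in for tf.session.run(...)
        if not app.interpret_output(batch_output):
            break
    return app.seen

app = ToyApp(max_batches=3)
print(run_driver(app, range(10)))  # 3
```

Returning False is how an inference application tells the driver that all windows have been processed.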
    add_inferred_output_like(data_param, task_param, name)

        This function adds entries to the parameter objects so that the
        evaluation action can automatically read in the output of a previous
        inference run when inference is not explicitly specified.

        This can be used in an application if there is a data section entry
        in the configuration file that matches the inference output. In
        supervised learning, the reference data section would often match
        the inference output and could be used here. Otherwise, a template
        data section could be used.

        Parameters:
            data_param –
            task_param –
            name – name of the input parameter to copy parameters from

        Returns: modified data_param and task_param
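The copy-a-section idea can be sketched with a dict-based analogy. The real function operates on NiftyNet parameter objects; the structure and the './output' path below are illustrative assumptions only:

```python
# Dict-based analogy of add_inferred_output_like: copy an existing data
# section under a new name so the evaluation action can read a previous
# inference run's output. Illustrative only; not the real parameter objects.

def add_inferred_output_like(data_param, task_param, name):
    inferred = dict(data_param[name])          # copy the section called `name`
    inferred['path_to_search'] = './output'    # hypothetical: inference results
    data_param = dict(data_param, inferred=inferred)
    task_param = dict(task_param, inferred=('inferred',))
    return data_param, task_param

data_param = {'label': {'path_to_search': './data/labels'}}
task_param = {'label': ('label',)}
data_param, task_param = add_inferred_output_like(data_param, task_param, 'label')
print(sorted(data_param))  # ['inferred', 'label']
```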
    set_iteration_update(iteration_message)

        At each iteration application_driver calls:

            output = tf.session.run(variables_to_eval, feed_dict=data_dict)

        to evaluate TF graph elements, where variables_to_eval and data_dict
        are retrieved from iteration_message.ops_to_run and
        iteration_message.data_feed_dict (in addition to the variables
        collected by self.output_collector).

        The output of tf.session.run(...) will be stored at
        iteration_message.current_iter_output, and can be accessed from
        engine.handler_network_output.OutputInterpreter.

        Override this function for more complex operations (such as learning
        rate decay) according to iteration_message.current_iter.
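For example, an application might override this hook to decay the learning rate. The sketch below uses a stand-in message class exposing only the documented attributes (current_iter, ops_to_run, data_feed_dict); the decay schedule itself is an arbitrary choice for illustration:

```python
# Sketch of overriding set_iteration_update for learning-rate decay.
# IterationMessage is a stand-in with only the attributes documented above.

class IterationMessage:
    def __init__(self, current_iter):
        self.current_iter = current_iter
        self.ops_to_run = {}
        self.data_feed_dict = {}

class DecayApp:
    def __init__(self, base_lr=0.1, decay_every=100):
        self.base_lr = base_lr
        self.decay_every = decay_every

    def set_iteration_update(self, iteration_message):
        # halve the learning rate every `decay_every` iterations
        n = iteration_message.current_iter // self.decay_every
        lr = self.base_lr * (0.5 ** n)
        # the driver would pass this on via the feed dict
        iteration_message.data_feed_dict['learning_rate'] = lr

app = DecayApp()
msg = IterationMessage(current_iter=250)
app.set_iteration_update(msg)
print(msg.data_feed_dict['learning_rate'])  # 0.025
```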
    add_validation_flag()

        Adds a TF placeholder for switching between train/valid graphs;
        this function sets self.is_validation. self.is_validation can
        then be used by applications.
    action

        A string indicating the action: train, inference or evaluation.
    is_training

        Returns: boolean value indicating if the phase is training

    is_inference

        Returns: boolean value indicating if the phase is inference

    is_evaluation

        Returns: boolean value indicating if the action is evaluation
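These three flags can be sketched as read-only properties derived from the action string. The prefix-match mechanics below are an assumption for the sketch; the real NiftyNet implementation may differ:

```python
# Sketch of is_training / is_inference / is_evaluation as properties over
# the `action` string. The prefix matching is an assumed mechanic, not
# necessarily how NiftyNet implements it.

class App:
    def __init__(self, action):
        self.action = action  # 'train', 'inference' or 'evaluation'

    @property
    def is_training(self):
        return self.action.startswith('train')

    @property
    def is_inference(self):
        return self.action.startswith('infer')

    @property
    def is_evaluation(self):
        return self.action.startswith('eval')

app = App('inference')
print(app.is_training, app.is_inference, app.is_evaluation)  # False True False
```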