niftynet.application.base_application module

Interface of NiftyNet application

class SingletonApplication[source]

Bases: type

class BaseApplication[source]

Bases: object

BaseApplication represents an interface.

Each application type should be able to use the standard training and inference drivers.
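
For orientation, a minimal sketch of a concrete subclass; the class name ToyApplication and the 'TOY' section name are illustrative placeholders, not part of NiftyNet:

    from niftynet.application.base_application import BaseApplication

    class ToyApplication(BaseApplication):
        # configuration section this application expects to find
        REQUIRED_CONFIG_SECTION = 'TOY'

        def initialise_dataset_loader(self, data_param=None,
                                      task_param=None,
                                      data_partitioner=None):
            self.readers = []     # build image readers here

        def initialise_sampler(self):
            self.sampler = []     # build window samplers here

        def initialise_network(self):
            self.net = None       # create the network instance here

        def connect_data_and_network(self, outputs_collector=None,
                                     gradients_collector=None):
            pass                  # wire samplers and network into the graph

        def interpret_output(self, batch_output):
            return True           # True keeps the driver loop running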

REQUIRED_CONFIG_SECTION = None
is_training = True
is_validation = None
readers = None
sampler = None
net = None
optimiser = None
gradient_op = None
output_decoder = None
check_initialisations()[source]
initialise_dataset_loader(data_param=None, task_param=None, data_partitioner=None)[source]

This function initialises self.readers.

Parameters:
  • data_param – input modality specifications
  • task_param – contains task keywords for grouping data_param
  • data_partitioner – specifies train/valid/infer splitting if needed
Returns:
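
Continuing the ToyApplication sketch above, an illustrative override; how readers are actually built from data_param, task_param and the partitioner's splits is application-specific, so plain dicts stand in for readers here:

    def initialise_dataset_loader(self, data_param=None,
                                  task_param=None,
                                  data_partitioner=None):
        # a real override would construct image readers (for example
        # niftynet.io.image_reader.ImageReader) from data_param/task_param,
        # typically one per phase of the train/valid/infer split given by
        # data_partitioner
        phases = ('training', 'validation', 'inference')
        self.readers = [{'phase': phase} for phase in phases]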

initialise_sampler()[source]

Samplers take self.readers as input and generate sequences of ImageWindow that will be fed to the network.

This function sets self.sampler.
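
An illustrative override; make_window_sampler is a hypothetical factory standing in for one of the niftynet.engine.sampler_* classes:

    def initialise_sampler(self):
        # one window sampler per reader, each yielding sequences of
        # ImageWindow for the network
        self.sampler = [make_window_sampler(reader) for reader in self.readers]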

initialise_network()[source]

This function creates an instance of the network and sets self.net.

Returns: None
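
A sketch of a typical override; the factory import path and the 'toynet' network name are assumptions to be checked against the installed NiftyNet version:

    def initialise_network(self):
        # resolve a network class by its configured name and instantiate it
        from niftynet.engine.application_factory import ApplicationNetFactory
        self.net = ApplicationNetFactory.create('toynet')(num_classes=2)
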
connect_data_and_network(outputs_collector=None, gradients_collector=None)[source]

Adds sampler output tensors and network tensors to the graph.

Parameters:
  • outputs_collector
  • gradients_collector
Returns:
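
An illustrative training/inference wiring, assuming TF 1.x; pop_batch_op and the add_to_collection keyword arguments mirror how bundled applications use the samplers and collectors, but should be treated as assumptions:

    import tensorflow as tf

    def connect_data_and_network(self, outputs_collector=None,
                                 gradients_collector=None):
        # pop an ImageWindow batch from the first sampler and run it
        # through the network
        data_dict = self.sampler[0].pop_batch_op()
        net_out = self.net(data_dict['image'], is_training=self.is_training)

        if self.is_training:
            # a toy loss and optimiser, stand-ins for the application's own
            loss = tf.reduce_mean(tf.square(net_out - data_dict['label']))
            self.optimiser = tf.train.AdamOptimizer(learning_rate=1e-3)
            grads = self.optimiser.compute_gradients(loss)
            # hand gradients to the driver and expose the loss for logging
            gradients_collector.add_to_collection([grads])
            outputs_collector.add_to_collection(
                var=loss, name='loss', collection='CONSOLE')
        else:
            outputs_collector.add_to_collection(
                var=net_out, name='window', collection='NETWORK_OUTPUT')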

interpret_output(batch_output)[source]

Implement output interpretations, e.g., saving output windows to disk or caching them.

Parameters: batch_output – outputs from running the tf graph
Returns: True indicates the driver should continue the loop; False indicates the driver should stop
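
A sketch of an inference-time override; output_decoder and its decode_batch signature are assumptions about how the concrete application writes windows back out:

    def interpret_output(self, batch_output):
        if self.is_training:
            return True   # nothing to decode during training
        # during inference, pass the evaluated window tensors to whatever
        # aggregates and saves them; returning False would stop the driver
        return self.output_decoder.decode_batch(
            batch_output['window'], batch_output['location'])
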
set_network_gradient_op(gradients)[source]

Creates the gradient op via optimiser.apply_gradients; this function sets self.gradient_op.

Override this function for more complex optimisations such as using different optimisers for sub-networks.

Parameters: gradients – processed gradients from the gradients_collector
Returns:
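
The default behaviour amounts to a single apply_gradients call; a minimal sketch, with a note on where per-sub-network handling would go:

    def set_network_gradient_op(self, gradients):
        # 'gradients' comes from the gradients_collector; in the simplest
        # case it is one list of (gradient, variable) pairs for one optimiser
        self.gradient_op = self.optimiser.apply_gradients(gradients)
        # for sub-network-specific optimisation, split 'gradients' by
        # variable scope, apply a different optimiser to each subset, and
        # group the resulting ops (e.g. with tf.group) into self.gradient_op
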
stop()[source]

Stop the sampling threads, if there are any.

Returns:
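
A sketch of a typical override; close_all() is assumed to be the thread-stopping hook on NiftyNet's queue-based samplers:

    def stop(self):
        # ask every sampler to shut down its prefetching threads/queues
        for sampler in self.get_sampler() or []:
            if hasattr(sampler, 'close_all'):
                sampler.close_all()
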
set_iteration_update(iteration_message)[source]

At each iteration application_driver calls:
output = tf.session.run(variables_to_eval, feed_dict=data_dict)

to evaluate TF graph elements, where variables_to_eval and data_dict are retrieved from application_iteration.IterationMessage.ops_to_run and application_iteration.IterationMessage.data_feed_dict (in addition to the variables collected by the outputs_collector; implemented in application_driver.run_vars).

This function, called by the driver before tf.session.run, provides an interface for accessing variables_to_eval and data_dict at each iteration.

Override this function for more complex operations according to application_iteration.IterationMessage.current_iter.
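
An illustrative override; extra_summary_op and learning_rate are hypothetical attributes created by the application, and ops_to_run is assumed to behave as a dict of named ops:

    def set_iteration_update(self, iteration_message):
        # run an extra op every 100 iterations
        if iteration_message.current_iter % 100 == 0:
            iteration_message.ops_to_run['extra'] = self.extra_summary_op
        # feed a decayed value into a learning-rate placeholder
        decayed_lr = 1e-3 * 0.95 ** (iteration_message.current_iter // 1000)
        iteration_message.data_feed_dict[self.learning_rate] = decayed_lr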

get_sampler()[source]

Get the samplers of the application.

Returns: niftynet.engine.sampler_* instances
add_validation_flag()[source]

Adds a TF placeholder for switching between train/valid graphs; this function sets self.is_validation, which can then be used by applications.

Returns:
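
A minimal sketch of the idea in TF 1.x, rather than the exact NiftyNet implementation:

    import tensorflow as tf

    def add_validation_flag(self):
        # boolean scalar placeholder, defaulting to False (training) unless
        # the driver feeds True when evaluating the validation graph
        self.is_validation = tf.placeholder_with_default(
            False, shape=[], name='is_validation')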