This module loads images from csv files and outputs numpy arrays.


Choose a suitable TF dtype based on the dtype of the input numpy array.
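The idea can be sketched without TensorFlow, using a hypothetical helper that picks a TF dtype name from the numpy array's dtype kind (the actual NiftyNet mapping may differ):

```python
import numpy as np

def infer_tf_dtype_name(array):
    """Sketch only: map a numpy dtype kind to a TensorFlow dtype name."""
    kind = array.dtype.kind
    if kind in ('u', 'i'):      # unsigned/signed integers
        return 'tf.int32'
    if kind == 'f':             # floating point
        return 'tf.float32'
    raise ValueError('unsupported dtype: %s' % array.dtype)

print(infer_tf_dtype_name(np.zeros((2, 2), dtype=np.int16)))    # tf.int32
print(infer_tf_dtype_name(np.zeros((2, 2), dtype=np.float64)))  # tf.float32
```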

class ImageReader(names=None)[source]

Bases: niftynet.layer.base_layer.Layer

For a concrete example:

_input_sources define multiple modality mappings, e.g.,
_input_sources {'image': ('T1', 'T2'), 'label': ('manual_map',)}


‘image’ consists of two components, formed by concatenating the ‘T1’ and ‘T2’ input source images; ‘label’ consists of one component, loaded from ‘manual_map’.

  • self._names – a tuple of the output names of this reader, e.g. ('image', 'label')
  • self._shapes – the shapes after combining input sources, e.g. {'image': (192, 160, 192, 1, 2), 'label': (192, 160, 192, 1, 1)}
  • self._dtypes – a dictionary of TensorFlow dtypes, e.g. {'image': tf.float32, 'label': tf.float32}
  • self.output_list

    a list of dictionaries, with each item:

    {'image': <image object>,
     'label': <image object>}
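The attribute values above can be reproduced with a small numpy sketch (the volume shape is illustrative, taken from the example above; the trailing axes are time and modality):

```python
import numpy as np

# Two modality volumes with shape (x, y, z, time); values are illustrative.
t1 = np.zeros((192, 160, 192, 1), dtype=np.float32)
t2 = np.zeros((192, 160, 192, 1), dtype=np.float32)

# 'image' concatenates T1 and T2 along a trailing modality axis,
# giving the (192, 160, 192, 1, 2) shape listed above.
image = np.stack([t1, t2], axis=-1)
print(image.shape)  # (192, 160, 192, 1, 2)
```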

initialise(data_param, task_param=None, file_list=None)[source]

task_param specifies how to combine user input modalities, e.g., for multimodal segmentation ‘image’ corresponds to multiple modality sections, while ‘label’ corresponds to one modality section.

This function converts elements of file_list into dictionaries of image objects and saves them to self.output_list. e.g.:

data_param = {'T1': {'path_to_search': 'path/to/t1'},
              'T2': {'path_to_search': 'path/to/t2'}}

loads pairs of T1 and T2 images (grouped by matching filenames). The reader’s output is of the form {'T1': np.array, 'T2': np.array}. If the (optional) task_param is specified:

task_param = {'image': ('T1', 'T2')}

the reader loads pairs of T1 and T2 and returns the concatenated image (both modalities should have the same spatial dimensions). The reader’s output is in the form of {'image': np.array}.
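A minimal sketch of this grouping step (plain numpy, not the NiftyNet implementation), assuming all sections in a group share spatial dimensions as the reader requires:

```python
import numpy as np

def group_sections(section_arrays, task_param):
    """Sketch: concatenate per-section arrays according to task_param."""
    return {name: np.concatenate([section_arrays[s][..., np.newaxis]
                                  for s in sections], axis=-1)
            for name, sections in task_param.items()}

sections = {'T1': np.zeros((8, 8, 8)), 'T2': np.ones((8, 8, 8))}
output = group_sections(sections, {'image': ('T1', 'T2')})
print(output['image'].shape)  # (8, 8, 8, 2)
```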

  • data_param – dictionary of input sections
  • task_param – dictionary of grouping
  • file_list – a dataframe generated by ImagePartitioner for cross validation, so that the reader only loads files in training/inference phases.

Returns: the initialised reader instance


Some preprocessors require an initial step to initialise data-dependent internal parameters.

This function finds these preprocessors and runs the initialisations.


Add a niftynet.layer or a list of layers as preprocessing steps.

layer_op(idx=None, shuffle=True)[source]

This layer returns dictionaries:

keys: self.output_fields
values: image volume arrays
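The idx/shuffle behaviour can be sketched as: when no index is given and shuffle is enabled, a random subject index is drawn; otherwise the given index is used (wrapped to the number of subjects). This is an illustrative sketch of the index selection only, not NiftyNet's actual implementation, which also loads the image data:

```python
import random

def pick_index(num_subjects, idx=None, shuffle=True):
    """Hypothetical sketch of the reader's subject-index selection."""
    if idx is None and shuffle:
        return random.randint(0, num_subjects - 1)  # random sampling
    if idx is None:
        idx = 0                                     # sequential default
    return idx % num_subjects                       # wrap out-of-range indices

random.seed(0)
print(pick_index(10))                          # some index in [0, 9]
print(pick_index(10, idx=12, shuffle=False))   # 2
```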

Number of spatial dimensions of the images.

Returns: integers of spatial rank

Image shapes before any preprocessing.

Returns: tuple of integers as image shape


For fast access, the spatial dimensions are not accurate:

  1. only the first image in the list is read
  2. effects of random augmentation layers are not considered

but the time and modality dimensions should be correct.

Infer input data dtypes in TF (using the first image in the file list).


Returns the mapping of input keywords and input sections, e.g., input_sources:

{'image': ('T1', 'T2'),
 'label': ('manual_map',)}

maps the task parameter keywords image and label to the section names T1, T2, and manual_map respectively.


The keys of the self.input_sources dictionary.


Number of subjects in the reader.


Given an integer id, returns the subject id.


Given a subject id, return the file_list index.

  • subject_id – a string with the subject id

Returns: an int with the file list index
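A pure-Python sketch of this lookup, assuming the file list stores one subject id per row (the helper name and data layout are illustrative, not NiftyNet's):

```python
def get_image_index(file_list_ids, subject_id):
    """Sketch: return the file-list row index holding subject_id.

    file_list_ids: list of subject id strings, one per file-list row.
    """
    for index, sid in enumerate(file_list_ids):
        if sid == subject_id:
            return index
    raise ValueError('subject %s not found' % subject_id)

print(get_image_index(['sub-01', 'sub-02', 'sub-03'], 'sub-02'))  # 1
```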


Given an integer id, returns the corresponding row of the file list.

Returns: a dictionary of the row


Validate the user input input_data_param; raise an error if it is invalid.

Returns: input data specifications as a nested dictionary
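The validation can be sketched as: each section name must map to a dictionary of properties, and the result is returned as a nested dictionary. This is illustrative logic only, not NiftyNet's actual checks:

```python
def validate_data_param(input_data_param):
    """Sketch: validate section specs and return a nested dictionary."""
    if not isinstance(input_data_param, dict):
        raise ValueError('input_data_param should be a dictionary')
    validated = {}
    for section, props in input_data_param.items():
        if not isinstance(props, dict):
            raise ValueError(
                'section %r should map to a dictionary' % section)
        validated[section] = dict(props)  # copy into the nested result
    return validated

spec = validate_data_param({'T1': {'path_to_search': 'path/to/t1'}})
print(spec['T1']['path_to_search'])  # path/to/t1
```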