niftynet.io.image_reader module

This module loads images from CSV files and outputs numpy arrays.

infer_tf_dtypes(image_array)[source]

Chooses a suitable TF dtype based on the dtype of the input numpy array.
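
A minimal sketch of the underlying idea only, not the library code itself: a numpy dtype has a corresponding TensorFlow dtype, so a float32 volume ends up as tf.float32 downstream. Note that the real infer_tf_dtypes may operate on NiftyNet image objects rather than raw arrays.

import numpy as np
import tensorflow as tf

# Illustration of the numpy-to-TF dtype correspondence (assumption: the
# library applies a mapping of this kind when choosing output dtypes).
volume = np.zeros((192, 160, 192, 1, 2), dtype=np.float32)
print(tf.as_dtype(volume.dtype))  # <dtype: 'float32'>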

class ImageReader(names)[source]

Bases: niftynet.layer.base_layer.Layer

For a concrete example:

_input_sources defines multiple modality mappings, e.g.,
_input_sources = {'image': ('T1', 'T2'), 'label': ('manual_map',)}

means:

'image' consists of two components, formed by concatenating the 'T1' and 'T2' input source images; 'label' consists of one component, loaded from 'manual_map'.

Parameters:
  • self._names – a tuple of the output names of this reader, e.g., ('image', 'label')
  • self._shapes – the shapes after combining input sources {'image': (192, 160, 192, 1, 2), 'label': (192, 160, 192, 1, 1)}
  • self._dtypes – the dictionary of TensorFlow dtypes {'image': tf.float32, 'label': tf.float32}
  • self.output_list

    a list of dictionaries, with each item:

    {'image': <niftynet.io.image_type.SpatialImage4D object>,
     'label': <niftynet.io.image_type.SpatialImage3D object>}
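
A short constructor sketch tying these attributes together; the keyword argument name and the concrete values are assumptions based on the examples above:

from niftynet.io.image_reader import ImageReader

# A reader producing two outputs; the section names 'T1', 'T2' and
# 'manual_map' are placeholders from the example above.
reader = ImageReader(names=('image', 'label'))

# After reader.initialise(...) (see below) the attributes are populated:
#   reader.shapes      -> {'image': (192, 160, 192, 1, 2),
#                          'label': (192, 160, 192, 1, 1)}
#   reader.tf_dtypes   -> {'image': tf.float32, 'label': tf.float32}
#   reader.output_list[0]['image'] -> a SpatialImage4D object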

initialise(data_param, task_param, file_list)[source]

task_param specifies how to combine user input modalities, e.g., for multimodal segmentation, 'image' corresponds to multiple modality sections and 'label' corresponds to one modality section.

This function converts elements of file_list into dictionaries of image objects, and saves them to self.output_list.
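
Continuing the sketch above, a hedged illustration of the call pattern; the exact structure of data_param, task_param and file_list depends on NiftyNet's configuration system, so the field names below are placeholders:

# Each data_param entry describes one input section on disk; task_param
# groups sections into reader outputs.  Field names are assumptions.
data_param = {'T1': {'path_to_search': '/data/T1'},
              'T2': {'path_to_search': '/data/T2'},
              'manual_map': {'path_to_search': '/data/labels'}}
task_param = {'image': ('T1', 'T2'), 'label': ('manual_map',)}

# file_list is assumed to be the table of matched file names prepared by
# the calling application; its construction is outside this module.
reader.initialise(data_param, task_param, file_list)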

prepare_preprocessors()[source]

Some preprocessors require an initial step to initialise data-dependent internal parameters.

This function finds these preprocessors and runs their initialisations.

add_preprocessing_layers(layers)[source]

Adds a niftynet.layer or a list of layers as preprocessing steps.
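
For example, continuing the sketch above; the layer choice and its constructor arguments are illustrative assumptions, and any niftynet.layer that maps a data dictionary to a data dictionary could be used instead:

from niftynet.layer.pad import PadLayer

# Pad every volume by 8 voxels on each spatial border before sampling
# (illustrative preprocessing step).
reader.add_preprocessing_layers(
    [PadLayer(image_name=('image', 'label'), border=(8, 8, 8))])
# Data-dependent preprocessors are initialised via prepare_preprocessors(),
# which may already be triggered when the layers are added.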

layer_op(idx=None, shuffle=True)[source]

This layer returns dictionaries:

keys: self.output_fields
values: image volume arrays
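
A hedged usage sketch; the assumption here is that calling the reader also returns the record index and per-field interpolation orders alongside the data dictionary, which may differ between versions:

# Draw one record at random (shuffle=True); pass idx to select a record.
image_id, data_dict, interp_orders = reader(idx=None, shuffle=True)
print(data_dict['image'].shape)   # e.g. (192, 160, 192, 1, 2)
print(data_dict['label'].shape)   # e.g. (192, 160, 192, 1, 1)
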
shapes

Image shapes before any preprocessing.

Returns: tuple of integers as image shape

Caution

To allow fast access, the spatial dimensions are not guaranteed to be accurate:

  1. they are read from the first image in the list only
  2. they do not account for the effects of random augmentation layers

but the time and modality dimensions should be correct.
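
For example, following the self._shapes values given above (assumed, since the shapes are estimated from the first image only):

print(reader.shapes)
# {'image': (192, 160, 192, 1, 2), 'label': (192, 160, 192, 1, 1)}
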
tf_dtypes

Infer input data dtypes in TF (using the first image in the file list).
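
For example, continuing the sketch above:

# Inferred from the first image; typically used to declare the dtypes of
# the reader's TensorFlow outputs.
print(reader.tf_dtypes)   # e.g. {'image': tf.float32, 'label': tf.float32}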

input_sources

Returns the mapping of input keywords to input sections, e.g., input_sources:

{'image': ('T1', 'T2'),
 'label': ('manual_map',)}

maps the task parameter keywords 'image' and 'label' to the section names 'T1', 'T2', and 'manual_map' respectively.

names

Returns: the keys of the self.input_sources dictionary

get_subject_id(image_index)[source]

Given an integer index, returns the corresponding subject id.
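
A small hedged example; the returned value depends entirely on the file list the reader was initialised with, so the subject id shown is a placeholder:

print(reader.get_subject_id(0))   # e.g. 'subject_001' (placeholder)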