niftynet.io.image_reader module
This module loads images from csv files and outputs numpy arrays.
infer_tf_dtypes(image_array)
Choose a suitable tensorflow dtype based on the dtype of the input numpy array.
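As a rough illustration of this kind of dtype inference (the exact mapping below is an assumption for the sketch, not NiftyNet's actual rule), floating-point arrays can be mapped to a 32-bit float type and integer arrays to a 32-bit integer type:

```python
import numpy as np

# Hedged sketch (not NiftyNet's actual mapping): pick a tensorflow-style
# dtype name from a numpy array's dtype -- floats to float32, integers
# to int32, anything else passed through unchanged
def infer_dtype_name(image_array):
    if np.issubdtype(image_array.dtype, np.floating):
        return 'float32'
    if np.issubdtype(image_array.dtype, np.integer):
        return 'int32'
    return str(image_array.dtype)

print(infer_dtype_name(np.zeros(3, dtype=np.float64)))  # float32
print(infer_dtype_name(np.zeros(3, dtype=np.int64)))    # int32
```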
class ImageReader(names=None)
Bases: niftynet.layer.base_layer.Layer

For a concrete example, _input_sources defines multiple modality mappings, e.g.:

_input_sources = {'image': ('T1', 'T2'), 'label': ('manual_map',)}

means: 'image' consists of two components, formed by concatenating the 'T1' and 'T2' input source images; 'label' consists of one component, loaded from 'manual_map'.
Parameters:
- self._names – a tuple of the output names of this reader, e.g.:
  ('image', 'label')
- self._shapes – the shapes after combining input sources, e.g.:
  {'image': (192, 160, 192, 1, 2), 'label': (192, 160, 192, 1, 1)}
- self._dtypes – the dictionary of tensorflow dtypes, e.g.:
  {'image': tf.float32, 'label': tf.float32}
- self.output_list – a list of dictionaries, with each item:
  {'image': <niftynet.io.image_type.SpatialImage4D object>,
   'label': <niftynet.io.image_type.SpatialImage3D object>}
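The combined 'image' shape above can be reproduced with a small numpy sketch (illustrative only, not NiftyNet code): two single-modality volumes in (x, y, z, time, modality) layout are concatenated along the last axis.

```python
import numpy as np

# Illustrative sketch, not NiftyNet code: two single-modality volumes in
# (x, y, z, time, modality) layout, as in the shapes shown above
t1 = np.zeros((192, 160, 192, 1, 1), dtype=np.float32)
t2 = np.zeros((192, 160, 192, 1, 1), dtype=np.float32)

# Concatenating along the last (modality) axis forms the combined
# 'image' output described by self._shapes
image = np.concatenate([t1, t2], axis=-1)
print(image.shape)  # (192, 160, 192, 1, 2)
```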
initialise(data_param, task_param=None, file_list=None)
task_param specifies how to combine user input modalities; e.g., for multimodal segmentation, 'image' corresponds to multiple modality sections and 'label' corresponds to one modality section.

This function converts elements of file_list into dictionaries of image objects, and saves them to self.output_list. e.g.:

data_param = {'T1': {'path_to_search': 'path/to/t1'},
              'T2': {'path_to_search': 'path/to/t2'}}

loads pairs of T1 and T2 images (grouped by matching the filenames). The reader's output is in the form of {'T1': np.array, 'T2': np.array}. If the (optional) task_param is specified:

task_param = {'image': ('T1', 'T2')}

the reader loads pairs of T1 and T2 and returns the concatenated image (both modalities should have the same spatial dimensions). The reader's output is in the form of {'image': np.array}.

Parameters:
- data_param – dictionary of input sections
- task_param – dictionary of grouping
- file_list – a dataframe generated by ImagePartitioner for cross validation, so that the reader only loads files in training/inference phases
Returns: the initialised reader instance
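The "grouped by matching the filenames" step can be sketched in plain Python. The filenames and the suffix convention below are made up for illustration; the actual matching logic lives inside NiftyNet's file-search code.

```python
# Hypothetical filenames (made up for illustration) found under the two
# 'path_to_search' folders; the reader pairs files by a shared subject id
t1_files = ['sub01_T1.nii.gz', 'sub02_T1.nii.gz']
t2_files = ['sub01_T2.nii.gz', 'sub02_T2.nii.gz']

def subject_id(filename, section):
    # Strip the section-specific suffix to recover the shared identifier
    return filename.replace('_%s.nii.gz' % section, '')

pairs = {}
for name in t1_files:
    pairs.setdefault(subject_id(name, 'T1'), {})['T1'] = name
for name in t2_files:
    pairs.setdefault(subject_id(name, 'T2'), {})['T2'] = name

print(pairs['sub01'])  # {'T1': 'sub01_T1.nii.gz', 'T2': 'sub01_T2.nii.gz'}
```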
prepare_preprocessors()
Some preprocessors require an initial step to initialise data-dependent internal parameters.

This function finds these preprocessors and runs the initialisations.
add_preprocessing_layers(layers)
Add a niftynet.layer or a list of layers as preprocessing steps.
layer_op(idx=None, shuffle=True)
This layer returns dictionaries; keys: self.output_fields, values: image volume arrays.
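The idx/shuffle semantics can be sketched as follows. This is an assumption drawn from the signature above, not NiftyNet's implementation: an explicit idx is used directly, otherwise a random subject index is drawn when shuffle is True.

```python
import random

def pick_index(idx, num_subjects, shuffle=True, seed=0):
    # Sketch of the idx/shuffle semantics (an assumption, not NiftyNet's
    # implementation): an explicit idx is wrapped into range; otherwise a
    # random subject index is drawn when shuffle is True
    if idx is not None:
        return idx % num_subjects
    if shuffle:
        return random.Random(seed).randrange(num_subjects)
    return 0  # a sequential reader would advance an internal counter here

print(pick_index(7, 5))  # 2
```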
spatial_ranks
Number of spatial dimensions of the images.

Returns: integers of spatial rank
shapes
Image shapes before any preprocessing.

Returns: tuple of integers as image shape

Caution: to allow fast access, the spatial dimensions are not accurate:
- they are only read from the first image in the list
- effects of random augmentation layers are not considered
- but the time and modality dimensions should be correct
tf_dtypes
Infer input data dtypes in TF (using the first image in the file list).
input_sources
Returns the mapping of input keywords and input sections, e.g.:

input_sources = {'image': ('T1', 'T2'), 'label': ('manual_map',)}

maps the task parameter keywords image and label to the section names T1, T2, and manual_map respectively.
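Written out as plain Python, the mapping above also illustrates the names property below: the output names are the dictionary's keys, and flattening the values lists every referenced section.

```python
# The mapping from the example above: keys are the reader's output
# names, values are tuples of section names
input_sources = {'image': ('T1', 'T2'), 'label': ('manual_map',)}

# names corresponds to the keys; flattening the values lists the sections
names = tuple(input_sources)
sections = [s for sources in input_sources.values() for s in sources]

print(names)     # ('image', 'label')
print(sections)  # ['T1', 'T2', 'manual_map']
```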
names
Returns: the keys of the self.input_sources dictionary
num_subjects
Returns: the number of subjects in the reader