niftynet.engine.image_window_dataset module

Creating a tf.data.Dataset instance for the image window sampler.

class ImageWindowDataset(reader=None, window_sizes=None, batch_size=1, windows_per_image=1, queue_length=10, shuffle=True, epoch=-1, smaller_final_batch_mode='pad', seed=None, name='image_dataset')[source]

Bases: niftynet.layer.base_layer.Layer

This class creates a tf.data.Dataset instance from a sampler’s layer_op function or generator.

If from_generator is True, the Dataset.from_generator interface will be used; otherwise the Dataset.map interface will be used:

if the windows are from an image reader,
the total number of windows produced
will be: `epoch x n_subjects x windows_per_image`

if the windows are from a generator,
the total number of windows produced
will be: "iterations from the generator" x num_threads
shapes

Assuming the sampler output (the value of layer_op) has shape:

[windows_per_image, x, y, z, 1, channels]

this property returns a dictionary of sampler output shapes.

tf_shapes

returns a dictionary of sampler output tensor shapes

tf_dtypes

returns a dictionary of sampler output tensorflow dtypes
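
Continuing the sketch above, an illustrative (not guaranteed) look at the three properties for a 64x64x64 single-channel window:

print(sampler.shapes)
# e.g. {'image': (1, 64, 64, 64, 1, 1), 'image_location': (1, 7)}
print(sampler.tf_shapes)
# e.g. {'image': TensorShape([1, 64, 64, 64, 1, 1]), 'image_location': TensorShape([1, 7])}
print(sampler.tf_dtypes)
# e.g. {'image': tf.float32, 'image_location': tf.int32}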

set_num_threads(num_threads)[source]

Set the number of windows to generate in parallel.

layer_op(idx=None)[source]

Generates each image as a window. Override this function to create new image sampling strategies.

This function should either yield or return a dictionary (of multiple windows per image):

return a dictionary:
{
 'image_name': a numpy array [n_samples, h, w, d, chn],
 'image_name_location': [n_samples, 7]
}

where the 7-element location vector encodes the image_id and the starting and ending coordinates of the image window.

Following the same notation, the dictionary can be extended to multiple modalities; the keys will be:

{'image_name_1', 'image_name_1_location',
 'image_name_2', 'image_name_2_location', ...}
Parameters: idx – image_id used to load the image at the i-th row of the input
Returns: an image data dictionary
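
As a rough illustration of overriding layer_op (a sketch under assumptions, not the library's built-in sampler), the hypothetical subclass below returns each whole image as a single window; it assumes self.reader(idx=...) returns (image_id, data_dict, interpolation_orders) and that the reader exposes one source named 'image':

import numpy as np

from niftynet.engine.image_window_dataset import ImageWindowDataset

class WholeImageSampler(ImageWindowDataset):  # hypothetical subclass
    def layer_op(self, idx=None):
        # assumed reader call: subject id, dict of numpy arrays keyed by
        # source name, and interpolation orders
        image_id, data_dict, _ = self.reader(idx=idx)
        image = data_dict['image']           # e.g. shape [x, y, z, time, channels]
        window = image[np.newaxis, ...]      # add the leading n_samples dimension
        # 7-element location rows: [image_id, x0, y0, z0, x1, y1, z1]
        location = self.dummy_coordinates(image_id, image.shape[:3], n_samples=1)
        return {'image': window, 'image_location': location}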
pop_batch_op()[source]

This function is used when connecting a sampler output to a network, e.g.:

data_dict = self.get_sampler()[0].pop_batch_op()
net_output = net_model(data_dict['image'], is_training)

Caution

Note that it squeezes the 6-dim output tensor [batch, x, y, z, time, modality] by removing every dim whose length is one.

Returns: a dictionary of image window tensors.
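
To illustrate the effect of that squeeze (illustrative arithmetic only, not the library's exact code path):

import tensorflow as tf

raw = tf.zeros([1, 64, 64, 64, 1, 1])  # [batch, x, y, z, time, modality] with batch_size=1
squeezed = tf.squeeze(raw)              # drop every length-one dim, as described above
print(squeezed.shape)                   # (64, 64, 64)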
init_dataset()[source]

Make a dataset of window samples from the reader and layer_op. This function sets self.dataset.

Returns:
dataset_preprocessing(dataset)[source]

Batch and shuffle the given dataset.

Parameters: dataset – a tf.data.Dataset instance
Returns: a tf.data.Dataset instance
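
A rough sketch of this kind of preprocessing in plain tf.data terms (buffer size, call order and the prefetch step are assumptions, not the exact NiftyNet implementation):

import tensorflow as tf

def dataset_preprocessing_sketch(dataset, batch_size=2, queue_length=10, shuffle=True):
    # shuffle with a small buffer, then group windows into batches
    if shuffle:
        dataset = dataset.shuffle(buffer_size=queue_length)
    dataset = dataset.batch(batch_size)
    return dataset.prefetch(queue_length)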
run_threads(*_args, **_kwargs)[source]

This function exists for compatibility purposes.

(Deprecated)

Parameters:
  • _args
  • _kwargs
Returns:

close_all()[source]

For compatibility with the queue-based sampler.

classmethod dummy_coordinates(image_id, image_sizes, n_samples)[source]

This function returns a set of image window coordinates whose spatial extents simply span from 0 to image_sizes.

Returns: a numpy array of n_samples spatial coordinates
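
A hedged example of the expected output, following the 7-element location convention [image_id, x0, y0, z0, x1, y1, z1] described under layer_op (the exact dtype and formatting are assumptions):

import numpy as np

from niftynet.engine.image_window_dataset import ImageWindowDataset

coords = ImageWindowDataset.dummy_coordinates(
    image_id=3, image_sizes=(64, 64, 64), n_samples=2)
print(np.asarray(coords))
# expected rows (one per sample):
# [[ 3  0  0  0 64 64 64]
#  [ 3  0  0  0 64 64 64]]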