niftynet.engine.image_window_buffer module

This module defines queues that store training/evaluation images (and labels)

class niftynet.engine.image_window_buffer.InputBatchQueueRunner(capacity, shuffle=True)

Bases: object

This class defines a light wrapper around queue objects for input windows; the associated coordinates describe each window's original location in the source image.

After initialisation, run_threads() can be called with a tf.Session and a tf.train.Coordinator to start generating samples with multiple threads.

The sampling threads can be stopped by calling close_all() externally; all threads quit immediately.
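
The following is a minimal, self-contained sketch of the queue/coordinator pattern that this wrapper is built around, written directly against the TensorFlow 1.x API. It illustrates the mechanism only and is not NiftyNet's implementation; the placeholder, thread function and toy 2x2 windows are invented for the example.

import threading

import numpy as np
import tensorflow as tf

capacity = 8
queue = tf.FIFOQueue(capacity, dtypes=[tf.float32], shapes=[(2, 2)])
window_placeholder = tf.placeholder(tf.float32, shape=(2, 2))
enqueue_op = queue.enqueue(window_placeholder)
dequeue_op = queue.dequeue_many(4)
close_op = queue.close(cancel_pending_enqueues=True)

def sampling_thread(session, coord):
    # keep filling the queue until the coordinator asks threads to stop
    while not coord.should_stop():
        try:
            session.run(enqueue_op,
                        feed_dict={window_placeholder: np.random.rand(2, 2)})
        except tf.errors.CancelledError:
            return

with tf.Session() as session:
    coord = tf.train.Coordinator()
    threads = [threading.Thread(target=sampling_thread, args=(session, coord))
               for _ in range(2)]
    for thread in threads:
        thread.start()
    batch = session.run(dequeue_op)  # blocks until 4 samples are available
    coord.request_stop()             # similar in spirit to close_all()
    session.run(close_op)            # cancel any blocked enqueue
    coord.join(threads, stop_grace_period_secs=5)
    print(batch.shape)               # (4, 2, 2)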

close_all()

This function stops all threads immediately and closes the queue. Further enqueue/dequeue operations raise errors.

pop_batch_op(device_id=0)

This function is used when connecting a sampler output to a network, e.g.:

data_dict = self.get_sampler()[0].pop_batch_op(device_id)
net_output = net_model(data_dict, is_training)

Note that it squeezes the 6-D output tensor [batch, x, y, z, time, modality] by removing all dimensions of length one.

Parameters:	device_id – an integer specifying the GPU device
Returns:	a TensorFlow graph op
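
As an illustration of the squeezing behaviour described above (a sketch only; the shape values are invented, and tf.squeeze here stands in for whatever the wrapper does internally):

import tensorflow as tf

# a 2-D sampling case: [batch=4, x=32, y=32, z=1, time=1, modality=1]
window = tf.zeros([4, 32, 32, 1, 1, 1])
squeezed = tf.squeeze(window)  # all length-one dims removed
print(squeezed.shape)          # (4, 32, 32)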
run_threads(session, coord, num_threads=1)

This function should be called by the application driver, which maintains a session and a coordinator; it starts sampling threads to fill the queue.

Note that the threads will be blocked if there is no dequeue op running, or if the number of available samples is less than the dequeue batch size.

Parameters:
  • session – a TensorFlow session
  • coord – a TensorFlow coordinator
  • num_threads – an integer specifying the number of sampling threads
Returns:
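
A hedged sketch of the driver-side call sequence follows (TensorFlow 1.x API). The sampler object is assumed to be any NiftyNet sampler that inherits from InputBatchQueueRunner and has been constructed elsewhere; it is hypothetical and not defined here.

import tensorflow as tf

# `sampler` is assumed to inherit from InputBatchQueueRunner (hypothetical)
data_dict = sampler.pop_batch_op()      # build the dequeue op before running

with tf.Session() as session:
    coord = tf.train.Coordinator()
    sampler.run_threads(session, coord, num_threads=2)
    try:
        batch = session.run(data_dict)  # blocks until a full batch is queued
    finally:
        sampler.close_all()             # stop all sampling threads immediately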