niftynet.network.highres3dnet_large module¶
class HighRes3DNetLarge(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='HighRes3DNet')[source]¶

Bases: niftynet.network.base_net.BaseNet
Implementation of HighRes3DNet:

Li et al., “On the compactness, efficiency, and representation of 3D convolutional networks: brain parcellation as a pretext task”, IPMI ’17. (This is a larger variant with an additional 3x3x3 convolution.)
### Constraints

- Input image size should be divisible by 8
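The divisibility constraint can be checked (or satisfied by padding) before feeding a volume to the network. The helper below is a minimal stdlib-only sketch and is not part of the NiftyNet API; `pad_to_multiple_of_8` is a hypothetical name.

```python
# Hypothetical helper (not part of NiftyNet): round each spatial
# dimension of an input volume up to the nearest multiple of 8 so
# it satisfies the network's divisibility constraint.
def pad_to_multiple_of_8(shape):
    """Return the padded spatial shape and the per-axis padding amounts."""
    padded, padding = [], []
    for dim in shape:
        extra = (8 - dim % 8) % 8  # 0 when already divisible by 8
        padded.append(dim + extra)
        padding.append(extra)
    return padded, padding

print(pad_to_multiple_of_8([96, 96, 96]))   # already valid: no padding needed
print(pad_to_multiple_of_8([91, 109, 91]))  # odd-sized volume needs padding
```

In practice the padding amounts would be split across each axis (e.g. half before, half after the volume) before cropping the network output back to the original shape.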
__init__(num_classes, w_initializer=None, w_regularizer=None, b_initializer=None, b_regularizer=None, acti_func='relu', name='HighRes3DNet')[source]¶

Parameters:

- num_classes – int, number of output channels
- w_initializer – weight initialisation for network
- w_regularizer – weight regularisation for network
- b_initializer – bias initialisation for network
- b_regularizer – bias regularisation for network
- acti_func – activation function to use
- name – layer name
layer_op(images, is_training=True, layer_id=-1, keep_prob=0.5, **unused_kwargs)[source]¶

Parameters:

- images – tensor to input to the network; its spatial size has to be divisible by 8
- is_training – boolean, True if network is in training mode
- layer_id – int, index of the layer to return as output
- keep_prob – float, probability of keeping a node during drop-out
- unused_kwargs – additional keyword arguments; ignored

Returns: output of the layer indicated by layer_id
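The keep_prob semantics above can be illustrated with standard inverted dropout: each node is kept with probability keep_prob and the survivors are rescaled so the expected activation is unchanged. This is a stdlib-only sketch of the general technique, not NiftyNet's exact implementation.

```python
import random

# Illustrative sketch of keep_prob-style (inverted) dropout;
# not NiftyNet's actual implementation.
def dropout(values, keep_prob, rng):
    """Zero each value with probability 1 - keep_prob; scale survivors
    by 1 / keep_prob so the expected activation is unchanged."""
    return [v / keep_prob if rng.random() < keep_prob else 0.0
            for v in values]

rng = random.Random(0)
out = dropout([1.0] * 1000, keep_prob=0.5, rng=rng)
kept = sum(1 for v in out if v != 0.0)
print(kept)  # roughly half of the 1000 nodes are kept
```

At inference time (is_training=False) dropout is typically disabled, so all nodes are kept and no rescaling is applied.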