dacapo.experiments.architectures
Submodules
- dacapo.experiments.architectures.architecture
- dacapo.experiments.architectures.architecture_config
- dacapo.experiments.architectures.cnnectome_unet
- dacapo.experiments.architectures.cnnectome_unet_config
- dacapo.experiments.architectures.dummy_architecture
- dacapo.experiments.architectures.dummy_architecture_config
Classes
- Architecture: An abstract base class for defining the architecture of a neural network model.
- ArchitectureConfig: A base configuration class used to define the architecture of a neural network model.
- DummyArchitectureConfig: A dummy architecture configuration class used for testing purposes.
- DummyArchitecture: A class representing a dummy architecture layer for a 3D CNN.
- CNNectomeUNetConfig: Configures the CNNectomeUNet, with support for super resolution via upsampling factors.
- CNNectomeUNet: A U-Net architecture for 3D or 4D data that performs only "valid" convolutions.
Package Contents
- class dacapo.experiments.architectures.Architecture(*args, **kwargs)
An abstract base class for defining the architecture of a neural network model. It inherits from PyTorch's Module and from the built-in ABC (Abstract Base Class). Other classes can inherit from it to define their own specific architecture variants. Subclasses must implement several abstract property methods; the class also provides additional methods related to the architecture design.
- input_shape
The spatial input shape for the neural network architecture.
- Type:
Coordinate
- eval_shape_increase
The amount to increase the input shape during prediction.
- Type:
Coordinate
- num_in_channels
The number of input channels required by the architecture.
- Type:
int
- num_out_channels
The number of output channels provided by the architecture.
- Type:
int
- dims()
Returns the number of dimensions of the input shape.
- scale()
Scales the input voxel size as required by the architecture.
Note
The class is abstract; derived classes must implement its abstract methods.
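To make that contract concrete, here is a minimal sketch of a conforming subclass. The class name MyModel and its single convolution are hypothetical choices for illustration; the sketch assumes torch and funlib.geometry.Coordinate are available, as they are elsewhere in this package.

import torch
from funlib.geometry import Coordinate

from dacapo.experiments.architectures import Architecture


class MyModel(Architecture):
    """Hypothetical minimal implementation of the Architecture contract."""

    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv3d(1, 1, kernel_size=3)

    @property
    def input_shape(self) -> Coordinate:
        # Spatial shape only: no batch or channel dimensions.
        return Coordinate((128, 128, 128))

    @property
    def num_in_channels(self) -> int:
        return 1

    @property
    def num_out_channels(self) -> int:
        return 1

    def forward(self, x):
        return self.conv(x)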
- property input_shape: funlib.geometry.Coordinate
- Abstractmethod:
Abstract method to define the spatial input shape for the neural network architecture. The shape should not account for the channels and batch dimensions.
- Returns:
The spatial input shape.
- Return type:
Coordinate
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> input_shape = Coordinate((128, 128, 128))
>>> model = MyModel(input_shape)
Note
The method should be implemented in the derived class.
- property eval_shape_increase: funlib.geometry.Coordinate
Provides information about how much to increase the input shape during prediction.
- Returns:
An instance representing the amount to increase in each dimension of the input shape.
- Return type:
Coordinate
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> eval_shape_increase = Coordinate((0, 0, 0))
>>> model = MyModel(input_shape, eval_shape_increase)
Note
The method is optional and can be overridden in the derived class.
- property num_in_channels: int
- Abstractmethod:
Abstract method to return the number of input channels required by the architecture.
- Returns:
Required number of input channels.
- Return type:
int
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> num_in_channels = 1
>>> model = MyModel(input_shape, num_in_channels)
Note
The method should be implemented in the derived class.
- property num_out_channels: int
- Abstractmethod:
Abstract method to return the number of output channels provided by the architecture.
- Returns:
Number of output channels.
- Return type:
int
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> num_out_channels = 1
>>> model = MyModel(input_shape, num_out_channels)
Note
The method should be implemented in the derived class.
- property dims: int
Returns the number of dimensions of the input shape.
- Returns:
The number of dimensions.
- Return type:
int
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> input_shape = Coordinate((128, 128, 128))
>>> model = MyModel(input_shape)
>>> model.dims
3
Note
The method is optional and can be overridden in the derived class.
- scale(input_voxel_size: funlib.geometry.Coordinate) funlib.geometry.Coordinate
Method to scale the input voxel size as required by the architecture.
- Parameters:
input_voxel_size (Coordinate) – The original size of the input voxel.
- Returns:
The scaled voxel size.
- Return type:
Coordinate
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> input_voxel_size = Coordinate((1, 1, 1))
>>> model = MyModel(input_shape)
>>> model.scale(input_voxel_size)
Coordinate((1, 1, 1))
Note
The method is optional and can be overridden in the derived class.
- class dacapo.experiments.architectures.ArchitectureConfig
A class to represent the base configurations of any architecture. It is used to define the architecture of a neural network model.
- name
A unique name for the architecture.
- Type:
str
- verify()
validates the given architecture.
Note
The class is abstract; derived classes must implement its abstract methods.
- name: str
- verify() Tuple[bool, str]
A method to validate an architecture configuration.
- Returns:
A tuple of a boolean indicating if the architecture is valid and a message.
- Return type:
Tuple[bool, str]
- Raises:
NotImplementedError – If the method is not implemented in the derived class.
Examples
>>> config = ArchitectureConfig("MyModel")
>>> is_valid, message = config.verify()
>>> print(is_valid, message)
Note
The method should be implemented in the derived class.
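A sketch of what a concrete subclass's verify() might look like. The MyModelConfig name and its field are hypothetical, and the @attr.s decorator follows the attrs convention used by dacapo's config classes, which is an assumption here.

from typing import Tuple

import attr

from dacapo.experiments.architectures import ArchitectureConfig


@attr.s
class MyModelConfig(ArchitectureConfig):
    # Hypothetical field, used only to drive the validation below.
    num_in_channels: int = attr.ib()

    def verify(self) -> Tuple[bool, str]:
        # Return a validity flag together with a human-readable reason.
        if self.num_in_channels < 1:
            return False, "num_in_channels must be at least 1"
        return True, "valid configuration"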
- class dacapo.experiments.architectures.DummyArchitectureConfig
A dummy architecture configuration class used for testing purposes.
It extends the base class ArchitectureConfig. This class contains dummy attributes and always reports the configuration as invalid when verified.
- architecture_type
A class attribute assigning the DummyArchitecture class to this configuration.
- num_in_channels
The number of input channels. This is a dummy attribute and has no real functionality or meaning.
- Type:
int
- num_out_channels
The number of output channels. This is also a dummy attribute and has no real functionality or meaning.
- Type:
int
- verify() Tuple[bool, str]
This method is used to check whether this is a valid architecture configuration.
Note
This class is used to represent a DummyArchitectureConfig object in the system.
- architecture_type
- num_in_channels: int
- num_out_channels: int
- verify() Tuple[bool, str]
Verifies the configuration validity.
Since this is a dummy configuration for testing purposes, this method always returns False indicating that the configuration is invalid.
- Returns:
A tuple containing a boolean validity flag and a reason message string.
- Return type:
tuple
Examples
>>> dummy_architecture_config = DummyArchitectureConfig(num_in_channels=1, num_out_channels=1)
>>> dummy_architecture_config.verify()
(False, "This is a DummyArchitectureConfig and is never valid")
Note
This method is used to check whether this is a valid architecture configuration.
- class dacapo.experiments.architectures.DummyArchitecture(architecture_config)
A class used to represent a dummy architecture layer for a 3D CNN.
- channels_in
An integer representing the number of input channels.
- channels_out
An integer representing the number of output channels.
- conv
A 3D convolution object.
- input_shape
A coordinate object representing the shape of the input.
- forward(x)
Performs the forward pass of the network.
- num_in_channels()
Returns the number of input channels for this architecture.
- num_out_channels()
Returns the number of output channels for this architecture.
Note
This class is used to represent a dummy architecture layer for a 3D CNN.
- channels_in
- channels_out
- conv
- property input_shape
Returns the input shape for this architecture.
- Returns:
Input shape of the architecture.
- Return type:
Coordinate
Examples
>>> dummy_architecture.input_shape
Coordinate(x=40, y=20, z=20)
Note
This method is used to return the input shape for this architecture.
- property num_in_channels
Returns the number of input channels for this architecture.
- Returns:
Number of input channels.
- Return type:
int
Examples
>>> dummy_architecture.num_in_channels
1
Note
This method is used to return the number of input channels for this architecture.
- property num_out_channels
Returns the number of output channels for this architecture.
- Returns:
Number of output channels.
- Return type:
int
Examples
>>> dummy_architecture.num_out_channels
1
Note
This method is used to return the number of output channels for this architecture.
- forward(x)
Perform the forward pass of the network.
- Parameters:
x – Input tensor.
- Returns:
Output tensor after the forward pass.
- Return type:
Tensor
Examples
>>> dummy_architecture = DummyArchitecture(architecture_config)
>>> x = torch.randn(1, 1, 40, 20, 20)
>>> dummy_architecture.forward(x)
Note
This method is used to perform the forward pass of the network.
- class dacapo.experiments.architectures.CNNectomeUNetConfig
This class configures the CNNectomeUNet based on https://github.com/saalfeldlab/CNNectome/blob/master/CNNectome/networks/unet_class.py
Includes support for super resolution via the upsampling factors.
- input_shape
The shape of the data passed into the network during training.
- Type:
Coordinate
- fmaps_out
The number of channels produced by your architecture.
- Type:
int
- fmaps_in
The number of channels expected from the raw data.
- Type:
int
- num_fmaps
The number of feature maps in the top level of the UNet.
- Type:
int
- fmap_inc_factor
The multiplication factor for the number of feature maps for each level of the UNet.
- Type:
int
- downsample_factors
The factors to downsample the feature maps along each axis per layer.
- Type:
List[Coordinate]
- kernel_size_down
The size of the convolutional kernels used before downsampling in each layer.
- Type:
Optional[List[Coordinate]]
- kernel_size_up
The size of the convolutional kernels used before upsampling in each layer.
- Type:
Optional[List[Coordinate]]
- _eval_shape_increase
The amount by which to increase the input size when predicting rather than training. It is generally possible to significantly increase the input size, since prediction does not carry the memory costs of gradients, optimizer state, and batching.
- Type:
Optional[Coordinate]
- upsample_factors
The amount by which to upsample the output of the UNet.
- Type:
Optional[List[Coordinate]]
- constant_upsample
If true, upsample by copying voxels instead of using a transposed convolution.
- Type:
bool
- padding
The padding to use in convolution operations.
- Type:
str
- use_attention
Whether to use attention blocks in the UNet. This is supported for 2D and 3D.
- Type:
bool
- architecture_type
The architecture type built from this configuration.
Note
The architecture_type attribute is set to CNNectomeUNet.
- architecture_type
- input_shape: funlib.geometry.Coordinate
- fmaps_out: int
- fmaps_in: int
- num_fmaps: int
- fmap_inc_factor: int
- downsample_factors: List[funlib.geometry.Coordinate]
- constant_upsample: bool
- padding: str
- use_attention: bool
- batch_norm: bool
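For orientation, a configuration might be constructed along these lines. The field values are illustrative only, and the keyword-argument constructor is assumed from the attrs-style convention these config classes follow; choose shapes and factors that satisfy the valid-convolution constraints of your kernels and downsample factors.

from funlib.geometry import Coordinate

from dacapo.experiments.architectures import CNNectomeUNetConfig

# Illustrative values, not a recommended setting.
config = CNNectomeUNetConfig(
    name="example_unet",
    input_shape=Coordinate((92, 92, 92)),
    fmaps_in=1,
    fmaps_out=24,
    num_fmaps=12,
    fmap_inc_factor=2,
    downsample_factors=[
        Coordinate((2, 2, 2)),
        Coordinate((2, 2, 2)),
        Coordinate((2, 2, 2)),
    ],
)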
- class dacapo.experiments.architectures.CNNectomeUNet(architecture_config)
A U-Net architecture for 3D or 4D data. The U-Net expects 3D or 4D tensors shaped like ``(batch=1, channels, [length,] depth, height, width)``.
This U-Net performs only "valid" convolutions, i.e., the sizes of the feature maps decrease after each convolution. It will perform 4D convolutions as long as ``length`` is greater than 1. As soon as ``length`` is 1 due to a valid convolution, the time dimension is dropped and tensors of shape ``(b, c, z, y, x)`` are used (and returned) from there on.
- fmaps_in
The number of input channels.
- fmaps_out
The number of feature maps in the output layer. This is also the number of output channels, stored in the ``channels`` dimension.
- num_fmaps
The number of feature maps in the first layer. This is also the number of output feature maps, stored in the ``channels`` dimension.
- fmap_inc_factor
By how much to multiply the number of feature maps between layers. If layer 0 has ``k`` feature maps, layer ``l`` will have ``k*fmap_inc_factor**l``.
- downsample_factors
List of tuples ``(z, y, x)`` to use to down- and up-sample the feature maps between layers.
- kernel_size_down
List of lists of kernel sizes. The number of sizes in a list determines the number of convolutional layers in the corresponding level of the build on the left side. Kernel sizes can be given as tuples or integers. If not given, each convolutional pass will consist of two 3x3x3 convolutions.
- Type:
optional
- kernel_size_up
List of lists of kernel sizes. The number of sizes in a list determines the number of convolutional layers in the corresponding level of the build on the right side; within one list, sizes are applied going from left to right. Kernel sizes can be given as tuples or integers. If not given, each convolutional pass will consist of two 3x3x3 convolutions.
- Type:
optional
- activation
Which activation to use after a convolution. Accepts the name of any torch activation function (e.g., ``ReLU`` for ``torch.nn.ReLU``).
- fov
Initial field of view in physical units
- Type:
optional
- voxel_size
Size of a voxel in the input data, in physical units
- Type:
optional
- num_heads
Number of decoders. The resulting U-Net has one single encoder path and num_heads decoder paths. This is useful in a multi-task learning context.
- Type:
optional
- constant_upsample
If set to true, perform a constant upsampling instead of a transposed convolution in the upsampling layers.
- Type:
optional
- padding
How to pad convolutions. Either ‘same’ or ‘valid’ (default).
- Type:
optional
- upsample_channel_contraction
When performing the ConvTranspose, whether to reduce the number of channels by the fmap_inc_factor. Can be either a bool or a list of bools, applied independently per layer.
- activation_on_upsample
Whether or not to add an activation after the upsample operation.
- use_attention
Whether or not to use an attention block in the U-Net.
- Methods:
- forward(x):
Forward pass of the U-Net.
- scale(voxel_size):
Scale the voxel size according to the upsampling factors.
- input_shape:
Return the input shape of the U-Net.
- num_in_channels:
Return the number of input channels.
- num_out_channels:
Return the number of output channels.
- eval_shape_increase:
Return the increase in shape due to the U-Net.
- Note:
This class is a wrapper around the ``CNNectomeUNetModule`` class, which is the actual implementation of the U-Net architecture.
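To make the fmap_inc_factor rule above concrete, a quick arithmetic sketch of the feature-map count per level:

num_fmaps = 12
fmap_inc_factor = 2

# Layer l holds num_fmaps * fmap_inc_factor ** l feature maps:
# level 0: 12, level 1: 24, level 2: 48, level 3: 96.
for level in range(4):
    print(level, num_fmaps * fmap_inc_factor**level)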
- fmaps_out
- fmaps_in
- num_fmaps
- fmap_inc_factor
- downsample_factors
- kernel_size_down
- kernel_size_up
- constant_upsample
- padding
- upsample_factors
- use_attention
- batch_norm
- unet
- property eval_shape_increase: funlib.geometry.Coordinate
The increase in shape due to the U-Net.
- Returns:
The increase in shape due to the U-Net.
- Raises:
AttributeError – If the increase in shape is not given.
Examples
>>> unet.eval_shape_increase
(1, 1, 128, 128, 128)
Note
The increase in shape should be given as a tuple ``(batch, channels, [length,] depth, height, width)``.
- module()
Create the U-Net module.
- Returns:
The U-Net module.
- Raises:
AttributeError – If the number of input channels is not given.
AttributeError – If the number of output channels is not given.
AttributeError – If the number of feature maps in the first layer is not given.
AttributeError – If the factor by which the number of feature maps increases between layers is not given.
AttributeError – If the downsample factors are not given.
AttributeError – If the kernel sizes for the down pass are not given.
AttributeError – If the kernel sizes for the up pass are not given.
AttributeError – If the constant upsample flag is not given.
AttributeError – If the padding is not given.
AttributeError – If the upsample factors are not given.
AttributeError – If the activation on upsample flag is not given.
AttributeError – If the use attention flag is not given.
Examples
>>> unet.module()
CNNectomeUNetModule(
    in_channels=1,
    num_fmaps=24,
    num_fmaps_out=1,
    fmap_inc_factor=2,
    kernel_size_down=[[(3, 3, 3), (3, 3, 3)], [(3, 3, 3), (3, 3, 3)], [(3, 3, 3), (3, 3, 3)]],
    kernel_size_up=[[(3, 3, 3), (3, 3, 3)], [(3, 3, 3), (3, 3, 3)], [(3, 3, 3), (3, 3, 3)]],
    downsample_factors=[(2, 2, 2), (2, 2, 2), (2, 2, 2)],
    constant_upsample=False,
    padding='valid',
    activation_on_upsample=True,
    upsample_channel_contraction=[False, True, True],
    use_attention=False
)
Note
The U-Net module is an instance of the ``CNNectomeUNetModule`` class.
- scale(voxel_size)
Scale the voxel size according to the upsampling factors.
- Parameters:
voxel_size (tuple) – The size of a voxel in the input data.
- Returns:
The scaled voxel size.
- Raises:
ValueError – If the voxel size is not given.
Examples
>>> unet.scale((1, 1, 1))
(1, 1, 1)
Note
The voxel size should be given as a tuple ``(z, y, x)``.
- property input_shape: funlib.geometry.Coordinate
Return the input shape of the U-Net.
- Returns:
The input shape of the U-Net.
- Raises:
AttributeError – If the input shape is not given.
Examples
>>> unet.input_shape
(1, 1, 128, 128, 128)
Note
The input shape should be given as a tuple ``(batch, channels, [length,] depth, height, width)``.
- property num_in_channels: int
Return the number of input channels.
- Returns:
The number of input channels.
- Raises:
AttributeError – If the number of input channels is not given.
Examples
>>> unet.num_in_channels
1
Note
The number of input channels should be given as an integer.
- property num_out_channels: int
Return the number of output channels.
- Returns:
The number of output channels.
- Raises:
AttributeError – If the number of output channels is not given.
Examples
>>> unet.num_out_channels
1
Note
The number of output channels should be given as an integer.
- forward(x)
Forward pass of the U-Net.
- Parameters:
x (Tensor) – The input tensor.
- Returns:
The output tensor.
- Raises:
RuntimeError – If the tensors have different dimensions.
Examples
>>> unet = CNNectomeUNet(architecture_config)
>>> x = torch.randn(1, 1, 64, 64, 64)
>>> unet(x)
Note
The input tensor should be given as a 5D tensor.
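Putting the pieces together, an end-to-end sketch under the same assumptions as the CNNectomeUNetConfig example above: the field values are illustrative, and the input shape must satisfy the valid-convolution constraints of the chosen kernels and downsample factors.

import torch
from funlib.geometry import Coordinate

from dacapo.experiments.architectures import CNNectomeUNet, CNNectomeUNetConfig

# Illustrative configuration; see the CNNectomeUNetConfig example above.
config = CNNectomeUNetConfig(
    name="example_unet",
    input_shape=Coordinate((92, 92, 92)),
    fmaps_in=1,
    fmaps_out=24,
    num_fmaps=12,
    fmap_inc_factor=2,
    downsample_factors=[
        Coordinate((2, 2, 2)),
        Coordinate((2, 2, 2)),
        Coordinate((2, 2, 2)),
    ],
)
unet = CNNectomeUNet(config)

# Batch and channel dimensions precede the spatial input shape.
x = torch.randn(1, 1, *config.input_shape)
y = unet(x)  # valid convolutions shrink the spatial dimensions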