dacapo.compute_context.local_torch
Classes
The LocalTorch class is a subclass of the ComputeContext class.
Module Contents
- class dacapo.compute_context.local_torch.LocalTorch
The LocalTorch class is a subclass of the ComputeContext class. It is used to specify the context in which computations are to be done. LocalTorch is used to specify that computations are to be done on the local machine using PyTorch.
- _device
This stores the type of device on which torch computations are to be done. It can take "cuda" for GPU or "cpu" for CPU. A None value results in automatic detection of the device type.
- Type:
Optional[str]
- oom_limit
The amount of GPU memory to leave free, in GB. If the free GPU memory falls below this limit, we will fall back on CPU.
- Type:
Optional[float | int]
- device()
Returns the torch device object.
Note
The class is a subclass of the ComputeContext class.
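The oom_limit fallback described above can be sketched in plain Python. This is an illustrative sketch only: the function name `select_device` and its parameters (`cuda_available`, `free_gpu_memory_gb`, which stand in for the real CUDA queries) are assumptions, not part of the dacapo API.

```python
def select_device(device=None, oom_limit=1.0, cuda_available=False,
                  free_gpu_memory_gb=0.0):
    """Pick a torch device string, falling back to CPU when free GPU memory
    is below the oom_limit threshold (hypothetical sketch of the logic)."""
    if device is not None:
        # An explicitly requested device ("cuda" or "cpu") wins.
        return device
    if cuda_available and free_gpu_memory_gb >= oom_limit:
        return "cuda"
    # No GPU, or too little free memory: fall back on CPU.
    return "cpu"
```

For example, with `oom_limit=1.0`, a GPU reporting only 0.5 GB free would yield `"cpu"`, while one reporting 8 GB free would yield `"cuda"`.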
- distribute_workers: bool | None
The ComputeContext class is an abstract base class for defining the context in which computations are to be done. It inherits from the built-in ABC (Abstract Base Class) helper. Other classes can inherit from it to define their own specific variations of the context. Subclasses are required to implement several property methods; the class also includes additional methods related to the context design.
- device
The device on which computations are to be done.
- _wrap_command(command)
Wraps a command in the context-specific command.
- wrap_command(command)
Wraps a command in the context-specific command and returns it.
- execute(command)
Runs a command in the context-specific way.
Note
The class is abstract and requires subclasses to implement its abstract methods.
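The abstract-base-class pattern described above can be sketched with the standard-library abc module. The class and method bodies below are illustrative assumptions showing the shape of the interface (an abstract device property plus command-wrapping helpers), not the library's actual implementation.

```python
from abc import ABC, abstractmethod


class ComputeContextSketch(ABC):
    """Hypothetical sketch of the ComputeContext interface."""

    @property
    @abstractmethod
    def device(self):
        """The device on which computations are to be done."""

    def _wrap_command(self, command):
        # A subclass could prepend context-specific launcher arguments
        # (e.g. a cluster-submission command) here; locally it is a no-op.
        return list(command)

    def wrap_command(self, command):
        # Public entry point: wrap the command and return it.
        return self._wrap_command(command)


class LocalSketch(ComputeContextSketch):
    """A minimal concrete context, analogous in spirit to LocalTorch."""

    @property
    def device(self):
        return "cpu"  # a real local context might auto-detect "cuda"
```

Instantiating the abstract base directly raises TypeError, while the concrete subclass works: `LocalSketch().wrap_command(["python", "train.py"])` simply returns the command unchanged.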
- property device
A property method that returns the torch device object. It automatically detects and uses "cuda" (GPU) if available; otherwise it falls back to "cpu".
- Returns:
The torch device object.
- Return type:
torch.device
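The auto-detection behavior of the device property reduces to a simple rule, sketched here without a torch dependency. The parameter `cuda_available` is a stand-in assumption for a runtime CUDA check such as `torch.cuda.is_available()`; in the real property, the resulting name would be wrapped as a `torch.device` object.

```python
def auto_device(cuda_available: bool) -> str:
    """Return the device name the property would select: "cuda" when a GPU
    is available, "cpu" otherwise (illustrative stand-in for the torch check)."""
    return "cuda" if cuda_available else "cpu"
```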