MemPool#

class torch.cuda.memory.MemPool(*args, **kwargs)[source]#

MemPool represents a pool of memory in a caching allocator. Currently, it is just a handle to the pool object (its ID) maintained in the CUDACachingAllocator. A minimal usage sketch follows the parameter list below.

Parameters
  • allocator (torch._C._cuda_CUDAAllocator, optional) – a torch._C._cuda_CUDAAllocator object that defines how memory gets allocated in the pool. If allocator is None (default), memory allocation follows the default/current configuration of the CUDACachingAllocator.

  • use_on_oom (bool) – whether this pool can be used as a last resort when an allocation outside of the pool fails with an out-of-memory error. Defaults to False.
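
A minimal sketch, assuming a CUDA device is available; allocations are routed into the pool via the torch.cuda.use_mem_pool context manager:

    import torch

    # Create a pool backed by the default CUDACachingAllocator configuration.
    pool = torch.cuda.MemPool()

    # Allocations made inside this context are served from `pool`.
    with torch.cuda.use_mem_pool(pool):
        x = torch.randn(1024, device="cuda")

    print(pool.id)  # a tuple of two ints identifying the pool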

property allocator: Optional[_cuda_CUDAAllocator]#

Returns the allocator this MemPool routes allocations to.

property id: tuple[int, int]#

Returns the ID of this pool as a tuple of two ints.
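
A short sketch of both properties, assuming a pool constructed without a custom allocator:

    import torch

    pool = torch.cuda.MemPool()  # allocator=None, the default

    print(pool.allocator)  # expected None: the default configuration is used
    print(pool.id)         # a tuple of two ints, unique per pool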

snapshot()[source]#

Returns a snapshot of this pool's state in the CUDA memory allocator across all devices.

Interpreting the output of this function requires familiarity with the memory allocator internals.

Note

See Memory management for more details about GPU memory management.
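
A sketch of taking a snapshot after allocating through the pool; the per-segment schema is an allocator internal, so only the entry count is printed here:

    import torch

    pool = torch.cuda.MemPool()
    with torch.cuda.use_mem_pool(pool):
        y = torch.empty(1 << 20, device="cuda")  # keep `y` alive for the snapshot

    snap = pool.snapshot()
    # Each entry describes one allocator segment owned by this pool;
    # interpreting the fields requires knowledge of the allocator internals.
    print(len(snap), "segment(s) in the pool")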

use_count()[source]#

Returns the reference count of this pool.

Return type

int
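
A sketch of observing the count; the exact values are an assumption about the allocator's internal refcounting, not a documented contract:

    import torch

    pool = torch.cuda.MemPool()
    print(pool.use_count())  # expected 1: only this MemPool object holds a reference

    with torch.cuda.use_mem_pool(pool):
        t = torch.ones(16, device="cuda")
        # The count is expected to grow while the pool is actively in use.
        print(pool.use_count())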
