Description
Hi, and thank you for maintaining this excellent package!
I would like to request (or discuss) a feature that would allow running tensor contractions with an explicit memory limit on the temporary workspace/cache used during the contraction.
Motivation
When contracting large tensors, the temporary memory allocated internally during contraction planning and execution can exceed the available RAM on HPC clusters or in memory-limited jobs (e.g. under systemd cgroups). In my use case (large 2D tensor network contractions for condensed-matter simulations), even a single contraction like:
@tensor A[...] = B[...] * C[...]
may allocate temporary intermediates that are larger than the tensors themselves.
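For concreteness, here is a toy sketch (index structure and dimensions invented for illustration, not taken from my actual HOTRG code) of how the intermediate of a multi-tensor contraction can dwarf both the inputs and the result:

```julia
# Toy illustration only: index structure and sizes are made up for this sketch.
using TensorOperations

D = 100
A = randn(D, D, D)   # each input is D^3 = 10^6 Float64 values, ~8 MB
B = randn(D, D, D)
C = randn(D, D, D)

# Every pairwise order for this network first builds a rank-4 intermediate
# with D^4 = 10^8 entries (~800 MB), while inputs and result are only ~8 MB.
@tensor T[a, d, f] := A[a, b, c] * B[c, d, e] * C[e, f, b]
```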
I would like to constrain the contraction planner so that it only uses contraction paths whose peak memory fits within a user-specified limit (e.g., 40 GB), even if this results in a slower contraction.
Requested Feature
Add an option such as:
@tensor memory_limit=40_000_000_000 A[...] = B[...] * C[...]
or a global setting like:
TensorOperations.set_memory_limit!(4e10)
This would instruct TensorOperations to choose contraction strategies (path + temporary intermediates) under the constraint that peak temporary memory ≤ memory_limit.
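For reference, the workaround I currently rely on is to split such expressions into explicit pairwise @tensor steps, choosing the order by hand so that the peak intermediate stays small (shapes below are invented for illustration). A built-in memory_limit would let the planner make this choice automatically:

```julia
using TensorOperations

# Invented chain example: one pairwise order needs a 10_000 x 10_000 (~0.8 GB)
# intermediate, the other only a 10 x 10 one, for the same final result.
A = randn(10, 10_000)
B = randn(10_000, 10)
C = randn(10, 10_000)

# Manual workaround: materialize the small intermediate explicitly, so the
# peak temporary memory is under my control rather than the planner's.
@tensor AB[i, k] := A[i, j] * B[j, k]    # 10 x 10 intermediate (~800 bytes)
@tensor T[i, l]  := AB[i, k] * C[k, l]   # final 10 x 10_000 result
```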
Why this matters
- Many HPC jobs have strict memory limits (e.g. via systemd cgroups or schedulers).
- Current behavior sometimes causes unexpected out-of-memory (OOM) failures during contraction planning or execution.
- For my application (HOTRG/TNRG), even a single contraction can exceed node memory unless the algorithm is forced to use a less memory-hungry contraction order.
- Memory-aware contraction options, even approximate or heuristic ones, would significantly improve usability for large-scale tensor network simulations (a rough sketch of such an estimate follows below).
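To make the "approximate or heuristic" point concrete, here is a minimal sketch of the kind of estimate such a planner could apply; the helper names are hypothetical and nothing here is existing TensorOperations API. The idea is simply that the peak footprint of a candidate pairwise path is roughly its largest intermediate, i.e. the product of that intermediate's index dimensions times the element size.

```julia
# Hypothetical helpers, purely illustrative; not existing TensorOperations API.

# Bytes needed for one intermediate, given the dimensions of its open indices.
intermediate_bytes(dims, T::Type) = prod(dims) * sizeof(T)

# A candidate contraction path fits the limit if its largest intermediate does.
path_fits(intermediate_dims, T::Type, limit) =
    maximum(intermediate_bytes(d, T) for d in intermediate_dims) <= limit

# For the chain example above: the order with a 10_000 x 10_000 Float64
# intermediate needs ~0.8 GB, which a 40 GB limit allows but a 0.5 GB limit rejects.
path_fits([(10_000, 10_000)], Float64, 40_000_000_000)  # true
path_fits([(10_000, 10_000)], Float64, 500_000_000)     # false
```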
Best regards,
Yonatan