Propagator Options¶
This page documents the dataclass-based options used by the Torch-family propagator interface.
Implementation: `src/sweep/propagator/options.py`
These option blocks are primarily used through:
- `PropTorch(..., eager_options=...)`
- `PropTorch(..., cuda_options=...)`
- `PropTorch(..., backend_options=...)`
Checkpoint note:
- eager checkpointing still uses the top-level `use_ckpt` and `ckpt_chunks` arguments on `PropTorch`
- CUDA checkpointing should be configured through `CUDAOptions(memory=MemoryOptions(strategy="ckpt", ckpt=CkptOptions(...)))`
EagerOptions¶
@dataclass
class EagerOptions:
use_compile: bool = False
compile_mode: str = "default"
compile_dynamic: bool = False
compile_backend: str | None = None
compile_fullgraph: bool = False
store_last_wavefield: bool = False
Use this block with:
PropTorch(..., backend="eager", eager_options=EagerOptions(...))
This block controls eager compile/runtime behavior. It does not currently
replace the top-level eager checkpoint arguments `use_ckpt` and `ckpt_chunks`.
Fields:
- `use_compile`: enables `torch.compile` on the eager backend
- `compile_mode`: compile mode passed into `torch.compile`
- `compile_dynamic`: whether dynamic-shape behavior is allowed in the compiled graph
- `compile_backend`: optional backend argument passed into `torch.compile`
- `compile_fullgraph`: whether to request full-graph compilation
- `store_last_wavefield`: whether to keep the last wavefield state for inspection/debugging
CUDAOptions¶
@dataclass
class CUDAOptions:
memory: MemoryOptions | None = None
Use this block with:
PropTorch(..., backend="cuda", cuda_options=CUDAOptions(...))
Fields:
- `memory`: CUDA memory-policy block, described in `MemoryOptions`
MemoryOptions¶
@dataclass
class MemoryOptions:
strategy: Literal["boundary", "ckpt"] | None = None
boundary: BoundaryOptions | None = None
ckpt: CkptOptions | None = None
This block chooses one CUDA memory-saving strategy.
Rules:
- if `strategy="boundary"`, you must provide `boundary=BoundaryOptions(...)`
- if `strategy="ckpt"`, you must provide `ckpt=CkptOptions(...)`
- `boundary` and `ckpt` cannot be used at the same time
- if `strategy=None`, neither `boundary` nor `ckpt` may be provided
Typical usage:
CUDAOptions(
memory=MemoryOptions(
strategy="boundary",
boundary=BoundaryOptions(storage="gpu"),
)
)
or:
CUDAOptions(
memory=MemoryOptions(
strategy="ckpt",
ckpt=CkptOptions(mode="chunk", chunks=100),
)
)
BoundaryOptions¶
@dataclass
class BoundaryOptions:
storage: Literal["gpu", "cpu"] = "gpu"
transfer_interval: int = 1
pinned_memory: bool = False
This block controls CUDA boundary saving.
Fields:
- `storage`: `"gpu"` keeps saved boundaries on device; `"cpu"` stages saved boundaries in host memory
- `transfer_interval`: only meaningful when `storage="cpu"`; controls how often boundary values are transferred or staged
- `pinned_memory`: only meaningful when `storage="cpu"`; enables pinned host memory for transfers
Validation rules:
- `transfer_interval >= 1`
- if `storage="gpu"`, then `transfer_interval` must stay at `1` and `pinned_memory` must stay `False`
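The two rules could be enforced in a `__post_init__` hook. Again, where the real validation lives is an assumption; the dataclass is re-declared here so the example is self-contained:

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative re-declaration of BoundaryOptions with its validation rules.
@dataclass
class BoundaryOptions:
    storage: Literal["gpu", "cpu"] = "gpu"
    transfer_interval: int = 1
    pinned_memory: bool = False

    def __post_init__(self) -> None:
        if self.transfer_interval < 1:
            raise ValueError("transfer_interval must be >= 1")
        # Host-transfer knobs are meaningless for on-device storage,
        # so reject non-default values rather than silently ignoring them.
        if self.storage == "gpu" and (self.transfer_interval != 1 or self.pinned_memory):
            raise ValueError("transfer_interval and pinned_memory only apply when storage='cpu'")
```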
CkptOptions¶
@dataclass
class CkptOptions:
mode: Literal["chunk", "recursive"] = "chunk"
chunks: int = 100
count: int = 0
This block controls CUDA checkpointing.
Fields:
- `mode`: `"chunk"` means periodic chunk-based replay; `"recursive"` means fixed checkpoint-budget replay
- `chunks`: used only when `mode="chunk"`
- `count`: used only when `mode="recursive"`
Validation rules:
- for `mode="chunk"`, `chunks >= 1` and `count` must remain `0`
- for `mode="recursive"`, `count >= 1` and `chunks` must remain at its default chunk-mode value
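A sketch of these rules as a `__post_init__` validator, with the dataclass re-declared so the example runs on its own. Treating the default `chunks` as `100` matches the field declaration above; that the real class checks it this way is an assumption:

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative re-declaration of CkptOptions with its validation rules.
@dataclass
class CkptOptions:
    mode: Literal["chunk", "recursive"] = "chunk"
    chunks: int = 100
    count: int = 0

    def __post_init__(self) -> None:
        if self.mode == "chunk":
            # count belongs to recursive mode, so it must stay at 0 here.
            if self.chunks < 1 or self.count != 0:
                raise ValueError("mode='chunk' requires chunks >= 1 and count == 0")
        else:  # mode == "recursive"
            # chunks belongs to chunk mode, so it must stay at its default (100).
            if self.count < 1 or self.chunks != 100:
                raise ValueError("mode='recursive' requires count >= 1 and default chunks")
```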
backend_options¶
`backend_options` is the generic merged options block accepted by `PropTorch`.
It can be used when you want to pass backend-specific fields without choosing
between `eager_options` and `cuda_options` at the call site.
Example:
PropTorch(
...,
backend="eager",
backend_options=EagerOptions(use_compile=True),
)
or:
PropTorch(
...,
backend="cuda",
backend_options=CUDAOptions(
memory=MemoryOptions(
strategy="boundary",
boundary=BoundaryOptions(storage="gpu"),
)
),
)
In most user-facing code, `eager_options` and `cuda_options` are clearer than
`backend_options`, because they make the backend split explicit.