PyTorch 2.12 Release Blog
We are excited to announce the release of PyTorch® 2.12 (release notes)! The PyTorch 2.12 release features the following changes:

- Batched linalg.eigh on CUDA is up to 100x faster due to updated cuSolver backend selection
- The new torch.accelerator.Graph API unifies graph capture and replay across CUDA, XPU, and out-of-tree backends
- torch.export.save now supports Microscaling (MX) quantization formats, enabling full export of aggressively compressed models
- Adagrad now supports fused=True, joining Adam, AdamW, and SGD with a single-kernel optimizer implementation
- torch.cond control flow can now be captured and replayed inside CUDA Graphs
- ROCm users gain expandable memory segments, rocSHMEM symmetric memory collectives, and FlexAttention pipelining

Short usage sketches for each of these features follow at the end of this post.

This release is composed of 2,926 commits from 457 contributors since PyTorch 2.11. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out…
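The batched eigh speedup targets workloads with many small decompositions. As a quick illustration of the batched call (the shapes here are illustrative, not from the announcement):

```python
import torch

# A batch of 4096 small Hermitian matrices: the many-small-problems
# regime where batched cuSolver backend selection pays off most.
A = torch.randn(4096, 16, 16, device="cuda")
A = A + A.mT  # symmetrize to satisfy eigh's Hermitian requirement

eigvals, eigvecs = torch.linalg.eigh(A)
# eigvals: (4096, 16) eigenvalues in ascending order
# eigvecs: (4096, 16, 16) corresponding orthonormal eigenvectors
```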
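The announcement does not show the torch.accelerator.Graph surface itself, so the sketch below demonstrates the capture/replay pattern the unified API generalizes, using the existing torch.cuda.CUDAGraph API; the new class presumably follows the same capture-once, replay-many shape on other backends.

```python
import torch

model = torch.nn.Linear(256, 256, device="cuda")
static_in = torch.randn(8, 256, device="cuda")

# Warm up on a side stream so capture sees steady-state allocations.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_in)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):          # capture the forward pass once
    static_out = model(static_in)

static_in.copy_(torch.randn(8, 256, device="cuda"))
g.replay()                         # rerun the captured kernels; static_out updates in place
```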
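The MX support lands in the existing serialization path. The announcement does not show the quantization step itself (libraries such as torchao provide MX formats), so this sketch only shows the export round trip that an MX-quantized model would now survive; the toy module is an illustrative stand-in.

```python
import torch
from torch.export import export, load, save

class Toy(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x) @ x.mT

ep = export(Toy(), (torch.randn(4, 8),))  # trace into an ExportedProgram
save(ep, "toy.pt2")                        # per the release, MX-quantized weights serialize here too
restored = load("toy.pt2")
print(restored.module()(torch.randn(4, 8)).shape)  # torch.Size([4, 4])
```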
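Fused Adagrad drops in the same way as the other fused optimizers; a minimal step loop (sizes are arbitrary):

```python
import torch

model = torch.nn.Linear(128, 64, device="cuda")
# fused=True performs the whole Adagrad update in a single kernel,
# matching the existing fused Adam/AdamW/SGD paths.
opt = torch.optim.Adagrad(model.parameters(), lr=1e-2, fused=True)

loss = model(torch.randn(32, 128, device="cuda")).sum()
loss.backward()
opt.step()
opt.zero_grad()
```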
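For torch.cond under graph capture, the announcement gives no exact constraints. The sketch below assumes the predicate must live on the device so that replay can branch without CPU involvement, which is how CUDA's conditional graph nodes behave; treat that detail as an assumption.

```python
import torch

def scale_up(x):
    return x * 2.0

def scale_down(x):
    return x * 0.5

x = torch.randn(1024, device="cuda")
pred = torch.tensor(True, device="cuda")  # device-side predicate (assumed requirement)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    out = torch.cond(pred, scale_up, scale_down, (x,))

pred.fill_(False)  # flip the predicate in place...
g.replay()         # ...and the replayed graph takes the other branch
```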
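The ROCm bullet covers three separate features; the expandable-segments piece is allocator configuration. Assuming it is exposed through the existing PYTORCH_HIP_ALLOC_CONF variable (the HIP counterpart of PYTORCH_CUDA_ALLOC_CONF), enabling it would look like:

```python
import os

# Must be set before the first allocation; assumes the ROCm allocator
# mirrors the CUDA allocator's expandable_segments option.
os.environ["PYTORCH_HIP_ALLOC_CONF"] = "expandable_segments:True"

import torch

# ROCm builds expose HIP devices under the "cuda" device type.
x = torch.randn(1 << 20, device="cuda")
```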