# Faster Diffusion on Blackwell: MXFP8 and NVFP4 with Diffusers and TorchAO
Diffusion models for image and video generation have surged in popularity, delivering strikingly realistic visual media. However, their adoption is often constrained by steep memory and compute requirements, which makes quantization essential for serving these models efficiently. In this post, we demonstrate reproducible end-to-end inference speedups of up to 1.26x with MXFP8 and 1.68x with NVFP4 using diffusers and torchao on the Flux.1-Dev, QwenImage, and LTX-2 models on an NVIDIA B200. We also outline how we used selective quantization, CUDA Graphs, and LPIPS to iterate toward the best accuracy-performance trade-off for these models. The code to reproduce the experiments in this post is here.

Table of contents:

- Background on MXFP8 and NVFP4
- Basic Usage with Diffusers and TorchAO
- Benchmark Results
- Technical Considerations

## Background on MXFP8 and NVFP4

MXFP8 and NVFP4 are…