The latest advancements in Stable Diffusion models, including SDXL Turbo, LCM-LoRA, and Stable Video Diffusion accelerated by NVIDIA TensorRT, are the focus of a recent post by Ayesha Asif on NVIDIA’s technical blog.
These enhancements, introduced at CES, allow GeForce RTX GPU owners to generate images in real time and cut significant time from video production, enabling more efficient, streamlined workflows.
Each year, the CES event gathers the world’s most influential tech giants to showcase their most innovative and advanced technologies.
Features of the Stable Diffusion models presented at CES
SDXL Turbo incorporates a new distillation technique that generates an image in a single step while maintaining state-of-the-art quality.
Up to four images per second
NVIDIA hardware, accelerated by Tensor Cores and TensorRT, can produce up to four images per second, making real-time image generation with SDXL possible for the first time.
Low-Rank Adaptation (LoRA) is a technique for fine-tuning Stable Diffusion models. Combined with a Latent Consistency Model (LCM), an LCM-LoRA checkpoint reduces the number of sampling steps required to generate a Stable Diffusion image.
LCM-LoRA runs up to 9 times faster because it uses only four sampling steps instead of the traditional 50. The LCM-LoRA model is also accelerated by TensorRT optimizations.
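To see why cutting sampling from 50 steps to 4 yields "up to 9 times faster" rather than a strict 12.5x, a rough timing model helps: each image also pays fixed per-generation costs (text encoding, VAE decoding, and so on) that do not shrink with the step count. The timings in this sketch are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope model of diffusion generation time.
# step_time and fixed_overhead are made-up illustrative values,
# not benchmarks of any real pipeline.

def generation_time(steps, step_time=0.10, fixed_overhead=0.15):
    """Estimated seconds per image: fixed overhead plus per-step cost."""
    return fixed_overhead + steps * step_time

baseline = generation_time(50)  # traditional 50-step sampling
lcm_lora = generation_time(4)   # LCM-LoRA's 4-step sampling

speedup = baseline / lcm_lora
print(f"{baseline:.2f}s -> {lcm_lora:.2f}s, speedup ~{speedup:.1f}x")
# prints "5.15s -> 0.55s, speedup ~9.4x"
```

Because the fixed overhead dominates more as the step count shrinks, the observed end-to-end speedup stays below the raw 50/4 = 12.5x step reduction.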
Stable Video Diffusion, Stability AI’s first foundation model for generative video, is based on the Stable Diffusion image model. With TensorRT, it runs up to 40% faster, potentially saving minutes per generation.