Apple released optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices.

What does this mean? In practice, these improvements make it possible to run the Stable Diffusion text-to-image model on Apple Silicon devices running the latest operating systems, iOS 16.2 and macOS 13.1.

Stable Diffusion has been very well received by the community of artists, developers and hobbyists since its debut in 2022, because it can create unprecedented visual content from nothing more than a text prompt. In response, the community built an extensive ecosystem of extensions and tools around this core technology in a matter of weeks. There are already methods that personalize Stable Diffusion, extend it to languages other than English, and more, thanks to open-source projects such as Hugging Face diffusers.

Developers are discovering many more creative uses of Stable Diffusion, such as image editing, inpainting, outpainting, super-resolution, style transfer, and even color palette generation. With the number of applications growing, it is important that developers can use this technology effectively to build apps that creatives everywhere will be able to use.

There are many reasons why deploying Stable Diffusion on device in an app is preferable to a server-based approach. The end user's privacy is protected because any data the user provides as input to the model stays on the user's device. After the initial download, users don't need an internet connection to use the model. And deploying the model locally enables developers to reduce or eliminate their server-related costs.

Getting to a compelling result with Stable Diffusion can require a lot of time and iteration, so a core challenge of on-device deployment is making sure the model can generate results fast enough. This requires executing a complex pipeline of four different neural networks totaling approximately 1.275 billion parameters.

Optimizing Core ML for Stable Diffusion and simplifying model conversion makes it easier for developers to incorporate this technology in their apps in a privacy-preserving and economically feasible way, while getting the best performance on Apple Silicon.

This release comprises a Python package for converting Stable Diffusion models from PyTorch to Core ML using diffusers and coremltools, as well as a Swift package to deploy the models. To get started, visit the Core ML Stable Diffusion code repository for detailed instructions on benchmarking and deployment.
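To make the conversion step concrete, here is a minimal, illustrative Python sketch of the PyTorch-to-Core ML idea, converting only the CLIP text encoder of a Stable Diffusion checkpoint with transformers and coremltools. The checkpoint name, the choice of sub-model, and the exact converter options are assumptions for illustration only; the official repository ships a full converter that handles all four networks, and its exact flags and API may differ by version.

```python
# Illustrative sketch (not the official converter): convert the Stable Diffusion
# text encoder from PyTorch to a Core ML ML Program (.mlpackage).
# Assumptions: the "CompVis/stable-diffusion-v1-4" checkpoint and current
# transformers/coremltools APIs; details may vary across versions.
import numpy as np
import torch
import coremltools as ct
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"  # example checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
# torchscript=True makes the model return plain tuples so it can be traced
text_encoder = CLIPTextModel.from_pretrained(
    model_id, subfolder="text_encoder", torchscript=True
).eval()

# Trace with a fixed-length prompt (Stable Diffusion uses 77 tokens)
example = tokenizer(
    "a photo of an astronaut riding a horse on mars",
    padding="max_length",
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
)
traced = torch.jit.trace(text_encoder, example.input_ids)

# Convert the traced graph to an ML Program targeting macOS 13 / iOS 16
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="input_ids",
                          shape=tuple(example.input_ids.shape),
                          dtype=np.int32)],
    minimum_deployment_target=ct.target.macOS13,
)
mlmodel.save("TextEncoder.mlpackage")
```

The resulting .mlpackage files can then be loaded on device through Core ML or the accompanying Swift package; the repository's README documents the full conversion of all four networks, the supported compute units, and the benchmarking steps.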
