Vultr, a cloud computing platform, has launched a new serverless Inference-as-a-Service offering for deploying AI models.

Vultr Cloud Inference offers customers scalability, lower latency, and cost efficiency, the company said in the release.

Kevin Cochrane, chief marketing officer at Vultr, says the new platform provides a technology foundation on which organizations can deploy AI models globally, delivering low-latency access and a consistent user experience worldwide.

Vultr’s global infrastructure is powered by NVIDIA GPUs. With dedicated compute clusters available on six continents, Vultr Cloud Inference ensures that companies can comply with local data sovereignty, data residency, and privacy regulations by deploying their AI applications in regions that align with legal requirements and business objectives.

“The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally,” said Matt McGrigg, director of global business development, cloud partners at NVIDIA.

With Vultr Cloud Inference, users can integrate and deploy their own models – regardless of the platforms on which they were trained – onto Vultr infrastructure powered by NVIDIA GPUs.

Editor @ DevStyleR