Hugging Face, a leading provider of open source machine learning tools, and AWS have teamed up to increase access to artificial intelligence.
This collaboration will make Hugging Face’s state-of-the-art natural language processing (NLP) transformers and models available to AWS customers, making it easier to develop and deploy AI applications.
By partnering with AWS, Hugging Face will be able to make its tools and expertise available to a wider audience.
The benefits include faster training and scaling, plus low-latency, high-throughput inference. Amazon EC2’s new Inf2 instances, powered by the latest generation of AWS Inferentia, are purpose-built for deploying the latest large language and vision models, and they raise performance over Inf1 by delivering up to 4x higher throughput and up to 10x lower latency.
Developers can use AWS Trainium and AWS Inferentia through managed services such as Amazon SageMaker, a service that provides tools and workflows for machine learning, or they can self-manage on Amazon EC2.
With just a few lines of code, you can import, train, and fine-tune pre-trained NLP Transformer models such as BERT, GPT-2, RoBERTa, XLM, and DistilBERT, and deploy them on Amazon SageMaker.
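As an illustration, a SageMaker training job for one of these models might be configured as in the sketch below. This is a minimal example, not a definitive recipe: the entry-point script name, IAM role ARN, S3 path, framework versions, and instance type are all placeholder assumptions you would replace with values from your own AWS account, and the actual `sagemaker` SDK launch is shown only in a comment so the snippet stays self-contained.

```python
def build_hf_estimator_kwargs(model_name: str, instance_type: str) -> dict:
    """Assemble keyword arguments for a SageMaker Hugging Face training job.

    The script name, role ARN, and framework versions below are
    placeholders; substitute real values from your own AWS setup.
    """
    return {
        "entry_point": "train.py",       # hypothetical fine-tuning script
        "instance_type": instance_type,  # e.g. an AWS Trainium-backed type
        "instance_count": 1,
        # Placeholder IAM role ARN with SageMaker permissions:
        "role": "arn:aws:iam::111122223333:role/SageMakerRole",
        "transformers_version": "4.26",
        "pytorch_version": "1.13",
        "py_version": "py39",
        "hyperparameters": {"model_name_or_path": model_name, "epochs": 1},
    }


kwargs = build_hf_estimator_kwargs("bert-base-uncased", "ml.trn1.2xlarge")

# With AWS credentials configured, the job would then be launched through
# the sagemaker SDK (not executed in this sketch):
#
#   from sagemaker.huggingface import HuggingFace
#   HuggingFace(**kwargs).fit({"train": "s3://my-bucket/train"})
```

Deployment follows the same pattern: the trained estimator exposes a deploy step that stands up a managed inference endpoint, so the same few lines of configuration cover both training and serving.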
Expectations for this partnership center on its impact on the AI market: it will enable more companies and developers to use state-of-the-art AI tools to build custom solutions.