Inference Endpoints

Inference Endpoints is a managed service to deploy your AI model to production. Here you’ll find quickstarts, guides, tutorials, use cases and a lot more.

Why use Inference Endpoints

Inference Endpoints makes deploying AI models to production a smooth experience. Instead of spending weeks configuring infrastructure, managing servers, and debugging deployment issues, you can focus on what matters most: your model and your users.

Our platform eliminates the complexity of AI infrastructure while providing enterprise-grade features that scale with your business needs. Whether you’re a startup launching your first AI product or an enterprise team managing hundreds of models, Inference Endpoints provides the reliability, performance, and cost-efficiency you need.

Key benefits include:

  • ⬇️ Reduce operational overhead: Eliminate the need for dedicated DevOps teams and infrastructure management, letting you focus on innovation.
  • 🚀 Scale with confidence: Handle traffic spikes automatically without worrying about capacity planning or performance degradation.
  • ⬇️ Lower total cost of ownership: Avoid the hidden costs of self-managed infrastructure including maintenance, monitoring, and security compliance.
  • 💻 Future-proof your AI stack: Stay current with the latest frameworks and optimizations without managing complex upgrades.
  • 🔥 Focus on what matters: Spend your time improving your models and building great user experiences, not managing servers.

Key Features

  • 📦 Fully managed infrastructure: you don’t need to worry about things like Kubernetes, CUDA versions, or configuring VPNs. Inference Endpoints handles this under the hood so you can focus on deploying your model and serving customers as fast as possible.
  • ↕️ Autoscaling: as traffic to your model increases, you need more firepower. Your Inference Endpoint scales up as traffic grows and back down as it drops, saving you unnecessary compute cost.
  • 👀 Observability: understand and debug what’s going on in your model through logs & metrics.
  • 🔥 Integrated support for open-source serving frameworks: Whether you want to deploy your model with vLLM, TGI, or a custom container, we’ve got you covered!
  • 🤗 Seamless integration with the Hugging Face Hub: Downloading model weights quickly and with the correct security policies is paramount when bringing an AI model to production. With Inference Endpoints, it’s easy and safe (see the programmatic sketch after this list).

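If you prefer to work from code rather than the web UI, the `huggingface_hub` Python library can create and query endpoints programmatically. The snippet below is a minimal sketch, not the canonical workflow: the model repository, cloud vendor, region, and instance values are illustrative assumptions you would replace with your own.

```python
# Minimal sketch: create a dedicated endpoint and query it with huggingface_hub.
# The model, vendor, region, and instance values below are example assumptions.
from huggingface_hub import create_inference_endpoint, InferenceClient

endpoint = create_inference_endpoint(
    "my-first-endpoint",                 # endpoint name (example)
    repository="openai-community/gpt2",  # any Hub model repo you can access
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",                  # assumed size/type; check the UI for
    instance_type="nvidia-t4",           # what is available to your account
)

endpoint.wait()  # block until the endpoint is deployed and running

# Send a request to the running endpoint.
client = InferenceClient(model=endpoint.url)
print(client.text_generation("Inference Endpoints makes deploying models"))

endpoint.pause()  # pause (or delete) when done to avoid idle compute cost
```
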
Further Reading

If you’re considering using Inference Endpoints in production, read these two case studies:

You might also find these blogs helpful:

Or try out the Quick Start!
