Private Hub:
The GitHub of Machine Learning
Experiment, collaborate, train and serve state-of-the-art models with your own private and secure Hugging Face Hub. We have built the most advanced platform to accelerate your ML roadmap!

A Complete Platform for Machine Learning
Hugging Face’s complete ecosystem in your private, compliant environment
1. Experiment
Leverage the more than 55,000 models and 6,000 datasets publicly available on our Hub. Test different architectures such as BERT, DistilBERT, and T5, and quickly evaluate which one works best for your use case.

2. Collaborate Privately
Publish custom models, datasets, and Spaces on your Private Hub, and make it easy for other teams to discover and use them in their projects. Role-based access control, pull requests, discussions, model cards, and versioning are built in.

3. Train models
Automatically train, evaluate, and deploy state-of-the-art models with AutoTrain. From multi-class classification to regression, entity recognition, summarization, and more, we've got you covered!

4. Demo your work
Easily host a demo app to show off your machine learning work with Spaces. Get early feedback on your proofs of concept by letting stakeholders run your MVPs directly from their browsers.

5. Deploy & Serve
Data scientists don't need to go through another team to deploy their models to production; they can run them at scale, in real time, with simple API requests.

How Does It Work?
A world-class toolkit to help you move faster in your ML journey
Collaborate across teams and build upon your colleagues’ work with a centralized model and dataset repository. Share private models and define who can access them with role-based access control. Create model cards to improve searchability. Roll back mistakes with built-in model versioning. Improve collaboration in machine learning with pull requests and discussions.
Training state-of-the-art models has never been easier! Use our no-code user interface to upload a CSV and get state-of-the-art models, automatically fine-tuned, evaluated, and deployed!
With Spaces, you can create great demos of your models in just a few lines of Python using Streamlit or Gradio!
Serve your models without asking another team for resources. Just add your API token, select your model ID, and specify the data for inference! Need very low latency? Use Infinity to achieve millisecond latency on your own infrastructure.
import requests

def query(payload, model_id, api_token):
    # Send a request to the hosted Inference API for the given model
    headers = {"Authorization": f"Bearer {api_token}"}
    API_URL = f"https://api-inference.huggingface.co/models/{model_id}"
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

model_id = "distilbert-base-uncased"
api_token = "api_XXXXXXXX" # get yours at hf.co/settings/tokens
data = query({"inputs": "The goal of life is [MASK]."}, model_id, api_token)
Collaborate with Pull Requests and Discussions
A central place for feedback and iterations in machine learning
Our collaborative features radically improve the machine learning workflow. Now you can leverage pull requests and discussions to support peer reviews on models, datasets, and spaces, improve collaboration across teams and accelerate your machine learning roadmap.
Build with Hugging Face and Enterprise Security
A secure link to the open source development
Enable teams in regulated environments to keep up with the pace of open-source development without friction. The Private Hub runs in your own compliant environment and provides enterprise security features such as security scans, audit trails, SSO, and access control to keep your models and data secure.

Compliance & Certifications

GDPR Compliant

SOC 2 Type 1
Deploy Your Way
Flexible deployment options for your private Hugging Face Hub

Managed Private Hub (SaaS)
Runs in segregated virtual private clouds (VPCs) owned by Hugging Face. You can enjoy the full Hugging Face experience on your own private Hub without having to manage any infrastructure.

On-cloud Private Hub
Runs in a cloud account on AWS, Azure, or GCP owned by the customer. This deployment option gives you full administrative control over the underlying cloud infrastructure and a stronger security and compliance posture.

On-prem Private Hub
On-premises deployment of the Hugging Face Hub on your own infrastructure. For customers with strict compliance requirements or workloads that can't run on a public cloud.
A Better Way to Work in Machine Learning
Bridging the gap from research to production
Before
❌
Models and datasets aren't shared internally, and there is no collaboration across teams.
😓
Similar models are built from scratch across teams all the time.
🐢
Unfamiliar tools and non-standard workflows slow down ML development.
🤼
Waste time on Docker/Kubernetes and optimizing models for production.
After
✅
Share private models and datasets to collaborate within and across teams.
🤝
Models are reused across teams; no need to reinvent the wheel.
🚀
Familiar tools and standardized workflows accelerate your ML roadmap.
💪
Don't worry about deployment, spend more time building models.
Why The Private Hub?
A secure, collaborative environment built for accelerating your machine learning roadmap
Build faster
- Leverage the Hugging Face ecosystem to accelerate your delivery.
Seamless Collaboration
- Share private models, collaborate with version control, user access, pull requests and discussions.
Standardization
- Standardize the process to load, fine-tune, customize, and deploy models.
Less Open Source Friction
- Keep open source components secure and compliant.
Production Ready
- We take care of model performance and reliability at scale.
Integrated Workflows
- Empower data scientists to own end-to-end model development lifecycle.