Hugging Face Hub

Free

The HF Hub is the central place to explore, experiment, collaborate, and build technology with Machine Learning.
Join the open source Machine Learning movement!


Create with ML

Packed with ML features, like model evaluation, the dataset viewer, and much more.

Collaborate

Git-based and designed for collaboration at its core.

Play and learn

Learn by experimenting and sharing with our awesome community.

Build your ML portfolio

Share your work with the world and build your own ML profile.

Spaces Hardware

Starting at $0

Spaces are one of the most popular ways to share ML applications and demos with the world.
Upgrade your Spaces with our selection of custom on-demand hardware:

Name | CPU | Memory | Accelerator | VRAM | Hourly price
CPU Basic | 2 vCPU | 16 GB | - | - | Free
CPU Upgrade | 8 vCPU | 32 GB | - | - | $0.03
NVIDIA T4 - small | 4 vCPU | 15 GB | NVIDIA T4 | 16 GB | $0.40
NVIDIA T4 - medium | 8 vCPU | 30 GB | NVIDIA T4 | 16 GB | $0.60
1x NVIDIA L4 | 8 vCPU | 30 GB | NVIDIA L4 | 24 GB | $0.80
4x NVIDIA L4 | 48 vCPU | 190 GB | NVIDIA L4 | 96 GB | $3.80
NVIDIA A10G - small | 4 vCPU | 15 GB | NVIDIA A10G | 24 GB | $1.00
NVIDIA A10G - large | 12 vCPU | 46 GB | NVIDIA A10G | 24 GB | $1.50
2x NVIDIA A10G - large | 24 vCPU | 92 GB | NVIDIA A10G | 48 GB | $3.00
4x NVIDIA A10G - large | 48 vCPU | 184 GB | NVIDIA A10G | 96 GB | $5.00
NVIDIA A100 - large | 12 vCPU | 142 GB | NVIDIA A100 | 40 GB | $4.00
NVIDIA H100 | 24 vCPU | 250 GB | NVIDIA H100 | 80 GB | $10.00
Custom | on demand | on demand | on demand | on demand | on demand
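Since hardware is billed hourly, the rates above translate directly into monthly estimates. A minimal sketch, assuming a 30-day month and always-on usage by default; the tier keys below are illustrative labels for this example, not official identifiers:

```python
# Hourly rates (USD) for Spaces hardware tiers, copied from the table above.
# The dictionary keys are informal labels chosen for this sketch.
SPACES_HOURLY_RATES = {
    "cpu-basic": 0.00,
    "cpu-upgrade": 0.03,
    "t4-small": 0.40,
    "t4-medium": 0.60,
    "1x-l4": 0.80,
    "4x-l4": 3.80,
    "a10g-small": 1.00,
    "a10g-large": 1.50,
    "a100-large": 4.00,
    "h100": 10.00,
}

def monthly_cost(tier: str, hours_per_day: float = 24.0, days: int = 30) -> float:
    """Estimate the monthly bill (USD) for running a Space on the given tier."""
    return round(SPACES_HOURLY_RATES[tier] * hours_per_day * days, 2)

print(monthly_cost("t4-small"))                      # always on: 0.40 * 24 * 30 = 288.0
print(monthly_cost("a10g-large", hours_per_day=8))   # 8 h/day:   1.50 * 8 * 30 = 360.0
```

Running a T4 small around the clock costs roughly the same as an A10G large used a third of the day, so matching the tier to actual usage patterns matters as much as the raw hourly rate.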

Spaces Persistent Storage

All Spaces get ephemeral storage for free, but you can upgrade to persistent storage at any time.

Name | Storage | Monthly price
Small | 20 GB | $5
Medium | 150 GB | $25
Large | 1 TB | $100
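Larger tiers are cheaper per gigabyte. A quick sketch of the effective price per GB, under the assumption that 1 TB is treated as 1000 GB here:

```python
# Persistent storage tiers from the table above: (capacity in GB, monthly price in USD).
# Treating 1 TB as 1000 GB is an assumption made for this calculation.
STORAGE_TIERS = {
    "Small": (20, 5),
    "Medium": (150, 25),
    "Large": (1000, 100),
}

def price_per_gb(tier: str) -> float:
    """Effective monthly price per GB (USD) for a storage tier."""
    gb, usd = STORAGE_TIERS[tier]
    return round(usd / gb, 3)

for name in STORAGE_TIERS:
    print(name, price_per_gb(name))  # Small 0.25, Medium 0.167, Large 0.1
```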

Building something cool as a side project? We also offer community GPU grants.

Inference Endpoints

Starting at $0.033/hour

Inference Endpoints (dedicated) offers a secure production solution for deploying any ML model on dedicated, autoscaling infrastructure, right from the HF Hub.


CPU instances

Provider | Architecture | vCPUs | Memory | Hourly rate
aws | Intel Sapphire Rapids | 1 | 2 GB | $0.03
aws | Intel Sapphire Rapids | 2 | 4 GB | $0.07
aws | Intel Sapphire Rapids | 4 | 8 GB | $0.13
aws | Intel Sapphire Rapids | 8 | 16 GB | $0.27
aws | Inferentia2 Neuron | 1 | 14.5 GB | $0.75
aws | Inferentia2 Neuron | 12 | 760 GB | $12.00
azure | Intel Xeon | 1 | 2 GB | $0.06
azure | Intel Xeon | 2 | 4 GB | $0.12
azure | Intel Xeon | 4 | 8 GB | $0.24
azure | Intel Xeon | 8 | 16 GB | $0.48
gcp | Intel Sapphire Rapids | 1 | 2 GB | $0.07
gcp | Intel Sapphire Rapids | 2 | 4 GB | $0.14
gcp | Intel Sapphire Rapids | 4 | 8 GB | $0.28
gcp | Intel Sapphire Rapids | 8 | 16 GB | $0.56

GPU instances

Provider | Architecture | GPUs | GPU Memory | Hourly rate
aws | NVIDIA T4 | 1 | 14 GB | $0.50
aws | NVIDIA T4 | 4 | 56 GB | $3.00
aws | NVIDIA L4 | 1 | 24 GB | $0.80
aws | NVIDIA L4 | 4 | 96 GB | $3.80
aws | NVIDIA A10G | 1 | 24 GB | $1.00
aws | NVIDIA A10G | 4 | 96 GB | $5.00
aws | NVIDIA A100 | 1 | 80 GB | $4.00
aws | NVIDIA A100 | 2 | 160 GB | $8.00
aws | NVIDIA A100 | 4 | 320 GB | $16.00
aws | NVIDIA A100 | 8 | 640 GB | $32.00
gcp | NVIDIA T4 | 1 | 16 GB | $0.50
gcp | NVIDIA L4 | 1 | 24 GB | $1.00
gcp | NVIDIA L4 | 4 | 96 GB | $5.00
gcp | NVIDIA A100 | 1 | 80 GB | $6.00
gcp | NVIDIA A100 | 2 | 160 GB | $12.00
gcp | NVIDIA A100 | 4 | 320 GB | $24.00
gcp | NVIDIA A100 | 8 | 640 GB | $48.00
gcp | NVIDIA H100 | 1 | 80 GB | $12.50
gcp | NVIDIA H100 | 2 | 160 GB | $25.00
gcp | NVIDIA H100 | 4 | 320 GB | $50.00
gcp | NVIDIA H100 | 8 | 640 GB | $100.00
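When choosing an instance, a common starting point is the smallest hourly rate that still fits the model in GPU memory. A minimal sketch over the GPU pricing table above (the data is copied from the table; the selection logic is illustrative, not an official sizing tool):

```python
# GPU instance options from the table above:
# (provider, gpu, count, total GPU memory in GB, hourly rate in USD).
GPU_INSTANCES = [
    ("aws", "T4", 1, 14, 0.50),   ("aws", "T4", 4, 56, 3.00),
    ("aws", "L4", 1, 24, 0.80),   ("aws", "L4", 4, 96, 3.80),
    ("aws", "A10G", 1, 24, 1.00), ("aws", "A10G", 4, 96, 5.00),
    ("aws", "A100", 1, 80, 4.00), ("aws", "A100", 2, 160, 8.00),
    ("aws", "A100", 4, 320, 16.00), ("aws", "A100", 8, 640, 32.00),
    ("gcp", "T4", 1, 16, 0.50),
    ("gcp", "L4", 1, 24, 1.00),   ("gcp", "L4", 4, 96, 5.00),
    ("gcp", "A100", 1, 80, 6.00), ("gcp", "A100", 2, 160, 12.00),
    ("gcp", "A100", 4, 320, 24.00), ("gcp", "A100", 8, 640, 48.00),
    ("gcp", "H100", 1, 80, 12.50), ("gcp", "H100", 2, 160, 25.00),
    ("gcp", "H100", 4, 320, 50.00), ("gcp", "H100", 8, 640, 100.00),
]

def cheapest_with_memory(min_gpu_gb: float):
    """Return the lowest-hourly-rate instance with at least min_gpu_gb of GPU memory,
    or None if no instance is large enough."""
    candidates = [i for i in GPU_INSTANCES if i[3] >= min_gpu_gb]
    return min(candidates, key=lambda i: i[4]) if candidates else None

print(cheapest_with_memory(24))   # -> ('aws', 'L4', 1, 24, 0.8)
print(cheapest_with_memory(100))  # -> ('aws', 'A100', 2, 160, 8.0)
```

Note that total GPU memory across multiple cards is not always usable as one pool; whether a model can be sharded across GPUs depends on the serving stack, so treat this as a first-pass filter on price, not a capacity guarantee.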

Pro Account

PRO

A monthly subscription to access powerful features.

  • ZeroGPU: Use distributed A100 hardware on your Spaces

  • Dev Mode: Faster iteration cycles with SSH/VS Code support for Spaces

  • Inference API: Get higher rate limits for serverless inference

  • Dataset Viewer: Activate it on private datasets

  • Social Posts: Share short updates with the community

  • Blog Articles: Publish articles to the Hugging Face blog

  • Features Preview: Get early access to upcoming features

  • PRO Badge: Show your support on your profile