The Home of Machine Learning
Create, discover and collaborate on ML better.
The collaboration platform
Host and collaborate on unlimited public models, datasets and applications.
Move faster
With the HF open source stack.
Explore all modalities
Text, image, video, audio or even 3D.
Build your portfolio
Share your work with the world and build your ML profile.
Accelerate your ML
We provide paid Compute and Enterprise solutions.
Compute
Deploy on optimized Inference Endpoints or update your Spaces applications to a GPU in a few clicks.
Starting at $0.60/hour for GPU
Enterprise
Give your team the most advanced platform to build AI with enterprise-grade security, access controls and dedicated support.
Starting at $20/user/month
More than 50,000 organizations are using Hugging Face
Ai2 (Enterprise)
AI at Meta (Enterprise)
Amazon Web Services
Intel
Microsoft
Grammarly
Writer (Enterprise)
Our Open Source
We are building the foundation of ML tooling with the community.
Transformers
State-of-the-art ML for PyTorch, TensorFlow, and JAX.
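For example, a minimal sketch of loading a ready-made pipeline; the task and input sentence are illustrative:

```python
from transformers import pipeline

# Load a default sentiment-analysis pipeline and classify a sentence.
classifier = pipeline("sentiment-analysis")
print(classifier("Hugging Face makes sharing models easy."))
```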
Diffusers
State-of-the-art diffusion models for image and audio generation in PyTorch.
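A minimal text-to-image sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU; any diffusion checkpoint on the Hub works similarly:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint from the Hub and generate one image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```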
Safetensors
A simple, safe way to store and distribute neural network weights quickly.
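A minimal sketch of saving and reloading a dictionary of PyTorch tensors:

```python
import torch
from safetensors.torch import save_file, load_file

# Serialize tensors without pickle, then load them back.
weights = {"embedding.weight": torch.randn(10, 4)}
save_file(weights, "model.safetensors")
restored = load_file("model.safetensors")
print(restored["embedding.weight"].shape)
```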
Hub Python Library
Client library for the HF Hub: manage repositories from your Python runtime.
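A minimal sketch of browsing the Hub and downloading a file; the repository and filename are illustrative:

```python
from huggingface_hub import HfApi, hf_hub_download

# List a few text-classification models hosted on the Hub.
api = HfApi()
for model in api.list_models(filter="text-classification", limit=5):
    print(model.id)

# Download a single file from a model repository into the local cache.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)
```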
Tokenizers
Fast tokenizers, optimized for both research and production.
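A minimal sketch, assuming the bert-base-uncased tokenizer hosted on the Hub:

```python
from tokenizers import Tokenizer

# Load a pretrained tokenizer and encode a sentence into tokens and ids.
tok = Tokenizer.from_pretrained("bert-base-uncased")
encoding = tok.encode("Fast tokenization, optimized for production.")
print(encoding.tokens)
print(encoding.ids)
```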
PEFT
Parameter-efficient fine-tuning methods for large models.
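A minimal LoRA sketch on top of GPT-2; the rank and target modules are illustrative choices:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Wrap a base model with LoRA adapters; only the small adapter matrices are trainable.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```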
Transformers.js
State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server.
timm
State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.
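A minimal sketch of creating a pretrained image classifier and running a forward pass on a dummy batch:

```python
import torch
import timm

# Instantiate a pretrained ResNet-50 and classify a random 224x224 image tensor.
model = timm.create_model("resnet50", pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # 1000 ImageNet classes
```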
TRL
Train transformer language models with reinforcement learning.
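A minimal supervised fine-tuning sketch (TRL also ships DPO and PPO trainers); the model and dataset names are illustrative, and constructor arguments vary across TRL versions:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Supervised fine-tuning is the usual first step before preference or RL training.
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output"),
)
trainer.train()
```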
Datasets
Access and share datasets for computer vision, audio, and NLP tasks.
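A minimal sketch using the public imdb dataset:

```python
from datasets import load_dataset

# Load a Hub dataset and peek at the first example.
dataset = load_dataset("imdb", split="train")
print(dataset[0]["label"], dataset[0]["text"][:80])
```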
Text Generation Inference
Toolkit to serve Large Language Models.
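A minimal client-side sketch, assuming a Text Generation Inference server is already running locally on port 8080:

```python
from huggingface_hub import InferenceClient

# Send a prompt to a running Text Generation Inference server.
client = InferenceClient("http://localhost:8080")
print(client.text_generation("What is deep learning?", max_new_tokens=64))
```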
Accelerate
Easily train and use PyTorch models with multi-GPU, TPU, and mixed precision.
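A minimal training-loop sketch; the model and data are toy placeholders:

```python
import torch
from accelerate import Accelerator

# accelerator.prepare() makes the same loop run on CPU, one GPU, or many GPUs.
accelerator = Accelerator()
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = torch.utils.data.DataLoader(data, batch_size=8)
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)
    optimizer.step()
```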