Hub
Host Git-based models, datasets and Spaces on the Hugging Face Hub.
Transformers
State-of-the-art ML for PyTorch, TensorFlow, and JAX.
Diffusers
State-of-the-art diffusion models for image and audio generation in PyTorch.
Datasets
Access and share datasets for computer vision, audio, and NLP tasks.
Gradio
Build machine learning demos and other web apps in just a few lines of Python.
Hub Python Library
Client library for the HF Hub: manage repositories from your Python runtime.
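As an illustrative sketch (assuming `huggingface_hub` is installed), the client is a plain Python object; no network request is made until one of its methods is called:

```python
from huggingface_hub import HfApi

# Create a client; repository management happens through its methods,
# e.g. api.list_models(), api.create_repo(...), api.upload_file(...).
api = HfApi()
print(api.endpoint)  # the Hub URL this client talks to
```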
Huggingface.js
A collection of JS libraries to interact with Hugging Face, with TS types included.
Transformers.js
Community library to run pretrained models from Transformers in your browser.
Inference API (serverless)
Experiment with over 200k models easily using the serverless tier of Inference Endpoints.
Inference Endpoints (dedicated)
Easily deploy models to production on dedicated, fully managed infrastructure.
PEFT
Parameter-efficient fine-tuning methods for large models.
Accelerate
Easily train and use PyTorch models with multi-GPU, TPU, and mixed precision.
Optimum
Fast training and inference of HF Transformers with easy-to-use hardware optimization tools.
AWS Trainium & Inferentia
Train and deploy Transformers & Diffusers with AWS Trainium and AWS Inferentia.
Tokenizers
Fast tokenizers, optimized for both research and production.
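As a small sketch (assuming the `tokenizers` package is installed), a BPE tokenizer can be trained entirely in memory on a toy corpus:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build and train a tiny BPE tokenizer without any files on disk.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]"], vocab_size=100)
tokenizer.train_from_iterator(["hello world", "hello tokenizers"], trainer=trainer)

# Encode a sentence; the Encoding carries aligned tokens and ids.
encoding = tokenizer.encode("hello world")
print(encoding.tokens)
```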
Evaluate
Evaluate and report model performance more easily and in a more standardized way.
Tasks
All things about ML tasks: demos, use cases, models, datasets, and more!
Dataset viewer
API to access the contents, metadata and basic statistics of all Hugging Face Hub datasets.
TRL
Train transformer language models with reinforcement learning.
Amazon SageMaker
Train and deploy Transformer models with Amazon SageMaker and Hugging Face DLCs.
timm
State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.
Safetensors
Simple, safe, and fast way to store and distribute neural network weights.
Text Generation Inference
Toolkit to serve Large Language Models.
AutoTrain API and UI.
Text Embeddings Inference
Toolkit to serve Text Embedding Models.
Competitions
Create your own competitions on Hugging Face.