url (stringlengths 23–7.17k) | text (stringlengths 0–1.65M) |
---|---|
https://huggingface.co/ | The Home of Machine Learning
Create, discover and collaborate on ML better.
Accelerate your ML
We provide paid Compute and Enterprise solutions.
Enterprise
Give your team the most advanced platform to build AI with enterprise-grade security, access controls and dedicated support.
More than 50,000 organizations are using Hugging Face
Our Open Source
We are building the foundation of ML tooling with the community. |
https://huggingface.co/amazon | Hugging Face is working with Amazon Web Services to make it easier than ever for startups and enterprises to train and deploy Hugging Face models in Amazon SageMaker.
To train Hugging Face models in Amazon SageMaker, you can use the Hugging Face Deep Learning Containers (DLCs) and the Hugging Face support in the SageMaker Python SDK.
The DLCs are fully integrated with the SageMaker distributed training libraries to train models more quickly using the latest generation of accelerated computing instances available on Amazon EC2. With the SageMaker Python SDK, you can start training with just a single line of code, enabling your teams to move from idea to production more quickly.
To deploy Hugging Face models in Amazon SageMaker, you can use the Hugging Face Deep Learning Containers with the new Hugging Face Inference Toolkit.
With the new Hugging Face Inference DLCs, you can deploy your trained models for inference with just one more line of code, or select any of the 10,000+ models publicly available on the 🤗 Hub and deploy them with Amazon SageMaker to create production-ready endpoints that scale seamlessly, with built-in monitoring and enterprise-level security.
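A minimal sketch of what this train-and-deploy workflow can look like with the SageMaker Python SDK is shown below; the script name, S3 paths, IAM role, instance types, and version pins are placeholders rather than values taken from this page.

```python
from sagemaker.huggingface import HuggingFace, HuggingFaceModel

role = "arn:aws:iam::<account-id>:role/<sagemaker-execution-role>"  # placeholder IAM role

# Train: the Hugging Face training DLC runs your script on a managed instance.
estimator = HuggingFace(
    entry_point="train.py",            # hypothetical training script
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.26",       # example DLC version pins
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"model_name_or_path": "bert-base-uncased", "epochs": 3},
)
estimator.fit({"train": "s3://<bucket>/train"})  # placeholder S3 training channel

# Deploy: wrap the trained artifact (or any Hub model) in the inference DLC.
model = HuggingFaceModel(
    model_data=estimator.model_data,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "Hugging Face on SageMaker"}))
```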
More information: AWS blog post, Community Forum |
https://huggingface.co/google | ALBERT release
The ALBERT release happened in two steps, each covering 4 checkpoints of different sizes. The first version is noted as "v1", the second as "v2". |
https://huggingface.co/Intel | Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers. |
https://huggingface.co/microsoft | Research interests
None defined yet.
Collections 1
SpeechT5
The SpeechT5 framework consists of a shared sequence-to-sequence backbone and six modal-specific (speech/text) pre/post-nets that together address several audio-related tasks (a usage sketch follows the model list below).
SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing
Paper • 2110.07205 • Published Oct 14, 2021 • 1
microsoft/speecht5_tts
Text-to-Speech • Updated Aug 25 • 53.2k • 273
👩🎤 SpeechT5 Speech Synthesis Demo
Running on T4 • 182
microsoft/speecht5_vc
Audio-to-Audio • Updated Mar 22 • 16.3k • 36
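As a rough illustration of how the microsoft/speecht5_tts checkpoint listed above can be used for speech synthesis with 🤗 Transformers; the speaker-embedding dataset and index below are only an example choice, not something specified on this page.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is SpeechT5 speaking.", return_tensors="pt")

# Speaker identity is conditioned on an x-vector; this dataset/index is an arbitrary example.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

# The text pre-net feeds the shared seq2seq backbone; the HiFi-GAN vocoder turns
# the predicted spectrogram into a waveform.
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speecht5_demo.wav", speech.numpy(), samplerate=16000)
```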
spaces 9
😻 HuggingGPT
Building on A10G • 2.03k
🎨 Visual Chatgpt
Runtime error • 719
🤖 ChatGPT Robotics
Build error • 55
🌍 Promptist
232
🐠 GODEL Demo
Build error • 57
🏢 Unicl Image Recognition Demo
Runtime error • 17
models 254
microsoft/phi-1
Text Generation • Updated 6 days ago • 7.07k • 91
microsoft/phi-1_5
Text Generation • Updated 6 days ago • 168k • 869
microsoft/cvt-13
Image Classification • Updated 16 days ago • 6.99k • 5
microsoft/prophetnet-large-uncased-squad-qg
Text2Text Generation • Updated 16 days ago • 462 • 6
microsoft/swin-tiny-patch4-window7-224
Image Classification • Updated 18 days ago • 14k • 17
microsoft/tapex-large-sql-execution
Table Question Answering • Updated 18 days ago • 4.45k • 11
microsoft/git-base-vatex
Text Generation • Updated 18 days ago • 1.05k • 1
microsoft/mpnet-base
Fill-Mask • Updated 18 days ago • 140k • 20
microsoft/xclip-base-patch16-zero-shot
Feature Extraction • Updated 21 days ago • 21.2k • 15
microsoft/swin-base-patch4-window7-224
Image Classification • Updated 23 days ago • 8.47k • 3
datasets 5
microsoft/LCC_python
Viewer • Updated Jun 21 • 10 • 1
microsoft/LCC_java
Viewer • Updated Jun 21 • 3 • 1
microsoft/LCC_csharp
Viewer • Updated Jun 21 • 9 • 2
microsoft/CLUES
Viewer • Updated Mar 25, 2022 • 2 • 3
microsoft/codexglue_method_generation
Preview • Updated Oct 28, 2021 • 6 |
https://huggingface.co/grammarly | Research interests
None defined yet.
models 6
grammarly/coedit-large
Text2Text Generation • Updated 14 days ago • 21.1k • 26
grammarly/pseudonymization-seq2seq
Text2Text Generation • Updated Aug 31 • 4
grammarly/coedit-xxl
Text2Text Generation • Updated Aug 19 • 157 • 10
grammarly/coedit-xl-composite
Text2Text Generation • Updated Aug 19 • 19 • 8
grammarly/coedit-xl
Text2Text Generation • Updated Aug 19 • 456 • 4
grammarly/detexd-roberta-base
Text Classification • Updated Jul 10 • 131 • 4
datasets 3
grammarly/pseudonymization-data
Preview • Updated Aug 23 • 9 • 1
grammarly/coedit
Viewer • Updated Aug 19 • 286 • 9
grammarly/detexd-benchmark
Viewer • Updated Jul 10 • 11 • 1 |
https://huggingface.co/Writer | Writer is a generative AI platform focused on advancing AI technology by solving the problems faced by businesses. We are making LLMs accessible to everyone by offering our Palmyra LLMs on Hugging Face and through our API. You can run these models in your own secure environment and fine-tune them for your needs while protecting your data.
spaces 3 |
https://huggingface.co/docs/transformers | 🤗 Transformers
State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
🤗 Transformers provides APIs and tools to easily download and train state-of-the-art pretrained models. Using pretrained models can reduce your compute costs and carbon footprint, and save you the time and resources required to train a model from scratch. These models support common tasks in different modalities, such as:
📝 Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
🖼️ Computer Vision: image classification, object detection, and segmentation.
🗣️ Audio: automatic speech recognition and audio classification.
🐙 Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
🤗 Transformers support framework interoperability between PyTorch, TensorFlow, and JAX. This provides the flexibility to use a different framework at each stage of a model’s life; train a model in three lines of code in one framework, and load it for inference in another. Models can also be exported to a format like ONNX and TorchScript for deployment in production environments.
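As a minimal sketch of the workflow described above (assuming PyTorch and TensorFlow are installed, and using distilbert-base-uncased-finetuned-sst-2-english purely as an example checkpoint):

```python
from transformers import pipeline, TFAutoModelForSequenceClassification

# Download a pretrained checkpoint and run inference through the pipeline API (PyTorch).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example checkpoint
)
print(classifier("Hugging Face makes machine learning easier."))

# The same checkpoint can be reloaded in TensorFlow; from_pt=True converts PyTorch weights.
tf_model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", from_pt=True
)
```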
Join the growing community on the Hub, forum, or Discord today!
Contents
The documentation is organized into five sections:
GET STARTED provides a quick tour of the library and installation instructions to get up and running.
TUTORIALS are a great place to start if you’re a beginner. This section will help you gain the basic skills you need to start using the library.
HOW-TO GUIDES show you how to achieve a specific goal, like finetuning a pretrained model for language modeling or how to write and share a custom model.
CONCEPTUAL GUIDES offers more discussion and explanation of the underlying concepts and ideas behind models, tasks, and the design philosophy of 🤗 Transformers.
API describes all classes and functions:
MAIN CLASSES details the most important classes like configuration, model, tokenizer, and pipeline.
MODELS details the classes and functions related to each model implemented in the library.
INTERNAL HELPERS details utility classes and functions used internally.
Supported models
ALBERT (from Google Research and the Toyota Technological Institute at Chicago) released with the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
ALIGN (from Google Research) released with the paper Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
AltCLIP (from BAAI) released with the paper AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
Audio Spectrogram Transformer (from MIT) released with the paper AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass.
Autoformer (from Tsinghua University) released with the paper Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
Bark (from Suno) released in the repository suno-ai/bark by Suno AI team.
BART (from Facebook) released with the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
BARThez (from École polytechnique) released with the paper BARThez: a Skilled Pretrained French Sequence-to-Sequence Model by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
BARTpho (from VinAI Research) released with the paper BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
BEiT (from Microsoft) released with the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, Furu Wei.
BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
BERT For Sequence Generation (from Google) released with the paper Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
BERTweet (from VinAI Research) released with the paper BERTweet: A pre-trained language model for English Tweets by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
BigBird-Pegasus (from Google Research) released with the paper Big Bird: Transformers for Longer Sequences by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
BigBird-RoBERTa (from Google Research) released with the paper Big Bird: Transformers for Longer Sequences by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
BioGpt (from Microsoft Research AI4Science) released with the paper BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
BiT (from Google AI) released with the paper Big Transfer (BiT): General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
Blenderbot (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
BlenderbotSmall (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
BLIP (from Salesforce) released with the paper BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
BLIP-2 (from Salesforce) released with the paper BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.
BLOOM (from BigScience workshop) released by the BigScience Workshop.
BORT (from Alexa) released with the paper Optimal Subarchitecture Extraction For BERT by Adrian de Wynter and Daniel J. Perry.
BridgeTower (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
BROS (from NAVER CLOVA) released with the paper BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents by Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park.
ByT5 (from Google Research) released with the paper ByT5: Towards a token-free future with pre-trained byte-to-byte models by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
CamemBERT (from Inria/Facebook/Sorbonne) released with the paper CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
CANINE (from Google Research) released with the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
Chinese-CLIP (from OFA-Sys) released with the paper Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
CLAP (from LAION-AI) released with the paper Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
CLIP (from OpenAI) released with the paper Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
CLIPSeg (from University of Göttingen) released with the paper Image Segmentation Using Text and Image Prompts by Timo Lüddecke and Alexander Ecker.
CodeGen (from Salesforce) released with the paper A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
CodeLlama (from MetaAI) released with the paper Code Llama: Open Foundation Models for Code by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
Conditional DETR (from Microsoft Research Asia) released with the paper Conditional DETR for Fast Training Convergence by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
ConvBERT (from YituTech) released with the paper ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
ConvNeXT (from Facebook AI) released with the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXTV2 (from Facebook AI) released with the paper ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
CPM (from Tsinghua University) released with the paper CPM: A Large-scale Generative Chinese Pre-trained Language Model by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
CPM-Ant (from OpenBMB) released by OpenBMB.
CTRL (from Salesforce) released with the paper CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong and Richard Socher.
CvT (from Microsoft) released with the paper CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
Data2Vec (from Facebook) released with the paper Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
DeBERTa (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
DeBERTa-v2 (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
Decision Transformer (from Berkeley/Facebook/Google) released with the paper Decision Transformer: Reinforcement Learning via Sequence Modeling by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
Deformable DETR (from SenseTime Research) released with the paper Deformable DETR: Deformable Transformers for End-to-End Object Detection by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
DeiT (from Facebook) released with the paper Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
DePlot (from Google AI) released with the paper DePlot: One-shot visual language reasoning by plot-to-table translation by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
DETA (from The University of Texas at Austin) released with the paper NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
DETR (from Facebook) released with the paper End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
DialoGPT (from Microsoft Research) released with the paper DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
DiNAT (from SHI Labs) released with the paper Dilated Neighborhood Attention Transformer by Ali Hassani and Humphrey Shi.
DINOv2 (from Meta AI) released with the paper DINOv2: Learning Robust Visual Features without Supervision by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
DistilBERT (from HuggingFace), released together with the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into DistilGPT2, RoBERTa into DistilRoBERTa, Multilingual BERT into DistilmBERT and a German version of DistilBERT.
DiT (from Microsoft Research) released with the paper DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
Donut (from NAVER), released together with the paper OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
DPR (from Facebook) released with the paper Dense Passage Retrieval for Open-Domain Question Answering by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
DPT (from Intel Labs) released with the paper Vision Transformers for Dense Prediction by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
EfficientFormer (from Snap Research) released with the paper EfficientFormer: Vision Transformers at MobileNet Speed by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
EfficientNet (from Google Brain) released with the paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan, Quoc V. Le.
ELECTRA (from Google Research/Stanford University) released with the paper ELECTRA: Pre-training text encoders as discriminators rather than generators by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
EnCodec (from Meta AI) released with the paper High Fidelity Neural Audio Compression by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
EncoderDecoder (from Google Research) released with the paper Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
ERNIE (from Baidu) released with the paper ERNIE: Enhanced Representation through Knowledge Integration by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
ErnieM (from Baidu) released with the paper ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
ESM (from Meta AI) are transformer protein language models. ESM-1b was released with the paper Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. ESM-1v was released with the paper Language models enable zero-shot prediction of the effects of mutations on protein function by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. ESM-2 and ESMFold were released with the paper Language models of protein sequences at the scale of evolution enable accurate structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
Falcon (from Technology Innovation Institute) by Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme.
FLAN-T5 (from Google AI) released in the repository google-research/t5x by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
FLAN-UL2 (from Google AI) released in the repository google-research/t5x by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
FlauBERT (from CNRS) released with the paper FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
FLAVA (from Facebook AI) released with the paper FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
FNet (from Google Research) released with the paper FNet: Mixing Tokens with Fourier Transforms by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
FocalNet (from Microsoft Research) released with the paper Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
Funnel Transformer (from CMU/Google Brain) released with the paper Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
GIT (from Microsoft Research) released with the paper GIT: A Generative Image-to-text Transformer for Vision and Language by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
GLPN (from KAIST) released with the paper Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
GPT (from OpenAI) released with the paper Improving Language Understanding by Generative Pre-Training by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
GPT Neo (from EleutherAI) released in the repository EleutherAI/gpt-neo by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
GPT NeoX (from EleutherAI) released with the paper GPT-NeoX-20B: An Open-Source Autoregressive Language Model by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
GPT NeoX Japanese (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
GPT-J (from EleutherAI) released in the repository kingoflolz/mesh-transformer-jax by Ben Wang and Aran Komatsuzaki.
GPT-Sw3 (from AI-Sweden) released with the paper Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
GPTBigCode (from BigCode) released with the paper SantaCoder: don’t reach for the stars! by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
GPTSAN-japanese released in the repository tanreinama/GPTSAN by Toshiyuki Sakamoto (tanreinama).
Graphormer (from Microsoft) released with the paper Do Transformers Really Perform Bad for Graph Representation? by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
GroupViT (from UCSD, NVIDIA) released with the paper GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
HerBERT (from Allegro.pl, AGH University of Science and Technology) released with the paper KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
Hubert (from Facebook) released with the paper HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
I-BERT (from Berkeley) released with the paper I-BERT: Integer-only BERT Quantization by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
IDEFICS (from HuggingFace) released with the paper OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh.
ImageGPT (from OpenAI) released with the paper Generative Pretraining from Pixels by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
Informer (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
InstructBLIP (from Salesforce) released with the paper InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
Jukebox (from OpenAI) released with the paper Jukebox: A Generative Model for Music by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
LayoutLM (from Microsoft Research Asia) released with the paper LayoutLM: Pre-training of Text and Layout for Document Image Understanding by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
LayoutLMv2 (from Microsoft Research Asia) released with the paper LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
LayoutLMv3 (from Microsoft Research Asia) released with the paper LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
LayoutXLM (from Microsoft Research Asia) released with the paper LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
LED (from AllenAI) released with the paper Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan.
LeViT (from Meta AI) released with the paper LeViT: A Vision Transformer in ConvNet’s Clothing for Faster Inference by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
LiLT (from South China University of Technology) released with the paper LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding by Jiapeng Wang, Lianwen Jin, Kai Ding.
LLaMA (from The FAIR team of Meta AI) released with the paper LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
Llama2 (from The FAIR team of Meta AI) released with the paper Llama2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
Longformer (from AllenAI) released with the paper Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan.
LongT5 (from Google AI) released with the paper LongT5: Efficient Text-To-Text Transformer for Long Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
LUKE (from Studio Ousia) released with the paper LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
LXMERT (from UNC Chapel Hill) released with the paper LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering by Hao Tan and Mohit Bansal.
M-CTC-T (from Facebook) released with the paper Pseudo-Labeling For Massively Multilingual Speech Recognition by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
M2M100 (from Facebook) released with the paper Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
MarianMT Machine translation models trained using OPUS data by Jörg Tiedemann. The Marian Framework is being developed by the Microsoft Translator Team.
MarkupLM (from Microsoft Research Asia) released with the paper MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
Mask2Former (from FAIR and UIUC) released with the paper Masked-attention Mask Transformer for Universal Image Segmentation by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
MaskFormer (from Meta and UIUC) released with the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
MatCha (from Google AI) released with the paper MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
mBART (from Facebook) released with the paper Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
mBART-50 (from Facebook) released with the paper Multilingual Translation with Extensible Multilingual Pretraining and Finetuning by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
MEGA (from Meta/USC/CMU/SJTU) released with the paper Mega: Moving Average Equipped Gated Attention by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
Megatron-BERT (from NVIDIA) released with the paper Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
Megatron-GPT2 (from NVIDIA) released with the paper Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
MGP-STR (from Alibaba Research) released with the paper Multi-Granularity Prediction for Scene Text Recognition by Peng Wang, Cheng Da, and Cong Yao.
Mistral (from Mistral AI) by The Mistral AI team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
mLUKE (from Studio Ousia) released with the paper mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
MMS (from Facebook) released with the paper Scaling Speech Technology to 1,000+ Languages by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
MobileBERT (from CMU/Google Brain) released with the paper MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
MobileNetV1 (from Google Inc.) released with the paper MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
MobileNetV2 (from Google Inc.) released with the paper MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
MobileViT (from Apple) released with the paper MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari.
MobileViTV2 (from Apple) released with the paper Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari.
MPNet (from Microsoft Research) released with the paper MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPT (from MosaicML) released with the repository llm-foundry by the MosaicML NLP Team.
MRA (from the University of Wisconsin - Madison) released with the paper Multi Resolution Analysis (MRA) for Approximate Self-Attention by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, Vikas Singh.
MT5 (from Google AI) released with the paper mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
MusicGen (from Meta) released with the paper Simple and Controllable Music Generation by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
MVP (from RUC AI Box) released with the paper MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
NAT (from SHI Labs) released with the paper Neighborhood Attention Transformer by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
Nezha (from Huawei Noah’s Ark Lab) released with the paper NEZHA: Neural Contextualized Representation for Chinese Language Understanding by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
NLLB (from Meta) released with the paper No Language Left Behind: Scaling Human-Centered Machine Translation by the NLLB team.
NLLB-MOE (from Meta) released with the paper No Language Left Behind: Scaling Human-Centered Machine Translation by the NLLB team.
Nougat (from Meta AI) released with the paper Nougat: Neural Optical Understanding for Academic Documents by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.
Nyströmformer (from the University of Wisconsin - Madison) released with the paper Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
OneFormer (from SHI Labs) released with the paper OneFormer: One Transformer to Rule Universal Image Segmentation by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
OpenLlama (from s-JoL) released in Open-Llama.
OPT (from Meta AI) released with the paper OPT: Open Pre-trained Transformer Language Models by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
OWL-ViT (from Google AI) released with the paper Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
Pegasus (from Google) released with the paper PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
PEGASUS-X (from Google) released with the paper Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao, and Peter J. Liu.
Perceiver IO (from Deepmind) released with the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
Persimmon (from ADEPT) released in a blog post by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.
PhoBERT (from VinAI Research) released with the paper PhoBERT: Pre-trained language models for Vietnamese by Dat Quoc Nguyen and Anh Tuan Nguyen.
Pix2Struct (from Google) released with the paper Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
PLBart (from UCLA NLP) released with the paper Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
PoolFormer (from Sea AI Labs) released with the paper MetaFormer is Actually What You Need for Vision by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
Pop2Piano released with the paper Pop2Piano: Pop Audio-based Piano Cover Generation by Jongho Choi and Kyogu Lee.
ProphetNet (from Microsoft Research) released with the paper ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
PVT (from Nanjing University, The University of Hong Kong etc.) released with the paper Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
QDQBert (from NVIDIA) released with the paper Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
RAG (from Facebook) released with the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
REALM (from Google Research) released with the paper REALM: Retrieval-Augmented Language Model Pre-Training by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
Reformer (from Google Research) released with the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
RegNet (from META Platforms) released with the paper Designing Network Design Spaces by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
RemBERT (from Google Research) released with the paper Rethinking embedding coupling in pre-trained language models by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
ResNet (from Microsoft Research) released with the paper Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
RoBERTa (from Facebook), released together with the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
RoBERTa-PreLayerNorm (from Facebook) released with the paper fairseq: A Fast, Extensible Toolkit for Sequence Modeling by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
RoCBert (from WeChatAI) released with the paper RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
RoFormer (from ZhuiyiTechnology), released together with the paper RoFormer: Enhanced Transformer with Rotary Position Embedding by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
RWKV (from Bo Peng), released on this repo by Bo Peng.
SegFormer (from NVIDIA) released with the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
Segment Anything (from Meta AI) released with the paper Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
SEW (from ASAPP) released with the paper Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
SEW-D (from ASAPP) released with the paper Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
SpeechT5 (from Microsoft Research) released with the paper SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
SpeechToTextTransformer (from Facebook), released together with the paper fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
SpeechToTextTransformer2 (from Facebook), released together with the paper Large-Scale Self- and Semi-Supervised Learning for Speech Translation by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
Splinter (from Tel Aviv University), released together with the paper Few-Shot Question Answering by Pretraining Span Selection by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
SqueezeBERT (from Berkeley) released with the paper SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
SwiftFormer (from MBZUAI) released with the paper SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
Swin Transformer (from Microsoft) released with the paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
Swin Transformer V2 (from Microsoft) released with the paper Swin Transformer V2: Scaling Up Capacity and Resolution by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
Swin2SR (from University of Würzburg) released with the paper Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
SwitchTransformers (from Google) released with the paper Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, Noam Shazeer.
T5 (from Google AI) released with the paper Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
T5v1.1 (from Google AI) released in the repository google-research/text-to-text-transfer-transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
Table Transformer (from Microsoft Research) released with the paper PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents by Brandon Smock, Rohith Pesala, Robin Abraham.
TAPAS (from Google AI) released with the paper TAPAS: Weakly Supervised Table Parsing via Pre-training by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
TAPEX (from Microsoft Research) released with the paper TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
Time Series Transformer (from HuggingFace).
TimeSformer (from Facebook) released with the paper Is Space-Time Attention All You Need for Video Understanding? by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
Trajectory Transformer (from the University of California at Berkeley) released with the paper Offline Reinforcement Learning as One Big Sequence Modeling Problem by Michael Janner, Qiyang Li, Sergey Levine
Transformer-XL (from Google/CMU) released with the paper Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
TrOCR (from Microsoft), released together with the paper TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
TVLT (from UNC Chapel Hill) released with the paper TVLT: Textless Vision-Language Transformer by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
UL2 (from Google Research) released with the paper Unifying Language Learning Paradigms by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
UMT5 (from Google Research) released with the paper UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
UniSpeech (from Microsoft Research) released with the paper UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
UniSpeechSat (from Microsoft Research) released with the paper UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
UPerNet (from Peking University) released with the paper Unified Perceptual Parsing for Scene Understanding by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
VAN (from Tsinghua University and Nankai University) released with the paper Visual Attention Network by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
VideoMAE (from Multimedia Computing Group, Nanjing University) released with the paper VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
ViLT (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Wonjae Kim, Bokyung Son, Ildoo Kim.
Vision Transformer (ViT) (from Google AI) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
VisualBERT (from UCLA NLP) released with the paper VisualBERT: A Simple and Performant Baseline for Vision and Language by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
ViT Hybrid (from Google AI) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
VitDet (from Meta AI) released with the paper Exploring Plain Vision Transformer Backbones for Object Detection by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He.
ViTMAE (from Meta AI) released with the paper Masked Autoencoders Are Scalable Vision Learners by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
ViTMatte (from HUST-VL) released with the paper ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
ViTMSN (from Meta AI) released with the paper Masked Siamese Networks for Label-Efficient Learning by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
VITS (from Kakao Enterprise) released with the paper Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech by Jaehyeon Kim, Jungil Kong, Juhee Son.
ViViT (from Google Research) released with the paper ViViT: A Video Vision Transformer by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
Wav2Vec2 (from Facebook AI) released with the paper wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
Wav2Vec2-Conformer (from Facebook AI) released with the paper FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
Wav2Vec2Phoneme (from Facebook AI) released with the paper Simple and Effective Zero-shot Cross-lingual Phoneme Recognition by Qiantong Xu, Alexei Baevski, Michael Auli.
WavLM (from Microsoft Research) released with the paper WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
Whisper (from OpenAI) released with the paper Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
X-CLIP (from Microsoft Research) released with the paper Expanding Language-Image Pretrained Models for General Video Recognition by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
X-MOD (from Meta AI) released with the paper Lifting the Curse of Multilinguality by Pre-training Modular Transformers by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.
XGLM (From Facebook AI) released with the paper Few-shot Learning with Multilingual Language Models by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
XLM (from Facebook) released together with the paper Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau.
XLM-ProphetNet (from Microsoft Research) released with the paper ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
XLM-RoBERTa (from Facebook AI), released together with the paper Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
XLM-RoBERTa-XL (from Facebook AI), released together with the paper Larger-Scale Transformers for Multilingual Masked Language Modeling by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
XLM-V (from Meta AI) released with the paper XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
XLNet (from Google/CMU) released with the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
XLS-R (from Facebook AI) released with the paper XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
XLSR-Wav2Vec2 (from Facebook AI) released with the paper Unsupervised Cross-Lingual Representation Learning For Speech Recognition by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
YOLOS (from Huazhong University of Science & Technology) released with the paper You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
YOSO (from the University of Wisconsin - Madison) released with the paper You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
Supported frameworks
The table below shows, for each of these models, whether they are currently supported in PyTorch, TensorFlow, and/or Jax (via Flax).
Model PyTorch support TensorFlow support Flax Support
ALBERT ✅ ✅ ✅
ALIGN ✅ ❌ ❌
AltCLIP ✅ ❌ ❌
Audio Spectrogram Transformer ✅ ❌ ❌
Autoformer ✅ ❌ ❌
Bark ✅ ❌ ❌
BART ✅ ✅ ✅
BEiT ✅ ❌ ✅
BERT ✅ ✅ ✅
Bert Generation ✅ ❌ ❌
BigBird ✅ ❌ ✅
BigBird-Pegasus ✅ ❌ ❌
BioGpt ✅ ❌ ❌
BiT ✅ ❌ ❌
Blenderbot ✅ ✅ ✅
BlenderbotSmall ✅ ✅ ✅
BLIP ✅ ✅ ❌
BLIP-2 ✅ ❌ ❌
BLOOM ✅ ❌ ✅
BridgeTower ✅ ❌ ❌
BROS ✅ ❌ ❌
CamemBERT ✅ ✅ ❌
CANINE ✅ ❌ ❌
Chinese-CLIP ✅ ❌ ❌
CLAP ✅ ❌ ❌
CLIP ✅ ✅ ✅
CLIPSeg ✅ ❌ ❌
CodeGen ✅ ❌ ❌
CodeLlama ✅ ❌ ❌
Conditional DETR ✅ ❌ ❌
ConvBERT ✅ ✅ ❌
ConvNeXT ✅ ✅ ❌
ConvNeXTV2 ✅ ❌ ❌
CPM-Ant ✅ ❌ ❌
CTRL ✅ ✅ ❌
CvT ✅ ✅ ❌
Data2VecAudio ✅ ❌ ❌
Data2VecText ✅ ❌ ❌
Data2VecVision ✅ ✅ ❌
DeBERTa ✅ ✅ ❌
DeBERTa-v2 ✅ ✅ ❌
Decision Transformer ✅ ❌ ❌
Deformable DETR ✅ ❌ ❌
DeiT ✅ ✅ ❌
DETA ✅ ❌ ❌
DETR ✅ ❌ ❌
DiNAT ✅ ❌ ❌
DINOv2 ✅ ❌ ❌
DistilBERT ✅ ✅ ✅
DonutSwin ✅ ❌ ❌
DPR ✅ ✅ ❌
DPT ✅ ❌ ❌
EfficientFormer ✅ ✅ ❌
EfficientNet ✅ ❌ ❌
ELECTRA ✅ ✅ ✅
EnCodec ✅ ❌ ❌
Encoder decoder ✅ ✅ ✅
ERNIE ✅ ❌ ❌
ErnieM ✅ ❌ ❌
ESM ✅ ✅ ❌
FairSeq Machine-Translation ✅ ❌ ❌
Falcon ✅ ❌ ❌
FlauBERT ✅ ✅ ❌
FLAVA ✅ ❌ ❌
FNet ✅ ❌ ❌
FocalNet ✅ ❌ ❌
Funnel Transformer ✅ ✅ ❌
GIT ✅ ❌ ❌
GLPN ✅ ❌ ❌
GPT Neo ✅ ❌ ✅
GPT NeoX ✅ ❌ ❌
GPT NeoX Japanese ✅ ❌ ❌
GPT-J ✅ ✅ ✅
GPT-Sw3 ✅ ✅ ✅
GPTBigCode ✅ ❌ ❌
GPTSAN-japanese ✅ ❌ ❌
Graphormer ✅ ❌ ❌
GroupViT ✅ ✅ ❌
Hubert ✅ ✅ ❌
I-BERT ✅ ❌ ❌
IDEFICS ✅ ❌ ❌
ImageGPT ✅ ❌ ❌
Informer ✅ ❌ ❌
InstructBLIP ✅ ❌ ❌
Jukebox ✅ ❌ ❌
LayoutLM ✅ ✅ ❌
LayoutLMv2 ✅ ❌ ❌
LayoutLMv3 ✅ ✅ ❌
LED ✅ ✅ ❌
LeViT ✅ ❌ ❌
LiLT ✅ ❌ ❌
LLaMA ✅ ❌ ❌
Longformer ✅ ✅ ❌
LongT5 ✅ ❌ ✅
LUKE ✅ ❌ ❌
LXMERT ✅ ✅ ❌
M-CTC-T ✅ ❌ ❌
M2M100 ✅ ❌ ❌
Marian ✅ ✅ ✅
MarkupLM ✅ ❌ ❌
Mask2Former ✅ ❌ ❌
MaskFormer ✅ ❌ ❌
mBART ✅ ✅ ✅
MEGA ✅ ❌ ❌
Megatron-BERT ✅ ❌ ❌
MGP-STR ✅ ❌ ❌
Mistral ✅ ❌ ❌
MobileBERT ✅ ✅ ❌
MobileNetV1 ✅ ❌ ❌
MobileNetV2 ✅ ❌ ❌
MobileViT ✅ ✅ ❌
MobileViTV2 ✅ ❌ ❌
MPNet ✅ ✅ ❌
MPT ✅ ❌ ❌
MRA ✅ ❌ ❌
MT5 ✅ ✅ ✅
MusicGen ✅ ❌ ❌
MVP ✅ ❌ ❌
NAT ✅ ❌ ❌
Nezha ✅ ❌ ❌
NLLB-MOE ✅ ❌ ❌
Nougat ✅ ✅ ✅
Nyströmformer ✅ ❌ ❌
OneFormer ✅ ❌ ❌
OpenAI GPT ✅ ✅ ❌
OpenAI GPT-2 ✅ ✅ ✅
OpenLlama ✅ ❌ ❌
OPT ✅ ✅ ✅
OWL-ViT ✅ ❌ ❌
Pegasus ✅ ✅ ✅
PEGASUS-X ✅ ❌ ❌
Perceiver ✅ ❌ ❌
Persimmon ✅ ❌ ❌
Pix2Struct ✅ ❌ ❌
PLBart ✅ ❌ ❌
PoolFormer ✅ ❌ ❌
Pop2Piano ✅ ❌ ❌
ProphetNet ✅ ❌ ❌
PVT ✅ ❌ ❌
QDQBert ✅ ❌ ❌
RAG ✅ ✅ ❌
REALM ✅ ❌ ❌
Reformer ✅ ❌ ❌
RegNet ✅ ✅ ✅
RemBERT ✅ ✅ ❌
ResNet ✅ ✅ ✅
RetriBERT ✅ ❌ ❌
RoBERTa ✅ ✅ ✅
RoBERTa-PreLayerNorm ✅ ✅ ✅
RoCBert ✅ ❌ ❌
RoFormer ✅ ✅ ✅
RWKV ✅ ❌ ❌
SAM ✅ ✅ ❌
SegFormer ✅ ✅ ❌
SEW ✅ ❌ ❌
SEW-D ✅ ❌ ❌
Speech Encoder decoder ✅ ❌ ✅
Speech2Text ✅ ✅ ❌
Speech2Text2 ❌ ❌ ❌
SpeechT5 ✅ ❌ ❌
Splinter ✅ ❌ ❌
SqueezeBERT ✅ ❌ ❌
SwiftFormer ✅ ❌ ❌
Swin Transformer ✅ ✅ ❌
Swin Transformer V2 ✅ ❌ ❌
Swin2SR ✅ ❌ ❌
SwitchTransformers ✅ ❌ ❌
T5 ✅ ✅ ✅
Table Transformer ✅ ❌ ❌
TAPAS ✅ ✅ ❌
Time Series Transformer ✅ ❌ ❌
TimeSformer ✅ ❌ ❌
Trajectory Transformer ✅ ❌ ❌
Transformer-XL ✅ ✅ ❌
TrOCR ✅ ❌ ❌
TVLT ✅ ❌ ❌
UMT5 ✅ ❌ ❌
UniSpeech ✅ ❌ ❌
UniSpeechSat ✅ ❌ ❌
UPerNet ✅ ❌ ❌
VAN ✅ ❌ ❌
VideoMAE ✅ ❌ ❌
ViLT ✅ ❌ ❌
Vision Encoder decoder ✅ ✅ ✅
VisionTextDualEncoder ✅ ✅ ✅
VisualBERT ✅ ❌ ❌
ViT ✅ ✅ ✅
ViT Hybrid ✅ ❌ ❌
VitDet ✅ ❌ ❌
ViTMAE ✅ ✅ ❌
ViTMatte ✅ ❌ ❌
ViTMSN ✅ ❌ ❌
VITS ✅ ❌ ❌
ViViT ✅ ❌ ❌
Wav2Vec2 ✅ ✅ ✅
Wav2Vec2-Conformer ✅ ❌ ❌
WavLM ✅ ❌ ❌
Whisper ✅ ✅ ✅
X-CLIP ✅ ❌ ❌
X-MOD ✅ ❌ ❌
XGLM ✅ ✅ ✅
XLM ✅ ✅ ❌
XLM-ProphetNet ✅ ❌ ❌
XLM-RoBERTa ✅ ✅ ✅
XLM-RoBERTa-XL ✅ ❌ ❌
XLNet ✅ ✅ ❌
YOLOS ✅ ❌ ❌
YOSO ✅ ❌ ❌ |
https://huggingface.co/docs/safetensors | Safetensors
Safetensors is a new, simple format for storing tensors safely (as opposed to pickle) while remaining fast (zero-copy). Safetensors is really fast 🚀.
Installation
with pip:
pip install safetensors
with conda:
conda install -c huggingface safetensors
Usage
Load tensors
from safetensors import safe_open

tensors = {}
with safe_open("model.safetensors", framework="pt", device=0) as f:
    for k in f.keys():
        tensors[k] = f.get_tensor(k)
Loading only part of the tensors (useful when running on multiple GPUs):
from safetensors import safe_open

tensors = {}
with safe_open("model.safetensors", framework="pt", device=0) as f:
    tensor_slice = f.get_slice("embedding")
    vocab_size, hidden_dim = tensor_slice.get_shape()
    tensor = tensor_slice[:, :hidden_dim]
Save tensors
import torch
from safetensors.torch import save_file

tensors = {
    "embedding": torch.zeros((2, 2)),
    "attention": torch.zeros((2, 3)),
}
save_file(tensors, "model.safetensors")
Format
Let’s say you have a safetensors file named model.safetensors; then model.safetensors will have the following internal format:
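In brief, the file starts with an 8-byte unsigned little-endian integer giving the size of a JSON header; the header maps each tensor name to its dtype, shape and data_offsets (plus an optional __metadata__ entry), and the raw tensor byte buffer follows. The snippet below is a minimal, unofficial sketch of inspecting that header with only the Python standard library; it assumes a model.safetensors file such as the one saved above.

import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file without loading any tensor data."""
    with open(path, "rb") as f:
        # First 8 bytes: unsigned 64-bit little-endian length of the JSON header.
        (header_size,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_size))

header = read_safetensors_header("model.safetensors")
for name, info in header.items():
    if name != "__metadata__":
        print(name, info["dtype"], info["shape"], info["data_offsets"])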
Featured Projects
Safetensors is being used widely at leading AI enterprises, such as Hugging Face, EleutherAI, and StabilityAI. Here is a non-exhaustive list of projects that are using safetensors:
huggingface/transformers
AUTOMATIC1111/stable-diffusion-webui
Llama-cpp
microsoft/TaskMatrix
hpcaitech/ColossalAI
huggingface/pytorch-image-models
CivitAI
huggingface/diffusers
coreylowman/dfdx
invoke-ai/InvokeAI
oobabooga/text-generation-webui
Sanster/lama-cleaner
PaddlePaddle/PaddleNLP
AIGC-Audio/AudioGPT
brycedrennan/imaginAIry
comfyanonymous/ComfyUI
LianjiaTech/BELLE
alvarobartt/safejax
MaartenGr/BERTopic
LaurentMazare/tch-rs
chainyo/tensorshare |
https://huggingface.co/docs/diffusers | Diffusers
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions.
The library has three main components:
State-of-the-art diffusion pipelines for inference with just a few lines of code.
Interchangeable noise schedulers for balancing trade-offs between generation speed and quality.
Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems.
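As a quick illustration of how these three components fit together, the sketch below loads a pretrained text-to-image pipeline, swaps in a different scheduler, and generates an image. It assumes diffusers and torch are installed and a CUDA GPU is available; the model id and prompt are only examples.

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Load a pretrained pipeline (weights are downloaded on first use).
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Schedulers are interchangeable: trade generation speed for quality without retraining.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to("cuda")

image = pipeline("An astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")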
Supported pipelines
Pipeline Paper/Repository Tasks
alt_diffusion AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities Image-to-Image Text-Guided Generation
audio_diffusion Audio Diffusion Unconditional Audio Generation
controlnet Adding Conditional Control to Text-to-Image Diffusion Models Image-to-Image Text-Guided Generation
cycle_diffusion Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance Image-to-Image Text-Guided Generation
dance_diffusion Dance Diffusion Unconditional Audio Generation
ddpm Denoising Diffusion Probabilistic Models Unconditional Image Generation
ddim Denoising Diffusion Implicit Models Unconditional Image Generation
if IF Image Generation
if_img2img IF Image-to-Image Generation
if_inpainting IF Image-to-Image Generation
latent_diffusion High-Resolution Image Synthesis with Latent Diffusion Models Text-to-Image Generation
latent_diffusion High-Resolution Image Synthesis with Latent Diffusion Models Super Resolution Image-to-Image
latent_diffusion_uncond High-Resolution Image Synthesis with Latent Diffusion Models Unconditional Image Generation
paint_by_example Paint by Example: Exemplar-based Image Editing with Diffusion Models Image-Guided Image Inpainting
pndm Pseudo Numerical Methods for Diffusion Models on Manifolds Unconditional Image Generation
score_sde_ve Score-Based Generative Modeling through Stochastic Differential Equations Unconditional Image Generation
score_sde_vp Score-Based Generative Modeling through Stochastic Differential Equations Unconditional Image Generation
semantic_stable_diffusion Semantic Guidance Text-Guided Generation
stable_diffusion_adapter T2I-Adapter Image-to-Image Text-Guided Generation
stable_diffusion_text2img Stable Diffusion Text-to-Image Generation
stable_diffusion_img2img Stable Diffusion Image-to-Image Text-Guided Generation
stable_diffusion_inpaint Stable Diffusion Text-Guided Image Inpainting
stable_diffusion_panorama MultiDiffusion Text-to-Panorama Generation
stable_diffusion_pix2pix InstructPix2Pix: Learning to Follow Image Editing Instructions Text-Guided Image Editing
stable_diffusion_pix2pix_zero Zero-shot Image-to-Image Translation Text-Guided Image Editing
stable_diffusion_attend_and_excite Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models Text-to-Image Generation
stable_diffusion_self_attention_guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance Text-to-Image Generation Unconditional Image Generation
stable_diffusion_image_variation Stable Diffusion Image Variations Image-to-Image Generation
stable_diffusion_latent_upscale Stable Diffusion Latent Upscaler Text-Guided Super Resolution Image-to-Image
stable_diffusion_model_editing Editing Implicit Assumptions in Text-to-Image Diffusion Models Text-to-Image Model Editing
stable_diffusion_2 Stable Diffusion 2 Text-to-Image Generation
stable_diffusion_2 Stable Diffusion 2 Text-Guided Image Inpainting
stable_diffusion_2 Depth-Conditional Stable Diffusion Depth-to-Image Generation
stable_diffusion_2 Stable Diffusion 2 Text-Guided Super Resolution Image-to-Image
stable_diffusion_safe Safe Stable Diffusion Text-Guided Generation
stable_unclip Stable unCLIP Text-to-Image Generation
stable_unclip Stable unCLIP Image-to-Image Text-Guided Generation
stochastic_karras_ve Elucidating the Design Space of Diffusion-Based Generative Models Unconditional Image Generation
text_to_video_sd Modelscope’s Text-to-video-synthesis Model in Open Domain Text-to-Video Generation
unclip Hierarchical Text-Conditional Image Generation with CLIP Latents(implementation by kakaobrain) Text-to-Image Generation
versatile_diffusion Versatile Diffusion: Text, Images and Variations All in One Diffusion Model Text-to-Image Generation
versatile_diffusion Versatile Diffusion: Text, Images and Variations All in One Diffusion Model Image Variations Generation
versatile_diffusion Versatile Diffusion: Text, Images and Variations All in One Diffusion Model Dual Image and Text Guided Generation
vq_diffusion Vector Quantized Diffusion Model for Text-to-Image Synthesis Text-to-Image Generation
stable_diffusion_ldm3d LDM3D: Latent Diffusion Model for 3D Text to Image and Depth Generation |
https://huggingface.co/docs/huggingface_hub | 🤗 Hub client library
The huggingface_hub library allows you to interact with the Hugging Face Hub, a machine learning platform for creators and collaborators. Discover pre-trained models and datasets for your projects or play with the hundreds of machine learning apps hosted on the Hub. You can also create and share your own models and datasets with the community. The huggingface_hub library provides a simple way to do all these things with Python.
Read the quick start guide to get up and running with the huggingface_hub library. You will learn how to download files from the Hub, create a repository, and upload files to the Hub. Keep reading to learn more about how to manage your repositories on the 🤗 Hub, how to interact in discussions or even how to access the Inference API.
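For illustration, here is a minimal sketch of those three operations with huggingface_hub. The repository name "your-username/my-test-model" is a placeholder, and creating or uploading to a repo assumes you are authenticated (for example via huggingface-cli login).

from huggingface_hub import hf_hub_download, create_repo, upload_file

# Download a single file from a public model repo (cached locally).
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)

# Create a (private) repository and upload a local file to it.
create_repo("your-username/my-test-model", private=True, exist_ok=True)
upload_file(
    path_or_fileobj="README.md",   # a local file, assumed to exist
    path_in_repo="README.md",
    repo_id="your-username/my-test-model",
)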
Contribute
All contributions to the huggingface_hub are welcomed and equally valued! 🤗 Besides adding or fixing existing issues in the code, you can also help improve the documentation by making sure it is accurate and up-to-date, help answer questions on issues, and request new features you think will improve the library. Take a look at the contribution guide to learn more about how to submit a new issue or feature request, how to submit a pull request, and how to test your contributions to make sure everything works as expected.
Contributors should also be respectful of our code of conduct to create an inclusive and welcoming collaborative space for everyone. |
https://huggingface.co/docs/tokenizers | Tokenizers
Fast State-of-the-art tokenizers, optimized for both research and production
🤗 Tokenizers provides an implementation of today’s most used tokenizers, with a focus on performance and versatility. These tokenizers are also used in 🤗 Transformers.
Main features:
Train new vocabularies and tokenize, using today’s most used tokenizers.
Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server’s CPU.
Easy to use, but also extremely versatile.
Designed for both research and production.
Full alignment tracking. Even with destructive normalization, it’s always possible to get the part of the original sentence that corresponds to any token.
Does all the pre-processing: Truncation, Padding, add the special tokens your model needs. |
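A minimal sketch of training a BPE tokenizer from scratch and encoding a sentence, assuming a local plain-text corpus at data/corpus.txt (a hypothetical path):

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build an empty BPE tokenizer and train it on raw text files.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["data/corpus.txt"], trainer=trainer)

# Encode a sentence; offsets map every token back to the original text.
encoding = tokenizer.encode("Hello, y'all! How are you?")
print(encoding.tokens)
print(encoding.offsets)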
https://huggingface.co/docs/transformers.js | Transformers.js
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
Transformers.js is designed to be functionally equivalent to Hugging Face’s transformers python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:
📝 Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
🖼️ Computer Vision: image classification, object detection, and segmentation.
🗣️ Audio: automatic speech recognition and audio classification.
🐙 Multimodal: zero-shot image classification.
Transformers.js uses ONNX Runtime to run models in the browser. The best part is that you can easily convert your pretrained PyTorch, TensorFlow, or JAX models to ONNX using 🤗 Optimum.
For more information, check out the full documentation.
Quick tour
It’s super simple to translate from existing code! Just like the python library, we support the pipeline API. Pipelines group together a pretrained model with preprocessing of inputs and postprocessing of outputs, making it the easiest way to run models with the library.
Python (original):
from transformers import pipeline

pipe = pipeline('sentiment-analysis')
out = pipe('I love transformers!')

Javascript (ours):
import { pipeline } from '@xenova/transformers';

let pipe = await pipeline('sentiment-analysis');
let out = await pipe('I love transformers!');
You can also use a different model by specifying the model id or path as the second argument to the pipeline function. For example:
let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
Contents
The documentation is organized into 4 sections:
GET STARTED provides a quick tour of the library and installation instructions to get up and running.
TUTORIALS are a great place to start if you’re a beginner! We also include sample applications for you to play around with!
DEVELOPER GUIDES show you how to use the library to achieve a specific goal.
API REFERENCE describes all classes and functions, as well as their available parameters and types.
Supported tasks/models
Here is the list of all tasks and architectures currently supported by Transformers.js. If you don’t see your task/model listed here or it is not yet supported, feel free to open up a feature request here.
To find compatible models on the Hub, select the “transformers.js” library tag in the filter menu (or visit this link). You can refine your search by selecting the task you’re interested in (e.g., text-classification).
Tasks
Natural Language Processing
Task ID Description Supported?
Conversational conversational Generating conversational text that is relevant, coherent and knowledgeable given a prompt. ❌
Fill-Mask fill-mask Masking some of the words in a sentence and predicting which words should replace those masks. ✅ (docs) (models)
Question Answering question-answering Retrieve the answer to a question from a given text. ✅ (docs) (models)
Sentence Similarity sentence-similarity Determining how similar two texts are. ✅ (docs) (models)
Summarization summarization Producing a shorter version of a document while preserving its important information. ✅ (docs) (models)
Table Question Answering table-question-answering Answering a question about information from a given table. ❌
Text Classification text-classification or sentiment-analysis Assigning a label or class to a given text. ✅ (docs) (models)
Text Generation text-generation Producing new text by predicting the next word in a sequence. ✅ (docs) (models)
Text-to-text Generation text2text-generation Converting one text sequence into another text sequence. ✅ (docs) (models)
Token Classification token-classification or ner Assigning a label to each token in a text. ✅ (docs) (models)
Translation translation Converting text from one language to another. ✅ (docs) (models)
Zero-Shot Classification zero-shot-classification Classifying text into classes that are unseen during training. ✅ (docs) (models)
Vision
Task ID Description Supported?
Depth Estimation depth-estimation Predicting the depth of objects present in an image. ❌
Image Classification image-classification Assigning a label or class to an entire image. ✅ (docs) (models)
Image Segmentation image-segmentation Divides an image into segments where each pixel is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation. ✅ (docs) (models)
Image-to-Image image-to-image Transforming a source image to match the characteristics of a target image or a target image domain. ❌
Mask Generation mask-generation Generate masks for the objects in an image. ❌
Object Detection object-detection Identify objects of certain defined classes within an image. ✅ (docs) (models)
Video Classification n/a Assigning a label or class to an entire video. ❌
Unconditional Image Generation n/a Generating images with no condition in any context (like a prompt text or another image). ❌
Audio
Task ID Description Supported?
Audio Classification audio-classification Assigning a label or class to a given audio. ✅ (docs) (models)
Audio-to-Audio n/a Generating audio from an input audio source. ❌
Automatic Speech Recognition automatic-speech-recognition Transcribing a given audio into text. ✅ (docs) (models)
Text-to-Speech n/a Generating natural-sounding speech given text input. ❌
Tabular
Task ID Description Supported?
Tabular Classification n/a Classifying a target category (a group) based on a set of attributes. ❌
Tabular Regression n/a Predicting a numerical value given a set of attributes. ❌
Multimodal
Task ID Description Supported?
Document Question Answering document-question-answering Answering questions on document images. ✅ (docs) (models)
Feature Extraction feature-extraction Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. ✅ (docs) (models)
Image-to-Text image-to-text Output text from a given image. ✅ (docs) (models)
Text-to-Image text-to-image Generates images from input text. ❌
Visual Question Answering visual-question-answering Answering open-ended questions based on an image. ❌
Zero-Shot Image Classification zero-shot-image-classification Classifying images into classes that are unseen during training. ✅ (docs) (models)
Reinforcement Learning
Task ID Description Supported?
Reinforcement Learning n/a Learning from actions by interacting with an environment through trial and error and receiving rewards (negative or positive) as feedback. ❌
Models
ALBERT (from Google Research and the Toyota Technological Institute at Chicago) released with the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
BART (from Facebook) released with the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
BEiT (from Microsoft) released with the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, Furu Wei.
BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
Blenderbot (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
BlenderbotSmall (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
BLOOM (from BigScience workshop) released by the BigScience Workshop.
CamemBERT (from Inria/Facebook/Sorbonne) released with the paper CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
CLIP (from OpenAI) released with the paper Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
CodeGen (from Salesforce) released with the paper A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
CodeLlama (from MetaAI) released with the paper Code Llama: Open Foundation Models for Code by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
DeBERTa (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
DeBERTa-v2 (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
DeiT (from Facebook) released with the paper Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
DETR (from Facebook) released with the paper End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
DistilBERT (from HuggingFace), released together with the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into DistilGPT2, RoBERTa into DistilRoBERTa, Multilingual BERT into DistilmBERT and a German version of DistilBERT.
Donut (from NAVER), released together with the paper OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
FLAN-T5 (from Google AI) released in the repository google-research/t5x by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
GPT Neo (from EleutherAI) released in the repository EleutherAI/gpt-neo by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
GPT NeoX (from EleutherAI) released with the paper GPT-NeoX-20B: An Open-Source Autoregressive Language Model by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
GPT-J (from EleutherAI) released in the repository kingoflolz/mesh-transformer-jax by Ben Wang and Aran Komatsuzaki.
GPTBigCode (from BigCode) released with the paper SantaCoder: don’t reach for the stars! by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
HerBERT (from Allegro.pl, AGH University of Science and Technology) released with the paper KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
LongT5 (from Google AI) released with the paper LongT5: Efficient Text-To-Text Transformer for Long Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
LLaMA (from The FAIR team of Meta AI) released with the paper LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
Llama2 (from The FAIR team of Meta AI) released with the paper Llama2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
M2M100 (from Facebook) released with the paper Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
MarianMT Machine translation models trained using OPUS data by Jörg Tiedemann. The Marian Framework is being developed by the Microsoft Translator Team.
mBART (from Facebook) released with the paper Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
mBART-50 (from Facebook) released with the paper Multilingual Translation with Extensible Multilingual Pretraining and Finetuning by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
MMS (from Facebook) released with the paper Scaling Speech Technology to 1,000+ Languages by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
MobileBERT (from CMU/Google Brain) released with the paper MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
MobileViT (from Apple) released with the paper MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari.
MPNet (from Microsoft Research) released with the paper MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPT (from MosaicML) released with the repository llm-foundry by the MosaicML NLP Team.
MT5 (from Google AI) released with the paper mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
NLLB (from Meta) released with the paper No Language Left Behind: Scaling Human-Centered Machine Translation by the NLLB team.
OPT (from Meta AI) released with the paper OPT: Open Pre-trained Transformer Language Models by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
ResNet (from Microsoft Research) released with the paper Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
RoBERTa (from Facebook), released together with the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
SqueezeBERT (from Berkeley) released with the paper SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
Swin Transformer (from Microsoft) released with the paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
T5 (from Google AI) released with the paper Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
T5v1.1 (from Google AI) released in the repository google-research/text-to-text-transfer-transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
Vision Transformer (ViT) (from Google AI) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
Wav2Vec2 (from Facebook AI) released with the paper wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
WavLM (from Microsoft Research) released with the paper WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
Whisper (from OpenAI) released with the paper Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
XLM (from Facebook) released together with the paper Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau.
XLM-RoBERTa (from Facebook AI), released together with the paper Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
YOLOS (from Huazhong University of Science & Technology) released with the paper You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. |
https://huggingface.co/docs/timm | timm
timm is a library containing SOTA computer vision models, layers, utilities, optimizers, schedulers, data-loaders, augmentations, and training/evaluation scripts.
It comes packaged with >700 pretrained models, and is designed to be flexible and easy to use.
Read the quick start guide to get up and running with the timm library. You will learn how to load, discover, and use pretrained models included in the library. |
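For example, a minimal sketch of discovering and loading a pretrained model with timm (assuming timm and torch are installed):

import timm
import torch

# List a few architectures matching a pattern, then create one with pretrained weights.
print(timm.list_models("resnet*")[:5])
model = timm.create_model("resnet50", pretrained=True)
model.eval()

# Forward pass on a dummy batch of one 3x224x224 image.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # e.g. torch.Size([1, 1000]) for an ImageNet-1k classifier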
https://huggingface.co/docs/peft | PEFT
🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model’s parameters. PEFT methods fine-tune only a small number of (extra) model parameters, which significantly decreases computational and storage costs, since full fine-tuning of large-scale PLMs is prohibitively costly. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
PEFT is seamlessly integrated with 🤗 Accelerate for large-scale models leveraging DeepSpeed and Big Model Inference.
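As a minimal sketch of the workflow (closely following the PEFT quickstart), the snippet below wraps a pretrained seq2seq model with LoRA adapters so that only a small fraction of the parameters is trainable; the model id and hyperparameters are examples, not recommendations.

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

# Configure LoRA for a sequence-to-sequence task; only the adapter weights will be trained.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts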
Supported methods
LoRA: LoRA: Low-Rank Adaptation of Large Language Models
Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation, P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
P-Tuning: GPT Understands, Too
Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
IA3: Infused Adapter by Inhibiting and Amplifying Inner Activations
Supported models
The tables provided below list the PEFT methods and models supported for each task. To apply a particular PEFT method for a task, please refer to the corresponding Task guides.
Causal Language Modeling
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
GPT-2 ✅ ✅ ✅ ✅ ✅
Bloom ✅ ✅ ✅ ✅ ✅
OPT ✅ ✅ ✅ ✅ ✅
GPT-Neo ✅ ✅ ✅ ✅ ✅
GPT-J ✅ ✅ ✅ ✅ ✅
GPT-NeoX-20B ✅ ✅ ✅ ✅ ✅
LLaMA ✅ ✅ ✅ ✅ ✅
ChatGLM ✅ ✅ ✅ ✅ ✅
Conditional Generation
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
T5 ✅ ✅ ✅ ✅ ✅
BART ✅ ✅ ✅ ✅ ✅
Sequence Classification
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
BERT ✅ ✅ ✅ ✅ ✅
RoBERTa ✅ ✅ ✅ ✅ ✅
GPT-2 ✅ ✅ ✅ ✅
Bloom ✅ ✅ ✅ ✅
OPT ✅ ✅ ✅ ✅
GPT-Neo ✅ ✅ ✅ ✅
GPT-J ✅ ✅ ✅ ✅
Deberta ✅ ✅ ✅
Deberta-v2 ✅ ✅ ✅
Token Classification
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
BERT ✅ ✅
RoBERTa ✅ ✅
GPT-2 ✅ ✅
Bloom ✅ ✅
OPT ✅ ✅
GPT-Neo ✅ ✅
GPT-J ✅ ✅
Deberta ✅
Deberta-v2 ✅
Text-to-Image Generation
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
Stable Diffusion ✅
Image Classification
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
ViT ✅
Swin ✅
Image to text (Multi-modal models)
We have tested LoRA for ViT and Swin for fine-tuning on image classification. However, it should be possible to use LoRA for any ViT-based model from 🤗 Transformers. Check out the Image classification task guide to learn more. If you run into problems, please open an issue.
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
Blip-2 ✅
Semantic Segmentation
As with image-to-text models, you should be able to apply LoRA to any of the segmentation models. It’s worth noting that we haven’t tested this with every architecture yet. Therefore, if you come across any issues, kindly create an issue report.
Model LoRA Prefix Tuning P-Tuning Prompt Tuning IA3
SegFormer ✅ |
https://huggingface.co/tasks | Tasks
Hugging Face is the home for all Machine Learning tasks. Here you can find what you need to get started with a task: demos, use cases, models, datasets, and more!
Computer Vision
Natural Language Processing
Audio
Tabular
Multimodal
Reinforcement Learning |
https://huggingface.co/support | Thomas Wolf, creator of Transformers
Ross Wightman, creator of timm (SOTA computer vision)
Colin Raffel, first author of T5 from Google
Morgan Funtowicz, contributor to ONNX
Abhishek Thakur, worldwide expert on auto-ML
Victor Sanh, author of DistilBERT
Anthony Moi, creator of Tokenizers
Julien Simon, author of “Learn Amazon SageMaker”
Meg Mitchell, worldwide expert on bias and ethics in AI
+40 more experts |
https://huggingface.co/huggingface | 👋 Hi!
We are on a mission to democratize good machine learning, one commit at a time.
If that sounds like something you should be doing, why don't you join us!
For press enquiries, you can ✉️ contact our team here. |
https://huggingface.co/brand | Hugging Face · Brand assets
HF Logos
Available in .svg, .png, and .ai formats.
HF Colors
HF Bio
Hugging Face is the collaboration platform for the machine learning community. The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML. HF empowers the next generation of machine learning engineers, scientists, and end users to learn, collaborate and share their work to build an open and ethical AI future together. With the fast-growing community, some of the most used open-source ML libraries and tools, and a talented science team exploring the edge of tech, Hugging Face is at the heart of the AI revolution. |
https://huggingface.co/inference-endpoints | Machine Learning At Your Service
With 🤗 Inference Endpoints, easily deploy Transformers, Diffusers or any model on dedicated, fully managed infrastructure. Keep your costs low with our secure, compliant and flexible production solution.
Production Inference Made Easy
Deploy models on dedicated and secure infrastructure without dealing with containers and GPUs
Deploy models with just a few clicks
Turn your models into production-ready APIs without having to deal with infrastructure or MLOps.
Keep your production costs down
Leverage a fully-managed production solution for inference and just pay as you go for the raw compute you use.
Enterprise Security
Deploy models into secure offline endpoints only accessible via direct connection to your Virtual Private Cloud (VPCs).
How It Works
Deploy models for production in a few simple steps
1. Select your model
Select the model you want to deploy. You can deploy a custom model or any of the 60,000+ Transformers, Diffusers or Sentence Transformers models available on the 🤗 Hub for NLP, computer vision, or speech tasks.
2. Choose your cloud
Pick your cloud and select a region close to your data in compliance with your requirements (e.g. Europe, North America or Asia Pacific).
3. Select your security level
Protected Endpoints are accessible from the Internet and require valid authentication.
Public Endpoints are accessible from the Internet and do not require authentication.
Private Endpoints are only available through an intra-region secured AWS or Azure PrivateLink direct connection to a VPC and are not accessible from the Internet.
4. Create and manage your endpoint
Click create and your new endpoint is ready in a couple of minutes. Define autoscaling, access logs and monitoring, set custom metrics routes, manage endpoints programmatically with API/CLI, and rollback models - all super easily.
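For illustration, once an endpoint is running, a Protected endpoint can be queried over HTTPS with a bearer token. The sketch below is a minimal, generic example; the endpoint URL, token, and payload are placeholders to replace with your own values.

import requests

ENDPOINT_URL = "https://<your-endpoint-url>"   # copy this from the endpoint's overview page
HF_TOKEN = "hf_..."                            # a token with access to the endpoint

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "application/json"},
    json={"inputs": "I love this new deployment workflow!"},
)
print(response.json())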
A Better Way to Go to Production
Scale your machine learning while keeping your costs low
Before
🤼
Struggle with MLOps and building the right infrastructure for production.
🐢
Wasted time deploying models slows down ML development.
😓
Deploying models in a compliant and secure way is difficult & time-consuming.
❌
87% of data science projects never make it into production.
After
🤝
Don't worry about infrastructure or MLOps, spend more time building models.
🚀
A fully-managed solution for model inference accelerates your ML roadmap.
🔒
Easily deploy your models in a secure and compliant environment.
✅
Seamless model deployment bridges the gap from research to production.
Customer Success Stories
Learn how leading AI teams use 🤗 Inference Endpoints to deploy models
Endpoints for Music
Customer
Musixmatch is the world’s leading music data company
Use Case
Custom text embeddings generation pipeline
Models Deployed
Distilbert-base-uncased-finetuned-sst-2-english
facebook/wav2vec2-base-960h
Custom model based on sentence transformers
Pricing
Pay for CPU & GPU compute resources
🛠Self-serve
🏢Enterprise
Inference Endpoints
Pay for compute resources uptime by the minute, billed monthly.
As low as $0.06 per CPU core/hr and $0.6 per GPU/hr.
Email Support
Email support and no SLAs.
Deploy your first model
Inference Endpoints
Custom pricing based on volume commit and annual contracts.
Dedicated Support & SLAs
Dedicated support, 24/7 SLAs, and uptime guarantees.
Request a Quote
Start now with 🤗 Inference Endpoints!
Deploy models in a few clicks 🤯
Pay for compute resources uptime, by the minute. |
https://huggingface.co/terms-of-service | Terms of Service
🗓 Effective Date: September 15, 2022
Thanks for using Hugging Face and being part of our awesome community! 🤗
We drafted the following Terms of Service to make your user experience as smooth, private and secure as possible. We are very much open to feedback - contact us at legal@huggingface.co with any question or concern!
Please read these Terms carefully as they contain important information about what we do and do not offer, and what you can and cannot do.
Whenever you want to use or purchase the Services that we provide at https://huggingface.co and related sites (the "Website"), these Terms of Service, together with our Supplemental Terms, notices and policies available at https://huggingface.co, and/or any other binding document that we provide and/or that you sign (the “Terms” or the “Agreement”) will apply to you.
In other words, these Terms are a binding agreement between us, Hugging Face, Inc. a Delaware corporation ("Hugging Face", "Company", "us", "we"), and You, whether you are a user ("User", "You") or a customer ("Customer", "you"). You should also carefully review all of our other policies available on our Website, including our Privacy Policy.
By accessing, using or purchasing the Services, you consent to all of these Terms and policies. So, if you do not agree with any of those, please do not access, use or purchase the Services.
We may change or update the Terms from time to time. Changes will be effective 10 days following posting on the Website. If you continue using the Services 10 days following such posting, that means you accept those changes.
We may also post and update supplemental terms for specific Services ("Supplemental Terms"), and such Supplemental Terms will also apply to you.
📚 A few definitions
Let's make sure we speak the same language!
"Account" is the account that you, or your entity, will create on the Website to access, use or purchase our Services. It must be secured by a strong password!
"Agreement" or "Terms" refer to all of the terms and conditions that apply between us. They include these Terms, Supplemental Terms, notices and policies available at https://huggingface.co, and/or any other binding document that we provide and/or that you sign, including but not limited to an Order Form, a Scope of Work or a Master Services Agreement.
"Inference API" refers to hosted services available on https://huggingface.co that let you, individuals, companies or organizations, run inference via application programming interface on machine learning models publicly or privately hosted on Hugging Face's model hub.
"Content" refers to any material posted, displayed, or accessed on our Website or Hub, including but not limited to code, data, text, graphics, images, applications, or software you, we, or any third party provide or make available.
"Dataset" refers to a structured collection of data samples used to train machine learning Models.
"Effective Date" refers to the last date of signature of the Agreement, or any other binding document.
"Inference Endpoint Service" refers to the Hugging Face Hub service through which the Customer can create, edit, manage and delete Managed Endpoints.
"Hugging Face" refers to Hugging Face Inc., which may perform its obligations through its affiliates, directors, subsidiaries, contractors, licensors, officers, agents and/or employees.
"Hugging Face Hub", or "Hub" refers to the hosting platform where Users can build, benchmark, share, version and deploy Repositories, which may include Models, Datasets and Machine Learning Applications.
"Hugging Face Open-Source Libraries" refers to the Hugging Face open-source software projects available at https://github.com/huggingface/, including Transformers, Datasets and Tokenizers.
"Infinity Service" refers to the containerized solution to deploy end-to-end optimized inference pipelines for state of the art Transformers Models, available at https://huggingface.co/infinity.
"Machine Learning Application" refers to a repository hosted on the Hub that allows a User to showcase Machine Learning experiments.
"Model" refers to a pre-trained machine learning model including algorithms and weights, which can be run to make predictions.
"Order Form" refers to the document shared by Hugging Face to the Customer describing the quantity of services ordered by the Customer and the fees payable for such services. Additional Order Forms may be negotiated between and executed by the Parties, and shall be incorporated into the Agreement.
"Organization" refers to a workspace representing a legal entity and/or several Users. A User can be part of multiple organizations.
"Premium Support" refers to qualified information or any other materials provided by Hugging Face via email or any other instant messaging or communication service to the Customer to address the Customer’s questions on the use and optimization of the Hugging Face Open-Source Libraries, Hugging Face or Customer Models.
"Repository" refers to a data structure which contains all of the project files and the entire revision history. A Repository may be public (i.e. anyone on the internet can see it, but only you or members of your organization can make changes) or private (i.e. only you or members of your organization can see and make changes to the repository).
"Services" refer to the products and/or services we offer or provide, and that you access, use or purchase. Services may include limited licenses or subscriptions to access or use certain offerings in accordance with these Terms, including use of Models, Datasets, Hugging Face Open-Sources Libraries, the Inference API, AutoTrain, Expert Acceleration Program, Infinity or other Content. Reference to "purchases" and/or "sales" mean a limited right to access and use a Service (not a transfer or any ownership right, title, or interest) in accordance with these Terms.
"User" refers to the individual person, company or organization that accesses, receives, or uses the Services. That's you!
👩💻 Your Use of the Services
Here are the Services we offer, and how you should use them.
We provide Services in the field of machine learning, here is the list:
Open-Source Libraries: including Transformers, Datasets and Tokenizers
Hugging Face public Hub: where you can build, benchmark, share, version and deploy Models, Datasets and Machine Learning Applications accessible by all Users
Hugging Face private Hub: where you can build, benchmark, share, version and deploy Models, Datasets and Machine Learning Applications, that are only accessible by You or your Organization(s)
Inference API Service: where you or your organization(s) can run inference via application programming interface on machine learning models publicly or privately hosted on our Model Hub
AutoTrain premium Service: create state of the art Models from your own training data, everything being automatically hosted privately on the Model Hub. You can then share them publicly and/or serve them through the Inference API Service for example.
Expert Acceleration Program: get Premium Support on the use of our open source and all our services and/or products.
Infinity Service: deploy end-to-end optimized inference pipelines for state of the art Transformers Models.
Hardware Partner Program: access State of the Art hardware and hardware-specific machine learning optimization techniques for production performance.
Inference Endpoints: easily deploy machine learning models on dedicated, secure and autoscaling infrastructure
🔜 More awesome Services. Stay tuned!! 🚀
You must use our Services in strict compliance with these Terms, the Supplemental Terms for each Service, all of our policies available on our Website, and all applicable laws or regulations in the relevant jurisdiction(s).
We may at any time modify, suspend, or discontinue, temporarily or permanently, the Services (or any part thereof) with or without notice. You agree that we will not be liable to you or to any third party for any modification, suspension or discontinuance of the Services.
👤 Your Account
Pretty basic, but necessary before accessing some of our Services!
In order to create an Account for yourself or for your Organization on our Website, you must be a natural person of at least age 13, or a legal entity duly registered. If you decide to create an Account for your Organization, you represent that you have the authority to act on behalf of your Organization and bind your Organization to these Terms.
When you create your Account, we are going to ask you to provide us with some basic information, such as your email address, password, username, full name, and other optional information such as an avatar, your interests, usernames for your third-party social networks, or payment information if you decide to purchase one of our paid Services. All information must be accurate and valid.
🔒Security is very important to us, and we need every member of our community to cooperate. You are responsible for maintaining the confidentiality and security of your password necessary for accessing your Account and the Services. You may not disclose your password to any third party, and you are solely responsible for any action taken with your Account. You must notify us immediately of any actual or suspected breach of security on your Account, loss or compromise of password, or unauthorized use of your Account.
💬 Your Content
Wondering about what we do with your Content?
You are solely responsible for the Content you post, publish, display or otherwise make available on our Website, and for any other action or omission that results from your use of the Services (including our Content or other user's Content), or the use by a person or an entity that you have authorized under your Account.
You represent and warrant that you have ownership, control, and responsibility for the Content you post or otherwise make available on our Website, or otherwise have the right to do so. Your Content must not be misleading or unlawful, and must not violate any of these Terms, applicable law and regulation, or infringe or misappropriate any rights of any person or entity. We may remove your Content at any time, at our sole discretion, if we have a concern about your Content.
You own the Content you create! We will not sell your Content, nor will we use it in any way other than as permitted under these Terms. However, by posting your Content or otherwise making it available on our Website, you must be aware that:
You hereby grant us a worldwide, royalty-free and non-exclusive license to use, display, publish, reproduce, distribute, and make derivative works of such Content to provide Services and as otherwise permitted under these Terms and our Privacy Policy; and,
Your Content will be viewed by others, and therefore:
If you decide to set your Repository public, you grant each User a perpetual, irrevocable, worldwide, royalty-free, non-exclusive license to use, display, publish, reproduce, distribute, and make derivative works of your Content through our Services and functionalities;
If you decide to set your Repository private, we will use reasonable and appropriate measures designed to keep your Content confidential, and protected from any unauthorized access or disclosure. However, we may access or share your private information pursuant to the terms set forth in our Privacy Policy.
When Content contains notice of a reasonable and customary license, (such as an open source license) such Content is intended to remain under the terms of such license when further accessed, distributed, or used. Neither party is permitted to remove reference to any such license.
Any Content you download, access or use from us or another User, is at your own risk and subject to these Terms and/or the terms accompanying such Content.
💰 Payment
We work hard to make our Services most useful to you, and we thank you for your trust and support!
Our plans and fees payable for the use of the Services you decide to purchase are available at https://huggingface.co/pricing. You may decide to choose a custom plan, in which case the payable fees and payment terms will be subject to further discussions and mutual agreement with us, and will be specified in the applicable Services Agreement, Order Form or any other binding document signed between us.
All fees are exclusive of any applicable taxes, which You are solely responsible to pay.
We reserve the right to adjust our pricing from time to time and at our sole discretion. In such event, prices will remain fixed during the term of your current subscription, and adjusted fees will apply only from your next subscription term.
Plans are billed in advance on a monthly basis; usage-based fees, which apply if you exceed your allotted usage, are billed as they are incurred.
💳 Payment is processed on the Website, which includes a third-party payment or credit card processor's services. The payment processor's or credit card company's agreement governs your use of the designated account or credit card you provide, and you must refer to that agreement and not these Terms to determine your rights and liabilities relating to such agreement, account and activities. By providing us with your account or credit card number and associated payment information, you agree that we are authorized to immediately invoice your account for all fees due and payable and that no additional notice or consent is required. You agree to notify us immediately of any change in your billing address or the account or credit card used for the payment. All fees are non-refundable and exclusive of any applicable taxes, which you are solely responsible to pay. You will indemnify us for any taxes relating to your purchase or use of the Services, except for taxes relating to our income.
For certain Services, the Service Fees and payment terms may be specified in the Supplemental Terms and/or in any other binding document signed between us, including but not limited to an Order Form, a Scope of Work, or a Master Service Agreement, which are fully incorporated into the Agreement between us.
😔 Termination
Don't go! But if you do, here is what happens after termination.
You may decide to cancel your Account whenever you want, at your sole discretion.
We may do the same, and we reserve the right to suspend or terminate your access to the Services anytime with or without cause, and at our own discretion, with or without notice.
Upon cancellation of your Account, we will use commercially reasonable efforts to delete your information and Content of your own Repositories, whether public or private, within 90 days. We will not delete the Content that you contributed to other Users' Repositories, or copies made by us or other Users.
We also reserve the right to retain your information for legal or regulatory compliance, pursuant to standard archiving, recovery, and back-up processes and practices, and pursuant to our Privacy Policy.
For certain Services, the Service Term and causes for termination may be specified in the Supplemental Terms and/or in any other binding document signed between us, including but not limited to an Order Form, a Scope of Work, or a Master Service Agreement, which are fully incorporated into the Agreement between us.
🤐 Confidentiality
What is confidential, stays confidential.
All information shared between us relating to these Terms and/or during negotiations before the execution of any binding document shall be treated as confidential (“Confidential Information”).
During the Service Term, and for at least one (1) year thereafter, we expressly agree (i) to maintain the strict confidentiality of such Confidential Information, and to refrain from disclosing such Confidential Information to any third party, except as authorized by the original disclosing party in writing; (ii) to use such Confidential Information only for the purposes of performing its obligations or exercising its rights under this Agreement; and (iii) to use at least a reasonable standard of care in protecting the Confidential Information.
These restrictions on the use or disclosure of Confidential Information shall not apply to any Confidential Information (i) which has been independently developed by the receiving Party, as evidenced by its written records, (ii) which has been lawfully received free of restriction from another source having the right to furnish such Confidential Information; or (iii) after it has become generally available to the public without breach of this section by the receiving Party; or (iv) which at the time of disclosure was already known to the receiving Party, and free of restriction as evidenced by documentation in such Party's possession; or (v) which the disclosing Party confirms in writing is free of such restrictions; or, (vi) which is required to be disclosed in any legal proceeding, upon express request from a governmental or regulatory agency, and/or pursuant to a requirement of law (and only with respect to such disclosure).
Each of us may disclose Confidential Information only to our employees, agents or subcontractors who need it in order to exercise rights or perform obligations under the Agreement, and who are required to protect it against unauthorized disclosure or use in a manner no less protective than required under the Agreement.
Confidential Information is and shall at all times remain the exclusive property of the disclosing Party.
Upon termination or expiration of the Agreement, we ask you to promptly destroy or return all Confidential Information, and we will do the same if you ask us to do so.
💡 Intellectual Property
Let's give credit where it is due, and protect our intellectual property rights!
Proprietary Rights. We retain ownership of all of our intellectual property rights related to the Website and the Services, including all improvements to the Services. All materials that we produce, including the Website, design, code, graphics, interfaces, trademarks, and logo shall remain our exclusive property. You may not alter, reproduce, republish, or license any of our proprietary materials unless we expressly give you written permission to do so. All rights not expressly granted are reserved and retained by us.
Nothing in these Terms is intended to limit our use of our knowledge, skills, experience, ideas, concepts, know-how and/or techniques developed or learned at any time, without limitation. If you provide us feedback regarding the use, operation, performance, or functionality of our Website, Services, or business (collectively, "Feedback"), you hereby grant us a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive right and license to exploit and commercialize the Feedback, improve the Services, and develop and/or commercialize new offerings, which we will solely and exclusively own. In addition and subject to our Privacy Policy, we may aggregate, anonymize, or otherwise learn from data relating to your use of the Services, and use the foregoing to improve those Services.
DMCA Policy. We comply with the Digital Millennium Copyright Act Policy! If you have any claims that any content on our Website violates or infringes your intellectual property rights, you may send your complaint to dmca@huggingface.co with detailed and accurate information supporting your claim. You also represent and warrant that you will not knowingly provide misleading information to support your claim.
Open Source. Certain items provided with the Services may be subject to "open source" or "creative commons" or other similar licenses (collectively, "Open Source"). The Open Source license terms are not intended to be replaced or overridden by the license and other terms of these Terms; however, the limitations of liabilities, disclaimers, and this provision apply to any such Open Source. Nothing in these Terms limits your rights under, or grants you rights that supersede, the terms and conditions of any applicable Open Source license. If we (or you) make modifications to Open Source, and if the applicable Open Source requires that such modifications be made available, and we (or you) do not already publish such modifications via the applicable open source community, then such modifications will be made available on the applicable websites.
Supplemental Terms. Certain Services may be governed by specific intellectual property terms which are stated in the Supplemental Terms and/or in any other binding document signed between us, and which are fully incorporated into the Agreement between us.
⛔ Privacy
Your privacy is paramount to us, and here is how we respect it.
We will provide the Services in accordance with our Privacy Policy available at: https://huggingface.co/privacy.
🎓 Liability
Our worst case scenario.
Neither of us (or any of our affiliates, subsidiaries, contractors, licensors, officers, directors, agents, or employees ("Related Parties")) will be liable for any indirect, incidental, consequential, punitive, special, or other similar damages, including loss of revenue, profits, data, benefits, or savings, whether or not due to the fault or negligence of the company or related parties, and regardless of whether either of us or our related parties have been advised of the possibility of such damages or losses.
Either Party’s (and each Related Party’s) aggregate liability to the other Party or any third party in any circumstance will not exceed the amount that you paid us during the 12-month period immediately preceding the last claim (or $50 if relating to a free service). This limitation will not apply to (i) either party’s liability from fraud, gross negligence, recklessness, or willful or criminal misconduct, (ii) your liability for infringement of our intellectual property rights, (iii) your liability for breach of the confidentiality section, or (iv) amounts you owe us for the service made available as per your payment obligations.
👩⚖️ Indemnity
Your worst-case scenario.
You are solely and exclusively responsible for your use of the Services!
In this regard, you agree to indemnify, defend and hold harmless us and Related Parties from all claims, liability, and expenses, including attorney's fees, arising out of or in connection with your use of (or inability to use) the Services, including but not limited to your violation of these Terms, applicable law or regulation, any Content or data posted or used by you, or any other party's use of any Service with your credentials, unless arising directly from Hugging Face’s fraud, gross negligence, recklessness, or willful or criminal misconduct, provided that we provide you with (i) a prompt written notice of the claim, demand, suit or proceeding, (ii) sole control of the defense and settlement of the claim, demand, suit or proceeding, and (iii) all reasonable assistance and cooperation in connection with the defense and settlement of the claim, at its own expense.
🙅♂️ Disclaimer of Warranties
There are certain promises that we cannot make...
We provide Services that you may or may not decide to access, use or purchase. In this regard, we make no warranties or representations about these Services.
In other words, except as expressly provided otherwise herein, and to the fullest extent permitted by law, the Services and Content are provided "as is" and "as available".
We disclaim all warranties or guarantees of any kind, express or implied, whether arising under any law or from any usage in trade, or otherwise, including but not limited to the implied warranties of merchantability, non-infringement, quiet enjoyment, fitness for a particular purpose, or otherwise.
We further disclaim all warranties or guarantees about the accuracy, reliability or benefits of any Services, artificial intelligence, Models or any other technology or Content, or that the Services or Content will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise.
You will be solely responsible for any damage resulting from your use of or access to the Services, your downloading of Content or data, or use of any other material provided by or through the Services.
📃 Miscellaneous
Almost done! These are important legal terms that you should also read.
Governing law and dispute resolution. These Terms and all matters regarding their interpretation and/or enforcement are governed by the Law of the State of New York, excluding its choice of law rules. If a dispute or claim relating to these Terms arises, we each agree to make a reasonable and good faith effort to agree on an out-of-court solution and to resolve the dispute. If no out-of-court settlement is reached, any related action, lawsuit, or proceeding must be brought and adjudicated exclusively by state or federal courts located in the city of New York, United States of America. Any claim, action, suit or proceeding relating to these Terms must be brought by you within one year of the event that gave rise to the claim or such claim is hereby waived to the maximum extent permitted by law.
Assignment. We may assign or transfer all or part of our rights and obligations under these Terms to an affiliate, successor or any other entity or person without obtaining your prior written consent. Conversely, you may not assign or transfer all or part of your rights and obligations under Terms without obtaining our prior written consent.
Subcontracting. We may subcontract all or part of our obligations under these Terms at our own discretion, and without notifying you. Nothing to worry about: we will bear the same degree of responsibility for the acts and omissions of subcontractors acting on our behalf in the performance of their obligations under these Terms as we would if such acts and omissions were performed by us directly.
Changes in law or regulation. If there is any change in law or regulation that would materially restrict or prohibit our ability to provide the Services pursuant to these Terms, we may suspend or cancel the Services, or otherwise amend these Terms.
Export Control and Sanctions. Any Service provided pursuant to these Terms may be subject to export control and sanctions laws of the U.S. and/or other applicable jurisdictions. Therefore, you may only access and use the Service in compliance with U.S. and other applicable export control and sanctions laws and regulations.
Headings. Headings used throughout these Terms are used for convenience and reference only, and have no legal effect, nor shall affect the interpretation of these Terms.
Entire Agreement. These Terms, together with all of the terms, policies and notices available at https://huggingface.co, or any other binding documents we provide, or agreements provided or executed by us, constitute the entire agreement between us, and supersede all previous negotiations, proposals, commitments, writings, oral statements and understandings of any nature whatsoever. Any standard form purchase order or similar document you provide us or reference in any payment is expressly rejected if it differs from, or adds to, these Terms.
Order of Precedence. In the event of a conflict between provisions arising out of any documents included in the Agreement, the order of precedence will be as follows, unless expressly stated otherwise: (i) the applicable Order Form if any; (ii) the applicable Scope of Work if any; (iii) any other binding document signed between us; (iv) the Supplemental Terms; (v) these Terms of Service; (vi) all other documents or policies incorporated by reference in the Agreement.
Severability. If any provision of these Terms, by action of law or for any other reason, is held to be prohibited, invalid, void or unenforceable in any relevant jurisdiction, such provision will be stricken, and the remaining provisions of these Terms will remain in full force and effect.
No Waiver. The failure, in one or more instances, to perform any of the terms or conditions of these Terms, or to exercise any right hereunder, shall not be construed as a waiver of the future performance of any such terms or conditions, or the future exercise of such right, and the obligations under these Terms with respect to such performance shall continue in full force and effect.
Survival. The termination or expiration of these Terms shall not relieve from any obligation (i) that may have arisen prior to such termination or expiration, or (ii) that needs to survive termination or expiration in order to give full effect to its meaning, including without limitation payment obligations, confidentiality obligations, limitation of liability, warranty disclaimers, indemnities, governing law and dispute resolution, miscellaneous, and definitions.
Execution. Each Party represents and warrants that (i) it possesses the legal right and capacity to enter into, execute, deliver and perform the Agreement; (ii) the individual signing the Agreement on the Party’s behalf has full power and authority to bind the Party to the terms and conditions set out in this Agreement; and (iii) the Agreement is a valid and binding obligation of that Party. You agree that an electronic signature shall have the same force and effect as manual signatures.
🔗 Quick links
Supplemental Terms
Privacy Policy |
https://huggingface.co/shop | Fresh and Hot
Eggceptional Hoodie
$64
Resort Cap
$30
Shadow Longsleeve
$45
Sport Socks
$12
Varsity Crewneck
$55
Wordmark Baseball Cap
$35
All the swags
BigCode Logo T-Shirt
$35
Logo Sticker
$1
Logo T-Shirt
$35
Mascot Baseball Cap
$35
Mascot Hoodie
$64
Mascot Sticker
$1
Mascot T-Shirt
$35
Party Mug
$15
Womens Logo Tee
$35
FAQs
For support, email shop@huggingface.co |
https://huggingface.co/learn | Hugging Face
Learn |
https://huggingface.co/blog | Ethics and Society Newsletter #5: Hugging Face Goes To Washington and Other Summer 2023 Musings
By September 29, 2023
Finetune Stable Diffusion Models with DDPO via TRL
By September 29, 2023 guest
Non-engineers guide: Train a LLaMA 2 chatbot
By September 28, 2023
Llama 2 on Amazon SageMaker a Benchmark
By September 26, 2023
Inference for PROs
By September 22, 2023
Rocket Money x Hugging Face: Scaling Volatile ML Models in Production
By September 19, 2023 guest
Introduction to 3D Gaussian Splatting
By September 18, 2023
Object Detection Leaderboard
By September 18, 2023 guest
Optimizing your LLM in production
By September 15, 2023
Introducing Würstchen: Fast Diffusion for Image Generation
By September 13, 2023
Fine-tuning Llama 2 70B using PyTorch FSDP
By September 13, 2023
Overview of natively supported quantization schemes in 🤗 Transformers
By September 12, 2023
SafeCoder vs. Closed-source Code Assistants
By September 11, 2023
Efficient Controllable Generation for SDXL with T2I-Adapters
By September 8, 2023 guest |
https://huggingface.co/privacy | Hugging Face Privacy Policy
🗓 Effective Date: March 28, 2023
We have implemented this Privacy Policy because your privacy is important to us. This Privacy Policy (the “Policy”) describes the type of information that Hugging Face, Inc. (the “Company”) gathers from users of the Hugging Face services (the “Services”), and how the Company uses that information.
The Policy is part of the Company Terms of Use. The Policy applies to all users of the Services (“Users”).
The Company occasionally collects Personal Information from Users. “Personal Information” means any information that can, alone or associated with other information, be used to identify an individual, for instance by reference of a name, a username, an email address, an IP address, a photograph or others.
By using the Services, you consent to the terms of the Policy and to our processing of Personal Information in the manner and for the purposes set forth herein. If you do not agree with the Policy, please do not use the Services.
The Company reserves the right, at its sole discretion, to change the Policy at any time, which change will be effective 10 days following posting the revision to the Policy on the Hugging Face website (the “Website”). Your continued use of the Services 10 days following such posting means you accept those changes.
1. INFORMATION WE COLLECT
The Company collects the following information, some of which might be Personal Information.
A. Information you provide directly
The Company collects information directly provided by Users as part of using the Services, such as:
information provided as part of setting up an account on the Website: email address, password, username, full name, and other optional information such as an avatar, your interests, or usernames to your third-party social networks,
payment information provided, if you decide to upgrade your user or organization account: credit card information,
other information and materials that you decide to post on the Website (e.g., the discussion forum, or other),
communications between you and the Company, as part of using the Services.
At any time during your use of the Services, you may decide to share some information or content publicly or privately.
If you decide to share your information or content publicly, and if you decide to include Personal Information, you understand that anyone may view this information.
If you decide to keep your information private and control the access to it, you understand that only the users that you authorize will view this information. The Company also reserves the right to access this information with your consent, or without your consent only for the purposes of pursuing legitimate interests such as maintaining security on its Services or complying with any legal or regulatory obligations.
B. Information we collect from third parties
We may collect Information from third parties that help us deliver the Services or process information.
C. Information we automatically collect from your use of the Services
The Company automatically records information from your use of the Services such as:
information about your Use of the Services, your session (date, location), your IP address,
information from cookies, especially your login information, your preferences,
information about your device: type, model, version, operating system, browser
D. Cookies
We use cookies only for the purposes of delivering, updating, monitoring, improving the Services, and maintaining security on our Services by detecting, preventing and responding to any type of threats or incidents.
We may collect Information through those cookies. If you do not wish to accept these cookies and you decide to disable them, you will not be able to access and use the Services.
E. “Do Not Track”
On September 27, 2013, California enacted A.B. 370, amending the California Online Privacy Protection Act to require website operators to disclose how they respond to "Do Not Track Signals"; and whether third parties collect personally identifiable information about users when they use online services.
The Company honors "do not track" signals and does not track, use cookies, or use advertising when a “do not track” mechanism is in place.
The Company does not authorize the collection of personally identifiable information from our users for third party use through advertising technologies without separate member consent.
California Civil Code Section 1798.83 also permits customers who are California residents to request certain information regarding Our disclosure of Personal Information to third parties for direct marketing purposes. To make such a request, please send an email to privacy@huggingface.co. Please note that the Company is only required to respond to one request per customer each year.
2. USE OF INFORMATION
Purposes of the use of Information
The Company may use information from Users for the following purposes:
to deliver the Services, which may include the creation of Your account, the display of Your profile or Your content, or if applicable the upgrading of Your account to a paid account,
to operate and improve the Services by providing you with more effective customer service, making the Services easier to use by eliminating the need for you to enter the same information repeatedly; performing research and analysis aimed at improving the Services, or other products and technologies of the Company; automatically updating the Services; diagnosing or fixing problems with the Services,
to conduct analysis or research on the Services or any topics related to it, for business operations or scientific purposes,
to communicate with you, especially through the sending of welcome emails, information on technical service issues, security announcements, information of new services available, legal notices, response to your requests, or any other information that we think might interest or be relevant to you,
to ensure and maintain security on our Services or Website, which may include detecting, preventing, investigating or otherwise addressing fraud or security issues,
to protect against harm to the rights, property or safety of the Company, our Users, yourself or the public,
to enforce any applicable terms of service or agreement, including investigations of potential violations thereof,
to comply with any applicable law, regulation, legal process or governmental requests.
B. Grounds for the use of Information
Pursuant to applicable data protection laws, and especially the European Union’s General Data Protection Regulation (EU) 2016/679 (the “GDPR”), Hugging Face remains under an obligation to notify the Users about the legal basis on which their Personal Information is processed.
Consent
By creating an account on the Website and by using the Services, you consent to disclose information, some of which might be personal, and to our processing of such Personal Information in the manner and for the purposes set forth in this Policy.
Agreement
If you or your organization enter into an agreement with Hugging Face, either by simply using the Services and abiding by the terms and conditions available on the Website, or by executing another separate agreement, you also consent to our processing of your Personal Information pursuant to the obligations of such an agreement.
Legitimate Interests
Apart from the above cases, Hugging Face will use the information collected from you to pursue legitimate interests such as legal or regulatory compliance, security control, business operations, scientific research, or any other interest reasonably held as legitimate.
3. SHARING OF INFORMATION
The Company will not sell, rent or lease your Personal Information except as provided for by this Policy. The Company may also share other information as provided by this Policy.
A. Affiliates
The Company may share User Information and Personal information collected by the Services with businesses that are legally part of the same group as the Company, or that become part of that group in the event of a change of control, merger, acquisition or sale (“Affiliates”).
B. Third Party Service Providers
The Company may occasionally hire other companies to provide limited services on its behalf, such as providing customer support, hosting websites, processing transactions, or performing statistical analysis of its Services. Those companies will be permitted to obtain only the Personal Information they need to deliver the relevant service. They will be required to maintain the confidentiality of the information and are prohibited from using it for any other purpose. Please refer to the list of Third Party Service Providers below.
C. With your consent
At any time during your use of the Services, or upon explicit request from us, you may consent to the disclosure of your information.
D. For security and safety purposes
In the event of any fraud, security threats or incidents, we reserve the right to disclose your information without your consent for the purposes of ensuring and maintaining security on our Website and for all of our Users, and detecting, preventing, investigating or otherwise addressing fraud or security issues.
Similarly, we reserve the right to disclose your information without your consent for the purpose of protecting against harm to the rights, property or safety of the Company, our Users, yourself or the public.
E. For legal or regulatory purposes
We also reserve the right to disclose your information without your consent to comply with any applicable law, regulation, legal process or governmental requests.
G. Anonymous Information
The Company may use Anonymous Information (as defined below) or disclose it to third party service providers, to provide and improve the Services and other products or technologies of the Company. The Company may disclose Anonymous Information (with or without compensation) to third parties, including advertisers and partners, for purposes including, but not limited to, targeting advertisements. "Anonymous Information" means information which does not enable identification of an individual User, such as aggregated information about use of the Services.
4. YOUR RIGHTS
A. Access your Information
You may be entitled under data protection laws to access and review Personal Information the Company holds related to you.
You may access, modify or delete the Information we collected by editing your profile or controlling the content that you share at any time.
If you have any other request, all such communications regarding access to Personal Information should be addressed to: privacy@huggingface.co. Such inquiries should be clearly marked as data protection queries and you should indicate if the request is time sensitive.
B. Data retention
We retain your Information for as long as necessary to deliver the Services, to comply with any applicable legal requirements, to maintain security and prevent incidents and, in general, to pursue our legitimate interests.
You may decide to cancel your account and your content at any time by editing your profile. If you wish to request the erasure of all of your Personal Information that we process, you may do so by sending a written request to privacy@huggingface.co.
5. DATA SECURITY
The security of your Personal Information is important to us. The Company follows generally accepted industry standards, including the use of appropriate administrative, physical and technical safeguards, to protect Personal Information. However, no method of transmission over the Internet, or method of electronic storage, is fully secure. Therefore, while the Company strives to use commercially acceptable means to protect Personal Information, the Company cannot guarantee its absolute security or confidentiality. If you have any questions about security, you can contact the Company at privacy@huggingface.co.
In the event of an incident affecting your Personal Information, we will promptly use commercially acceptable means to identify and address the incident, and to notify you.
Please be aware that certain Personal Information and other information provided by you in connection with your use of the Services may be stored on your device (even if that Information is not collected by the Company). You are solely responsible for maintaining the security of your device from unauthorized access. Similarly, you also remain responsible for maintaining the confidentiality of your password or any other information that should reasonably be held confidential.
6. LOCATION OF PROCESSING AND DATA TRANSFERS
The Company and its servers are located in the United States.
Personal Information collected by the Services may be stored and processed in the United States or any other country in which the Company or its affiliates, subsidiaries or agents maintain facilities. By using the Services, you consent to any such transfer of information outside of your country. The Company may transfer your Personal Information to affiliated companies for the purpose of storing or processing such information on its behalf. Such information may be transferred to other countries around the world. The Company requires that these parties agree to process such information in compliance with the Policy.
In particular, if you provide Personal Information, it may be transferred to and processed on computers in the U.S. and other countries. We strive to take appropriate safeguards to ensure that your Personal Information will remain protected in a manner consistent with standard applicable data protection laws.
If you have any other question, please contact the Company at: privacy@huggingface.co.
7. CHILDREN’S PRIVACY
The Services are neither directed to nor structured to attract children under the age of 13 years. Accordingly, the Company does not intend to collect Personal Information from anyone it knows to be under 13 years of age. The Company will direct potential users under 13 years of age not to use the Services.
If the Company learns that Personal Information of persons less than 13 years of age has been collected without verifiable parental consent, the Company will take the appropriate steps to delete this information.
To make such a request, please contact the Company at: privacy@huggingface.co.
8. COMMUNICATIONS AND CAN-SPAM ACT
The Company may collect your email address in order to send information and respond to inquiries and/or other requests or questions.
The Company does not use false or misleading subjects or email addresses. The Company reasonably identifies advertisements and includes in its communications the physical address of its business location. The Company honors opt-out/unsubscribe requests. Users may follow the instructions at the bottom of an email from the Company in order to unsubscribe from correspondence.
9. CONTACT US
If you have questions about this Policy, please contact privacy@huggingface.co.
The main establishment in the European Union is Hugging Face, SAS, a French société par actions simplifiée à associé unique registered in the Paris Trade and Companies Register under the number 822 168 043, and whose headquarters are located on 9 rue des Colonnes, 75002 Paris, France. The designation of this main establishment in the European Union gives full authority to the French Data Protection Agency, la Commission Nationale de l'Informatique et des Libertés (CNIL) per the General Data Protection Regulation (GDPR).
10. LIST OF THIRD-PARTY SERVICE PROVIDERS
We may share your information with Third-Party Service Providers, and when we do, we ensure that they access your information in compliance with applicable data protection laws. The following list uses the format Subprocessor | Description of Subprocessing | Location of Subprocessing.
Discourse | Forum | United States
Intercom | Chat | United States
AWS SES | Emails | United States
Stripe | Payment | United States
Curated | Newsletters | United States
Google/Gsuite | Payment | United States
Google Cloud Platform | Hosting/Infrastructure | United States/EMEA
Docker.io | Hosting/Infrastructure | United States/EMEA
Circle CI Software | Integration and testing platform | United States
Github | Hosting code | United States
Amazon Web Services, Inc. | Hosting/Infrastructure | United States
OVH | Hosting/Infrastructure & Invoicing | France
Google Analytics | Analytics | United States
Slack Technologies | Communication | United States
Outreach | Customer engagement | United States
Hugging Face SAS | All of the above | France |
https://huggingface.co/datasets | Edit Datasets filters
Multimodal
Feature Extraction Text-to-Image Image-to-Text Text-to-Video Visual Question Answering Graph Machine Learning
Computer Vision
Depth Estimation Image Classification Object Detection Image Segmentation Image-to-Image Unconditional Image Generation Video Classification Zero-Shot Image Classification
Natural Language Processing
Text Classification Token Classification Table Question Answering Question Answering Zero-Shot Classification Translation Summarization Conversational Text Generation Text2Text Generation Fill-Mask Sentence Similarity Table to Text Multiple Choice Text Retrieval
Audio
Text-to-Speech Automatic Speech Recognition Audio-to-Audio Audio Classification Voice Activity Detection
Tabular
Tabular Classification Tabular Regression Tabular to Text Time Series Forecasting
Reinforcement Learning
Reinforcement Learning Robotics
Datasets
66,034
vikp/textbook_quality_programming
Viewer • Updated 5 days ago • 262 • 104
emrgnt-cmplxty/sciphi-textbooks-are-all-you-need
Viewer • Updated 3 days ago • 254 • 59
lmsys/lmsys-chat-1m
Preview • Updated about 11 hours ago • 5 • 166
fka/awesome-chatgpt-prompts
Viewer • Updated Mar 7 • 1.92k • 3.5k
Open-Orca/OpenOrca
Viewer • Updated about 19 hours ago • 14.2k • 715
uonlp/CulturaX
Viewer • Updated 8 days ago • 14.4k • 184
meta-math/MetaMathQA
Viewer • Updated 3 days ago • 204 • 47
ShengbinYue/DISC-Law-SFT
Preview • Updated 8 days ago • 3 • 20
taesiri/arxiv_qa
Viewer • Updated 1 minute ago • 97 • 106
fondant-ai/fondant-cc-25m
Viewer • Updated 5 days ago • 10 • 13
knowrohit07/know_sql
Viewer • Updated 13 days ago • 845 • 73
bigcode/the-stack
Viewer • Updated Apr 13 • 1.57k • 506
QingyiSi/Alpaca-CoT
Viewer • Updated 19 days ago • 350 • 482
nampdn-ai/tiny-textbooks
Viewer • Updated 6 days ago • 631 • 37
glaiveai/glaive-code-assistant
Viewer • Updated 6 days ago • 225 • 27
Anthropic/hh-rlhf
Viewer • Updated May 26 • 51k • 680
togethercomputer/RedPajama-Data-1T
Viewer • Updated Jun 30 • 14.2k • 868
b-mc2/sql-create-context
Viewer • Updated 4 days ago • 3.99k • 159
openbmb/UltraFeedback
Viewer • Updated 3 days ago • 33 • 10
tatsu-lab/alpaca
Viewer • Updated May 22 • 33k • 434
anon8231489123/ShareGPT_Vicuna_unfiltered
Viewer • Updated Apr 12 • 493 • 562
databricks/databricks-dolly-15k
Viewer • Updated Jun 30 • 30.2k • 379
tiiuae/falcon-refinedweb
Viewer • Updated Jun 20 • 2.63k • 558
roneneldan/TinyStories
Viewer • Updated Aug 16 • 12.4k • 240
Duxiaoman-DI/FinCorpus
Viewer • Updated 11 days ago • 26 • 18
yahma/alpaca-cleaned
Viewer • Updated Apr 10 • 24.7k • 244
OpenAssistant/oasst1
Viewer • Updated May 2 • 7.59k • 1.04k
allenai/dolma
Preview • Updated Aug 18 • 321k • 329
nickrosh/Evol-Instruct-Code-80k-v1
Viewer • Updated Jul 11 • 646 • 80
LDJnr/LessWrong-Amplify-Instruct
Viewer • Updated 7 days ago • 9 • 9 |
https://huggingface.co/join/discord | |
https://huggingface.co/spaces | Spaces
Discover amazing ML apps made by the community! |
https://huggingface.co/models | Edit Models filters
Multimodal
Feature Extraction Text-to-Image Image-to-Text Text-to-Video Visual Question Answering Document Question Answering Graph Machine Learning
Computer Vision
Depth Estimation Image Classification Object Detection Image Segmentation Image-to-Image Unconditional Image Generation Video Classification Zero-Shot Image Classification
Natural Language Processing
Text Classification Token Classification Table Question Answering Question Answering Zero-Shot Classification Translation Summarization Conversational Text Generation Text2Text Generation Fill-Mask Sentence Similarity
Audio
Text-to-Speech Automatic Speech Recognition Audio-to-Audio Audio Classification Voice Activity Detection
Tabular
Tabular Classification Tabular Regression
Reinforcement Learning
Reinforcement Learning Robotics
Models
347,845
mistralai/Mistral-7B-v0.1
Text Generation • Updated about 1 hour ago • 37.9k • 655
mistralai/Mistral-7B-Instruct-v0.1
Text Generation • Updated about 9 hours ago • 25.7k • 431
monster-labs/control_v1p_sd15_qrcode_monster
Updated Jul 21 • 377k • 885
TheBloke/Mistral-7B-Instruct-v0.1-GGUF
Text Generation • Updated 5 days ago • 910 • 120
stabilityai/stable-diffusion-xl-base-1.0
Text-to-Image • Updated 1 day ago • 3.55M • 2.95k
stabilityai/stablelm-3b-4e1t
Text Generation • Updated 3 days ago • 4.35k • 104
migtissera/SynthIA-7B-v1.3
Text Generation • Updated 3 days ago • 458 • 88
Qwen/Qwen-14B-Chat
Text Generation • Updated 4 days ago • 4.85k • 149
TheBloke/Mistral-7B-v0.1-GGUF
Text Generation • Updated 5 days ago • 593 • 74
microsoft/phi-1_5
Text Generation • Updated 6 days ago • 168k • 869
Open-Orca/Mistral-7B-OpenOrca
Text Generation • Updated about 8 hours ago • 29.3k • 73
meta-llama/Llama-2-7b-chat-hf
Text Generation • Updated Aug 9 • 1.09M • 1.32k
meta-llama/Llama-2-7b
Text Generation • Updated Jul 19 • 2.65k
ostris/ikea-instructions-lora-sdxl
Text-to-Image • Updated 4 days ago • 2.31k • 57
lllyasviel/sd_control_collection
Updated 24 days ago • 500
pfnet/plamo-13b
Text Generation • Updated 6 days ago • 6.7k • 55
Qwen/Qwen-14B
Text Generation • Updated 4 days ago • 3.05k • 122
aipicasso/emi
Text-to-Image • Updated 7 days ago • 3.25k • 57
runwayml/stable-diffusion-v1-5
Text-to-Image • Updated Aug 23 • 8.98M • 9.32k
TheBloke/Mistral-7B-OpenOrca-GGUF
Text Generation • Updated about 15 hours ago • 34 • 39
Nexusflow/NexusRaven-13B
Text Generation • Updated 2 days ago • 1.36k • 36
openai/whisper-large-v2
Automatic Speech Recognition • Updated 25 days ago • 154k • 1.14k
uwg/upscaler
Updated Aug 17 • 241
stabilityai/stable-diffusion-xl-refiner-1.0
Image-to-Image • Updated 8 days ago • 2.32M • 912
Phind/Phind-CodeLlama-34B-v2
Text Generation • Updated Aug 28 • 27.4k • 333
tiiuae/falcon-180B
Text Generation • Updated 27 days ago • 64.2k • 800
meta-llama/Llama-2-70b-chat-hf
Text Generation • Updated Aug 9 • 141k • 1.39k
meta-llama/Llama-2-7b-hf
Text Generation • Updated Aug 9 • 563k • 627
Xwin-LM/Xwin-LM-70B-V0.1
Text Generation • Updated 12 days ago • 4.63k • 147
Qwen/Qwen-14B-Chat-Int4
Text Generation • Updated 4 days ago • 2.84k • 47 |
https://huggingface.co/docs | Hugging Face
Documentation
Community
Blog
Learn
Discord
Forum
Github |
https://huggingface.co/login | Log In
SSO is available for companies |
https://huggingface.co/join | Join Hugging Face
Join the community of machine learners!
SSO is available for companies |
https://huggingface.co/pricing | Users and organizations already use the Hub as a collaboration platform,
we’re making it easy to seamlessly and scalably launch ML compute directly from the Hub.
The HF Hub is the central place to explore, experiment, collaborate and build technology with Machine Learning.
Join the open source Machine Learning movement!
→ Sign Up
Spaces Hardware
Starting at $0
Spaces are one of the most popular ways to share ML applications and demos with the world.
Upgrade your Spaces with our selection of custom on-demand hardware:
→ Get started with Spaces
Name | CPU | Memory | GPU | GPU memory | Hourly price
CPU Basic | 2 vCPU | 16 GB | - | - | $0.00
CPU Upgrade | 8 vCPU | 32 GB | - | - | $0.03
Nvidia T4 - small | 4 vCPU | 15 GB | Nvidia T4 | 16GB | $0.60
Nvidia T4 - medium | 8 vCPU | 30 GB | Nvidia T4 | 16GB | $0.90
Nvidia A10G - small | 4 vCPU | 15 GB | Nvidia A10G | 24GB | $1.05
Nvidia A10G - large | 12 vCPU | 46 GB | Nvidia A10G | 24GB | $3.15
Nvidia A100 - large | 12 vCPU | 142 GB | Nvidia A100 | 40GB | $4.13
Custom | on demand | on demand | on demand | on demand | on demand
Spaces Persistent Storage
All Spaces get ephemeral storage for free but you can upgrade and add persistent storage at any time.
Name | Storage | Monthly price
Small | 20 GB | $5
Medium | 150 GB | $25
Large | 1 TB | $100
Building something cool as a side project? We also offer community GPU grants.
Inference Endpoints
Starting at $0.06/hour
Inference Endpoints offers a secure production solution to easily deploy any ML model on dedicated and autoscaling infrastructure, right from the HF Hub.
→Learn more
CPU instances
Provider | Architecture | vCPUs | Memory | Hourly rate
aws | Intel Xeon - Ice Lake | 1 | 2GB | $0.06
aws | Intel Xeon - Ice Lake | 2 | 4GB | $0.12
aws | Intel Xeon - Ice Lake | 4 | 8GB | $0.24
aws | Intel Xeon - Ice Lake | 8 | 16GB | $0.48
azure | Intel Xeon | 1 | 2GB | $0.06
azure | Intel Xeon | 2 | 4GB | $0.12
azure | Intel Xeon | 4 | 8GB | $0.24
azure | Intel Xeon | 8 | 16GB | $0.48
GPU instances
Provider | Architecture | GPUs | Memory | Hourly rate
aws | NVIDIA T4 | 1 | 14GB | $0.60
aws | NVIDIA A10G | 1 | 24GB | $1.30
aws | NVIDIA T4 | 4 | 56GB | $4.50
aws | NVIDIA A100 | 1 | 80GB | $6.50
aws | NVIDIA A100 | 2 | 160GB | $13.00
aws | NVIDIA A100 | 4 | 320GB | $26.00
aws | NVIDIA A10G | 4 | 96GB | Enterprise
aws | NVIDIA A100 | 8 | 640GB | Enterprise
Create powerful AI models without code. AutoTrain is a new way to automatically train, evaluate and deploy state-of-the-art Machine Learning models by simply uploading data. Estimated costs are provided before training starts!
→ Start your first training
Tasks available in AutoTrain:
Image classification
Text Classification
Token Classification
Question Answering (extractive)
Translation
Summarization
Text Regression
Tabular Data (Classification and Regression)
Task | Free | PRO account | Pay as you go (unlimited)
Image tasks | Up to 500 images | Up to 1500 images | Cost available before training
NLP & tabular tasks | Up to 3,000 rows | Up to 5,000 rows | Cost available before training
Models trained | Up to 1 model | Up to 1 model | Cost available before training
https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster | Controlnet QR Code Monster v2 For SD-1.5
Model Description
This model is made to generate creative QR codes that still scan. Keep in mind that not all generated codes might be readable, but you can try different parameters and prompts to get the desired results.
NEW VERSION
Introducing the upgraded version of our model - Controlnet QR code Monster v2. V2 is a huge upgrade over v1, for scannability AND creativity.
QR codes can now seamlessly blend into the image by using a gray-colored background (#808080).
As with the former version, the readability of some generated codes may vary; however, playing around with parameters and prompts could yield better results.
You can find it in the v2/ subfolder.
How to Use
Condition: QR codes are passed as condition images with a module size of 16px. Use a higher error correction level to make it easier to read (sometimes a lower level can be easier to read if smaller in size). Use a gray background for the rest of the image to make the code integrate better.
Prompts: Use a prompt to guide the QR code generation. The output will depend heavily on the given prompt. Some prompts are accepted very easily by the QR code process, while others will require careful tweaking to get good results.
Controlnet guidance scale: Set the controlnet guidance scale value:
High values: The generated QR code will be more readable.
Low values: The generated QR code will be more creative.
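For reference, here is a minimal, illustrative sketch of how the condition image and guidance scale described above could be wired together with the diffusers library. It is not taken from this model card; it assumes the repository ships diffusers-compatible ControlNet weights (with the v2 weights in the v2/ subfolder), and the file names my_qr_code.png and qr_art.png are hypothetical placeholders for your own images.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
# Load the QR code ControlNet and attach it to a Stable Diffusion 1.5 base model
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
# Condition image: your own QR code (module size ~16px, gray background) -- path is a placeholder
qr_condition = load_image("my_qr_code.png")
image = pipe(
    "a medieval castle on a hill at sunset, highly detailed",
    image=qr_condition,
    controlnet_conditioning_scale=1.3,  # higher = more readable, lower = more creative
    num_inference_steps=30,
).images[0]
image.save("qr_art.png")
Whatever settings you use, always verify the result with a real scanner before relying on it.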
Tips
For an optimally readable output, try generating multiple QR codes with similar parameters, then choose the best ones.
Use the Image-to-Image feature to improve the readability of a generated QR code:
Decrease the denoising strength to retain more of the original image.
Increase the controlnet guidance scale value for better readability. A typical workflow for "saving" a code would be: max out the guidance scale and minimize the denoising strength, then bump the strength until the code scans.
Example Outputs
Here are some examples of creative, yet scannable QR codes produced by our model:
Feel free to experiment with prompts, parameters, and the Image-to-Image feature to achieve the desired QR code output. Good luck and have fun! |
https://huggingface.co/mistralai/Mistral-7B-v0.1 | Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our Release blog post
Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
Grouped-Query Attention
Sliding-Window Attention
Byte-fallback BPE tokenizer
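The card does not include a usage snippet at this point, so the following is a minimal sketch (not from the card) of plain text completion with transformers. It assumes a transformers release with Mistral support and that accelerate is installed so that device_map="auto" works; the prompt is arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
# Plain completion: the base model simply continues the prompt, it is not chat-tuned
inputs = tokenizer("My favourite condiment is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))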
Troubleshooting
If you see the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
Installing transformers from source should solve the issue:
pip install git+https://github.com/huggingface/transformers
This should not be required after transformers-v4.33.4.
Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
https://huggingface.co/spaces/AP123/IllusionDiffusion | App · Files · Community (304)
https://huggingface.co/spaces/Shopify/background-replacement | App · Files · Community (3)
https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory | App · Files · Community (225)
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0 | SD-XL 1.0-base Model Card
Model
SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps. Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows: First, the base model is used to generate latents of the desired output size. In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img") to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
Model Description
Developed by: Stability AI
Model type: Diffusion-based text-to-image generative model
License: CreativeML Open RAIL++-M License
Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
Resources for more information: Check out our GitHub Repository and the SDXL report on arXiv.
Model Sources
For research purposes, we recommend our generative-models Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time. Clipdrop provides free SDXL inference.
Repository: https://github.com/Stability-AI/generative-models
Demo: https://clipdrop.co/stable-diffusion
Evaluation
The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1. The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
pip install diffusers --upgrade
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
pip install invisible_watermark transformers accelerate safetensors
To just use the base model, you can run:
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")

# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()

prompt = "An astronaut riding a green horse"

images = pipe(prompt=prompt).images[0]
To use the whole base + refiner pipeline as an ensemble of experts you can run:
from diffusers import DiffusionPipeline
import torch

# load both base & refiner
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
refiner.to("cuda")

# Define how many steps and what % of steps to be run on each expert (80/20) here
n_steps = 40
high_noise_frac = 0.8

prompt = "A majestic lion jumping from a big stone at night"

# run both experts
image = base(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images
image = refiner(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_start=high_noise_frac,
    image=image,
).images[0]
When using torch >= 2.0, you can improve the inference speed by 20-30% with torch.compile. Simply wrap the unet with torch.compile before running the pipeline:
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
If you are limited by GPU VRAM, you can enable cpu offloading by calling pipe.enable_model_cpu_offload instead of .to("cuda"):
- pipe.to("cuda") + pipe.enable_model_cpu_offload()
For more information on how to use Stable Diffusion XL with diffusers, please have a look at the Stable Diffusion XL Docs.
Optimum
Optimum provides a Stable Diffusion pipeline compatible with both OpenVINO and ONNX Runtime.
OpenVINO
To install Optimum with the dependencies required for OpenVINO :
pip install optimum[openvino]
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace StableDiffusionXLPipeline with Optimum OVStableDiffusionXLPipeline. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set export=True.
- from diffusers import StableDiffusionPipeline
+ from optimum.intel import OVStableDiffusionPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
You can find more examples (such as static reshaping and model compilation) in optimum documentation.
ONNX
To install Optimum with the dependencies required for ONNX Runtime inference :
pip install optimum[onnxruntime]
To load an ONNX model and run inference with ONNX Runtime, you need to replace StableDiffusionXLPipeline with Optimum ORTStableDiffusionXLPipeline. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set export=True.
- from diffusers import StableDiffusionPipeline
+ from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
You can find more examples in optimum documentation.
Uses
Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
Generation of artworks and use in design and other artistic processes.
Applications in educational or creative tools.
Research on generative models.
Safe deployment of models which have the potential to generate harmful content.
Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
Limitations and Bias
Limitations
The model does not achieve perfect photorealism
The model cannot render legible text
The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
Faces and people in general may not be generated properly.
The autoencoding part of the model is lossy.
Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. |
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 | Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.
For full details of this model please read our release blog post
Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens. The very first instruction should begin with a begin-of-sentence token id. The subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
text = "<s>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> " "[INST] Do you have mayonnaise recipes? [/INST]"
This format is available as a chat template via the apply_chat_template() method:
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
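To check that the chat template reproduces the [INST] format shown above, you can render it to a plain string instead of token ids — a short sketch using the standard tokenize=False argument of apply_chat_template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# tokenize=False returns the formatted prompt string rather than token ids,
# so you can inspect the [INST] ... [/INST] structure before generating
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)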
Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
Grouped-Query Attention
Sliding-Window Attention
Byte-fallback BPE tokenizer
Troubleshooting
If you see the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
Installing transformers from source should solve the issue: pip install git+https://github.com/huggingface/transformers
This should not be required after transformers-v4.33.4.
Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF | Mistral 7B Instruct v0.1 - GGUF
Model creator: Mistral AI
Original model: Mistral 7B Instruct v0.1
Description
This repo contains GGUF format model files for Mistral AI's Mistral 7B Instruct v0.1.
About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
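As a quick sanity check that a downloaded file really is in GGUF format (and not an older GGML file), you can inspect its first four bytes, which spell out the ASCII magic "GGUF" — a small illustrative check, not an official tool:

# Check the 4-byte magic at the start of a model file (GGUF files begin with b"GGUF")
with open("mistral-7b-instruct-v0.1.Q4_K_M.gguf", "rb") as f:
    magic = f.read(4)
print("Looks like GGUF" if magic == b"GGUF" else f"Not GGUF (magic: {magic!r})")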
Here is an incomplete list of clients and libraries that are known to support GGUF:
llama.cpp. The source project for GGUF. Offers a CLI and a server option.
text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
Repositories available
AWQ model(s) for GPU inference.
GPTQ models for GPU inference, with multiple quantisation parameter options.
2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference
Mistral AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template: Mistral
<s>[INST] {prompt} [/INST]
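Filling this template in Python before handing the prompt to llama.cpp or another backend can be as simple as the following illustrative snippet (the spacing around {prompt} matches the template above):

# Fill the Mistral [INST] template with a user prompt
prompt = "What is your favourite condiment?"
formatted = f"<s>[INST] {prompt} [/INST]"
print(formatted)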
Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit d0cee0d
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
Sequence length note: The model will work at sequence lengths of 4096, or lower. GGUF does not yet have support for the new sliding window sequence length mode, so longer sequence lengths are not supported.
Explanation of quantisation methods
Click to see details
The new methods available are:
GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
Provided files
Name | Quant method | Bits | Size | Max RAM required | Use case
mistral-7b-instruct-v0.1.Q2_K.gguf | Q2_K | 2 | 3.08 GB | 5.58 GB | smallest, significant quality loss - not recommended for most purposes
mistral-7b-instruct-v0.1.Q3_K_S.gguf | Q3_K_S | 3 | 3.16 GB | 5.66 GB | very small, high quality loss
mistral-7b-instruct-v0.1.Q3_K_M.gguf | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss
mistral-7b-instruct-v0.1.Q3_K_L.gguf | Q3_K_L | 3 | 3.82 GB | 6.32 GB | small, substantial quality loss
mistral-7b-instruct-v0.1.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M
mistral-7b-instruct-v0.1.Q4_K_S.gguf | Q4_K_S | 4 | 4.14 GB | 6.64 GB | small, greater quality loss
mistral-7b-instruct-v0.1.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended
mistral-7b-instruct-v0.1.Q5_0.gguf | Q5_0 | 5 | 5.00 GB | 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M
mistral-7b-instruct-v0.1.Q5_K_S.gguf | Q5_K_S | 5 | 5.00 GB | 7.50 GB | large, low quality loss - recommended
mistral-7b-instruct-v0.1.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended
mistral-7b-instruct-v0.1.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss
mistral-7b-instruct-v0.1.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
How to download GGUF files
Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
LM Studio
LoLLMS Web UI
Faraday.dev
In text-generation-webui
Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-Instruct-v0.1-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.1.Q4_K_M.gguf.
Then click Download.
On the command line, including multiple files at once
I recommend using the huggingface-hub Python library:
pip3 install huggingface-hub
Then you can download any individual model file to the current directory, at high speed, with a command like this:
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
More advanced huggingface-cli download usage
You can also download multiple files at once with a pattern:
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
For more documentation on downloading with huggingface-cli, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.
To accelerate downloads on fast connections (1Gbit/s or higher), install hf_transfer:
pip3 install hf_transfer
And set environment variable HF_HUB_ENABLE_HF_TRANSFER to 1:
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.1-GGUF mistral-7b-instruct-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
Windows Command Line users: You can set the environment variable by running set HF_HUB_ENABLE_HF_TRANSFER=1 before the download command.
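The same download can also be done directly from Python with the huggingface_hub library, using the standard hf_hub_download helper (the same local-dir options as the CLI apply):

from huggingface_hub import hf_hub_download

# Download a single GGUF file into the current directory
local_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(local_path)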
Example llama.cpp command
Make sure you are using llama.cpp from commit d0cee0d or later.
./main -ngl 32 -m mistral-7b-instruct-v0.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Sequence length can be 4096 or lower. Mistral's sliding window sequence length is not yet supported in llama.cpp, so do not use sequence lengths longer than 4096.
If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins
For other parameters and how to use them, please refer to the llama.cpp documentation
How to run in text-generation-webui
Further instructions here: text-generation-webui/docs/llama.cpp.md.
How to run from Python code
You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries.
How to load this model in Python code, using ctransformers
I have not tested ctransformers with Mistral models. It may work, but will require that you set the model_type to llama for now, until ctransformers updates with specific support.
First install the package
Run one of the following commands, according to your system:
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
Simple ctransformers example code
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    model_type="mistral",
    gpu_layers=50,
)

print(llm("AI is going to"))
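A comparable llama-cpp-python sketch is shown below. It has not been specifically tested with this model either; the parameter names (model_path, n_ctx, n_gpu_layers, max_tokens) follow that library's documented API:

from llama_cpp import Llama

# n_gpu_layers controls GPU offloading; set it to 0 for CPU-only inference
llm = Llama(
    model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

output = llm(
    "<s>[INST] Write a short poem about llamas. [/INST]",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])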
How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
LangChain + llama-cpp-python
LangChain + ctransformers
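Beyond those guides, an untested minimal sketch of the llama-cpp-python route through LangChain could look like this (class and parameter names are those documented by LangChain at the time of writing):

from langchain.llms import LlamaCpp

# Point LangChain's LlamaCpp wrapper at a locally downloaded GGUF file
llm = LlamaCpp(
    model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
    temperature=0.7,
)

print(llm("[INST] Explain what a GGUF file is in one sentence. [/INST]"))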
Discord
For further support, and discussions on these models and AI in general, join us at:
TheBloke AI's Discord server
Thanks, and how to contribute
Thanks to the chirper.ai team!
Thanks to Clay from gpus.llm-utils.org!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Patreon: https://patreon.com/TheBlokeAI
Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
Original model card: Mistral AI's Mistral 7B Instruct v0.1
Model Card for Mistral-7B-Instruct-v0.1
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.1 generative text model, fine-tuned on a variety of publicly available conversation datasets.
For full details of this model please read our release blog post
Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"

encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
Grouped-Query Attention
Sliding-Window Attention
Byte-fallback BPE tokenizer
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
https://huggingface.co/spaces/facebook/MusicGen |
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard |