On a mission to solve NLP,
one commit at a time.
Tech & science
Our science contributions
We're on a journey to advance and democratize NLP for everyone. Along the way, we contribute to the development of technology for the better.
Victor Sanh et al.
AAAI 2019

Hierarchical Multi-Task Learning

Our paper has been accepted to AAAI 2019. 💥 We have open-sourced the code and a demo. 🎮
30k+ stars on GitHub

A Passionate Community

Our popular state-of-the-art NLP framework. Thousands of developers contribute code and model weights.
Write With Transformer
Victor Sanh et al. 2019
Distillation. A smaller, faster, lighter, cheaper version of BERT.
Code and weights are available through Transformers.
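As a rough illustration of the distillation idea behind DistilBERT, here is a minimal sketch (not the actual DistilBERT training code; `softmax` and `distillation_loss` are illustrative helpers): the student is trained to match the teacher's temperature-softened output distribution.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution (the soft-target part of a distillation objective)."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# A student whose logits agree with the teacher incurs a lower loss
# than one that disagrees.
teacher = [4.0, 1.0, 0.5]
good_student = [3.8, 1.1, 0.4]
bad_student = [0.5, 4.0, 1.0]
assert distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher)
```

The temperature softens the teacher's distribution so the student also learns from the relative probabilities of the "wrong" classes, not just the argmax.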
๐ŸŽ ๐ŸŽ ๐ŸŽ
A Transfer Learning approach to Natural Language Generation. A workshop paper on the approach we used to win the automatic-metrics track of the Conversational Intelligence Challenge 2 at NeurIPS 2018.
Meta-learning for language modeling
Thomas Wolf et al.
ICLR 2018

Cutting-edge research

Our workshop paper on Meta-Learning a Dynamical Language Model was accepted to ICLR 2018 💪💪. We use our implementation to power 🤗.
State-of-the-art coreference resolution


Our coreference resolution module is now the top open source library for coreference. You can train it on your own dataset and language.
State-of-the-art emotion detection
🔥 😰 😇 😊
Major blog posts
We spend a lot of time training models that can barely fit 1-4 samples per GPU. But SGD usually needs more than a few samples per batch for decent results. Here is a post gathering the practical tips we use, from simple tricks to multi-GPU code and distributed setups.
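One of the simplest tricks in this spirit is gradient accumulation: sum gradients over several small micro-batches and apply a single update, simulating a larger batch. A minimal sketch in plain Python on a toy quadratic loss (the function names and hyperparameters are illustrative, not taken from the post):

```python
# Toy model: minimize the mean of (w - x)^2 over the samples,
# whose per-sample gradient is 2 * (w - x).

def grad(w, x):
    return 2.0 * (w - x)

def train(samples, micro_batch=2, accumulation_steps=4, lr=0.05, epochs=50):
    w = 0.0
    effective_batch = micro_batch * accumulation_steps
    for _ in range(epochs):
        for start in range(0, len(samples), effective_batch):
            batch = samples[start:start + effective_batch]
            g = 0.0
            # Accumulate gradients over micro-batches instead of
            # computing one big batch at once.
            for i in range(0, len(batch), micro_batch):
                for x in batch[i:i + micro_batch]:
                    g += grad(w, x)
            # Single parameter update with the averaged gradient.
            w -= lr * g / len(batch)
    return w

samples = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w = train(samples)
# w converges toward the sample mean (4.5).
assert abs(w - 4.5) < 0.1
```

In a real PyTorch loop the same pattern is a backward pass per micro-batch with the optimizer step and gradient zeroing deferred until the accumulation window closes.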
How you can make your Python NLP module 50-100 times faster by using spaCy's internals and a bit of Cython magic! Comes with a Jupyter notebook with examples processing over 80 million words per second!
A post summarizing recent developments in Universal Word/Sentence Embeddings over 2017 and early 2018, along with future trends. With ELMo, InferSent, Google's Universal Sentence Encoder, learning by multi-tasking...
To introduce the work we presented at ICLR 2018, we drafted a visual and intuitive introduction to Meta-Learning. In this post, we start by explaining what meta-learning is in a visual, intuitive way. Then, we code a meta-learning model in PyTorch and share some of the lessons learned on this project.
View more on Medium · Blog · Privacy · Jobs