
TensorFlow XLM-RoBERTa

This repository provides TensorFlow versions of the XLM-RoBERTa model.


XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of filtered CommonCrawl data covering 100 languages. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.

Model Weights

| Model | Downloads |
| --- | --- |
| jplu/tf-xlm-roberta-base | config.json, tf_model.h5 |
| jplu/tf-xlm-roberta-large | config.json, tf_model.h5 |


With Transformers >= 2.4, the TensorFlow models of XLM-RoBERTa can be loaded as follows:

```python
from transformers import TFXLMRobertaModel

# Base model
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")

# Large model
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
```

Hugging Face model hub

All models are available on the Hugging Face model hub.


Thanks to the whole Hugging Face team for their support and their amazing library!
