
Contributed by Julien Plu (jplu)
How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, TFAutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-large")
model = TFAutoModelWithLMHead.from_pretrained("jplu/tf-xlm-roberta-large")
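Since this is a fill-mask model, a quick way to try it is through the fill-mask pipeline. The sketch below assumes a recent transformers version with TensorFlow installed; the predictions are whatever the model returns, not guaranteed outputs.

```python
from transformers import pipeline

# Sketch (not the card's official example): query the model through the
# fill-mask pipeline, loading the TensorFlow checkpoint (tf_model.h5).
fill_mask = pipeline(
    "fill-mask",
    model="jplu/tf-xlm-roberta-large",
    framework="tf",
)

# XLM-RoBERTa uses <mask> as its mask token.
predictions = fill_mask("Paris is the <mask> of France.")
for p in predictions:
    print(p["token_str"], p["score"])
```

Each prediction is a dict with the filled-in token (`token_str`) and its probability (`score`), sorted from most to least likely.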

TensorFlow XLM-RoBERTa

In this repository you will find different versions of the XLM-RoBERTa model for TensorFlow.

XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5 TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.

Model Weights

Model                       Downloads
jplu/tf-xlm-roberta-base    config.json • tf_model.h5
jplu/tf-xlm-roberta-large   config.json • tf_model.h5


With Transformers >= 2.4, the TensorFlow models of XLM-RoBERTa can be loaded as follows:

from transformers import TFXLMRobertaModel

model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")


model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-large")
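Once loaded, the model acts as a sentence encoder: it maps tokenized text to contextual hidden states. A minimal sketch, assuming TensorFlow and a transformers version that exposes `TFXLMRobertaModel`:

```python
from transformers import AutoTokenizer, TFXLMRobertaModel

# Sketch: encode a sentence with the base model and inspect the
# contextual embeddings it produces.
tokenizer = AutoTokenizer.from_pretrained("jplu/tf-xlm-roberta-base")
model = TFXLMRobertaModel.from_pretrained("jplu/tf-xlm-roberta-base")

inputs = tokenizer("Hello world!", return_tensors="tf")
outputs = model(inputs)

# The first output is the last hidden state,
# shaped (batch, seq_len, hidden_size); hidden_size is 768 for the base model.
last_hidden = outputs[0]
print(last_hidden.shape)
```

The large model works the same way, with a hidden size of 1024 instead of 768.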

Hugging Face model hub

All models are available on the Hugging Face model hub.


Thanks to the Hugging Face team for their support and their amazing library!