Core implementation of Jina XLM-RoBERTa
This implementation is adapted from XLM-RoBERTa. In contrast to the original implementation, this model uses rotary positional embeddings (RoPE) instead of absolute position embeddings and supports FlashAttention 2.
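To illustrate the positional-encoding difference, here is a minimal, self-contained sketch of rotary embeddings applied to a per-head activation. This is a generic RoPE illustration, not the code from this implementation; the function name and the interleaving convention (splitting the head dimension in two halves) are assumptions for the example.

```python
import torch


def apply_rotary(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate channel pairs of x by position-dependent angles (generic RoPE sketch).

    x: tensor of shape (seq_len, head_dim); head_dim must be even.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # One frequency per channel pair, geometrically spaced as in the RoPE paper.
    inv_freq = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    pos = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(pos, inv_freq)  # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    # Treat (x1[i], x2[i]) as a 2-D point and rotate it by the angle for this position.
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

Because each pair is rotated rather than shifted, the encoding preserves vector norms, and the dot product between rotated queries and keys depends only on their relative positions, which is what lets RoPE replace the absolute position embeddings of the original XLM-RoBERTa.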
Models that use this implementation
Converting weights
Weights from an original XLM-RoBERTa model can be converted with the convert_roberta_weights_to_flash.py script in the model repository.
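Checkpoint conversion of this kind typically amounts to renaming state-dict keys from the original module layout to the new one. The sketch below shows the general pattern only; the actual key mapping lives in convert_roberta_weights_to_flash.py, and the example key names in the test are hypothetical.

```python
import torch


def rename_keys(state_dict: dict, mapping: dict) -> dict:
    """Return a new state dict with each key rewritten via substring mapping.

    Generic illustration of checkpoint-key conversion; the real conversion
    script may also reshape or merge tensors, which this sketch does not do.
    """
    out = {}
    for key, tensor in state_dict.items():
        new_key = key
        for old, new in mapping.items():
            new_key = new_key.replace(old, new)
        out[new_key] = tensor
    return out
```

After renaming, the converted dict can be loaded into the target model with `model.load_state_dict(...)`, which will report any keys that remain unmatched.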