---
language: de
license: mit
datasets: cc100
---

# xlm-roberta-base-focus-extend-german

XLM-R adapted to German using "FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models".

Code: https://github.com/konstantinjdobler/focus

Paper: https://arxiv.org/abs/2305.14481

## Usage

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-german")
model = AutoModelForMaskedLM.from_pretrained("konstantindobler/xlm-roberta-base-focus-extend-german")

# Use model and tokenizer as usual
```

## Details

The model is based on [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was adapted to German. The original multilingual tokenizer was extended with the top 30k tokens of a language-specific German tokenizer. The new embeddings were initialized with FOCUS, and the model was then trained on data from CC100 for 390k optimizer steps. More details and hyperparameters can be found [in the paper](https://arxiv.org/abs/2305.14481). A minimal sketch of the adaptation recipe is included at the end of this card.

## Disclaimer

The web-scale dataset used for pretraining and tokenizer training (CC100) might contain personal and sensitive information. This should be assessed carefully before any real-world deployment of the models.

## Citation

Please cite FOCUS as follows:

```bibtex
@misc{dobler-demelo-2023-focus,
    title={FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models},
    author={Konstantin Dobler and Gerard de Melo},
    year={2023},
    eprint={2305.14481},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
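
## Adaptation sketch

The snippet below is a minimal, illustrative sketch of the recipe described in the Details section, not the code used to build this model. It uses a short placeholder token list instead of the actual top-30k German vocabulary, extends the tokenizer with `add_tokens` (the real pipeline extends the SentencePiece vocabulary itself), resizes the embedding matrix, and marks where FOCUS replaces the default initialization of the new rows; see the linked repository for the actual implementation.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Start from the multilingual source model and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Hypothetical placeholder for the top 30k tokens of a language-specific
# German tokenizer (the real list comes from a tokenizer trained on German CC100).
german_tokens = ["▁Bundesregierung", "▁Geschwindigkeit", "▁allerdings"]
num_added = tokenizer.add_tokens(german_tokens)

# Grow the embedding matrix to cover the extended vocabulary.
# The new rows are randomly initialized by default; FOCUS
# (https://github.com/konstantinjdobler/focus) instead initializes them from
# combinations of embeddings of similar tokens in the source vocabulary
# before continued pretraining on German CC100 data.
model.resize_token_embeddings(len(tokenizer))
```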