# Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-de-75000`

This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of a language model to compress the model size.
The following table summarizes the trimming process.

|                            | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-de-75000 |
|:---------------------------|:------------------|:------------------------------------------------|
| parameter_size_full        | 560,142,482       | 380,767,482                                     |
| parameter_size_embedding   | 256,002,048       | 76,802,048                                      |
| vocab_size                 | 250,002           | 75,002                                          |
| compression_rate_full      | 100.0             | 67.98                                           |
| compression_rate_embedding | 100.0             | 30.0                                            |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| de       | vocabtrimmer/mc4_validation | text           | de           | validation    | 75000             | 2             |
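The numbers above can be cross-checked: the embedding parameter count is simply `vocab_size` times the embedding dimension, which is 1024 for xlm-roberta-large. A minimal sketch (the variable names are illustrative, not part of the model card):

```python
# Sanity check of the model-card figures, assuming xlm-roberta-large's
# embedding dimension of 1024 (embedding parameters = vocab_size * 1024).
HIDDEN_SIZE = 1024

original_vocab, trimmed_vocab = 250_002, 75_002
original_full, trimmed_full = 560_142_482, 380_767_482

original_emb = original_vocab * HIDDEN_SIZE  # 256,002,048
trimmed_emb = trimmed_vocab * HIDDEN_SIZE    # 76,802,048

# Compression rates, expressed as percentages of the original size.
rate_emb = round(100 * trimmed_emb / original_emb, 2)     # 30.0
rate_full = round(100 * trimmed_full / original_full, 2)  # 67.98

print(original_emb, trimmed_emb, rate_emb, rate_full)
```

Note that the embedding compression (30.0%) is much stronger than the full-model compression (67.98%), since only the vocabulary-dependent layers shrink while the transformer body is untouched.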