---
license: cc-by-nc-4.0
language:
- de
- fr
- it
- rm
- multilingual
inference: false
---
SwissBERT is a masked language model for processing Switzerland-related text. It has been trained on more than 21 million Swiss news articles retrieved from [Swissdox@LiRI](https://t.uzh.ch/1hI).
<img src="https://vamvas.ch/assets/swissbert/swissbert-diagram.png" alt="SwissBERT is a transformer encoder with language adapters in each layer. There is an adapter for each national language of Switzerland. The other parameters in the model are shared among the four languages." width="450" style="max-width: 100%;">
SwissBERT is based on [X-MOD](https://huggingface.co/facebook/xmod-base), which has been pre-trained with language adapters in 81 languages.
For SwissBERT, we trained adapters for the four national languages of Switzerland: German, French, Italian, and Romansh Grischun.
In addition, we used a Switzerland-specific subword vocabulary.
The pre-training code and usage examples are available [here](https://github.com/ZurichNLP/swissbert). We also release a version fine-tuned for named entity recognition (NER): https://huggingface.co/ZurichNLP/swissbert-ner
## Languages
SwissBERT contains the following language adapters:
| lang_id (Adapter index) | Language code | Language |
|-------------------------|---------------|-----------------------|
| 0 | `de_CH` | Swiss Standard German |
| 1 | `fr_CH` | French |
| 2 | `it_CH` | Italian |
| 3 | `rm_CH` | Romansh Grischun |
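Outside the pipeline API, an adapter is activated on the raw encoder in the same way, via `set_default_language`. A minimal sketch (the example sentence is ours):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ZurichNLP/swissbert")
model = AutoModel.from_pretrained("ZurichNLP/swissbert")

# Activate the Italian adapter; all other parameters are shared across languages
model.set_default_language("it_CH")

inputs = tokenizer("La Svizzera è un paese plurilingue.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, sequence_length, 768])
```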
## License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
## Usage (masked language modeling)
```python
from transformers import pipeline
fill_mask = pipeline(model="ZurichNLP/swissbert")
```
### German example
```python
fill_mask.model.set_default_language("de_CH")
fill_mask("Der schönste Kanton der Schweiz ist <mask>.")
```
Output:
```
[{'score': 0.1373230218887329,
  'token': 331,
  'token_str': 'Zürich',
  'sequence': 'Der schönste Kanton der Schweiz ist Zürich.'},
 {'score': 0.08464793860912323,
  'token': 5903,
  'token_str': 'Appenzell',
  'sequence': 'Der schönste Kanton der Schweiz ist Appenzell.'},
 {'score': 0.08250337839126587,
  'token': 10800,
  'token_str': 'Graubünden',
  'sequence': 'Der schönste Kanton der Schweiz ist Graubünden.'},
 ...]
```
### French example
```python
fill_mask.model.set_default_language("fr_CH")
fill_mask("Je m'appelle <mask> Federer.")
```
Output:
```
[{'score': 0.9943694472312927,
  'token': 1371,
  'token_str': 'Roger',
  'sequence': "Je m'appelle Roger Federer."},
 ...]
```
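### Switching languages
The active adapter can be changed between calls on the same pipeline. A sketch with an Italian prompt of our own (output omitted):
```python
fill_mask.model.set_default_language("it_CH")
fill_mask("La capitale della Svizzera è <mask>.")
```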
## Bias, Risks, and Limitations
- SwissBERT is mainly intended for tagging tokens in written text (e.g., named entity recognition, part-of-speech tagging), for text classification, and for encoding words, sentences, or documents into fixed-size embeddings (see the sketch after this list). SwissBERT is not designed for generating text.
- The model was adapted on written news articles and might perform worse on other domains or language varieties.
- While we have removed many author bylines, we did not anonymize the pre-training corpus. The model might have memorized information that has been described in the news but is no longer in the public interest.
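For the embedding use case mentioned above, one simple approach is mean pooling over the final hidden states. The pooling strategy is our choice for illustration; the model does not prescribe one:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ZurichNLP/swissbert")
model = AutoModel.from_pretrained("ZurichNLP/swissbert")
model.set_default_language("de_CH")

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the final hidden states into one fixed-size vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)   # zero out padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

print(embed("Zürich liegt an der Limmat.").shape)  # torch.Size([1, 768])
```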
## Training Details
- Training data: German, French, Italian and Romansh documents in the [Swissdox@LiRI](https://t.uzh.ch/1hI) database, until 2022.
- Training procedure: Masked language modeling
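For illustration, standard masked-language-modeling batches can be prepared with the Hugging Face data collator. The 15% masking rate below is the conventional default; we have not verified that it matches the exact setup used for SwissBERT, which is documented in the [pre-training code](https://github.com/ZurichNLP/swissbert):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("ZurichNLP/swissbert")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Randomly selected tokens are masked out; the labels retain the original
# ids at masked positions and -100 (ignored by the loss) elsewhere.
batch = collator([tokenizer("Die Schweiz hat vier Landessprachen.")])
print(batch["input_ids"])
print(batch["labels"])
```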
## Environmental Impact
- Hardware type: RTX 2080 Ti (8 devices).
- Hours used: 10 epochs × 18 hours × 8 devices = 1440 hours
- Site: Zurich, Switzerland.
- Energy source: 100% hydropower ([source](https://t.uzh.ch/1rU))
- Carbon efficiency: 0.0016 kg CO2e/kWh ([source](https://t.uzh.ch/1rU))
- Carbon emitted: 0.6 kg CO2e ([source](https://mlco2.github.io/impact#compute)). Assuming roughly 250 W per RTX 2080 Ti, this is consistent with the figures above: 1440 h × 0.25 kW × 0.0016 kg CO2e/kWh ≈ 0.58 kg CO2e.
## Citation
```bibtex
@article{vamvas-etal-2023-swissbert,
  title={Swiss{BERT}: The Multilingual Language Model for Switzerland},
  author={Jannis Vamvas and Johannes Gra{\"e}n and Rico Sennrich},
  year={2023},
  eprint={2303.13310},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2303.13310}
}
``` |