---
language: sv
---
# BERTSSON Models

The models are trained on:

- Government Text
- Swedish Literature
- Swedish News

Corpus size: roughly 6B tokens.
The following models are currently available:

- **bertsson** - A BERT base model trained with the same hyperparameters as first published by Google.

All models are cased and trained with whole word masking.
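Whole word masking means that when a word is chosen for masking, all of its WordPiece subtokens are masked together rather than independently. A minimal sketch of the idea, assuming WordPiece's `##` continuation convention (the tokenization and function here are illustrative, not the actual training code):

```python
import random

def whole_word_mask(tokens, mask_rate=0.15, seed=0):
    """Toy whole-word masking: pieces starting with '##' belong to the
    preceding word, so a word's pieces are always masked as a unit."""
    rng = random.Random(seed)
    # Group token indices into words (WordPiece: '##' marks a continuation).
    words, cur = [], []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and cur:
            cur.append(i)
        else:
            if cur:
                words.append(cur)
            cur = [i]
    if cur:
        words.append(cur)
    # Sample whole words, then mask every subtoken of each sampled word.
    n_mask = max(1, round(mask_rate * len(words)))
    masked = set()
    for word in rng.sample(words, n_mask):
        masked.update(word)
    return ["[MASK]" if i in masked else t for i, t in enumerate(tokens)]

# Illustrative WordPiece-style split of a Swedish sentence.
tokens = ["Stock", "##holm", "är", "Sver", "##iges", "huvud", "##stad"]
print(whole_word_mask(tokens))
```

With token-level masking, `"##holm"` could be masked while `"Stock"` stays visible, making the prediction trivial; whole word masking removes that shortcut.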
Stay tuned for evaluations.