The models are trained on:

  • Government Text
  • Swedish Literature
  • Swedish News

Corpus size: Roughly 6B tokens.

The following models are currently available:

  • bertsson - A BERT base model trained with the same hyperparameters as those originally published by Google.

All models are cased and trained with whole word masking.
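To illustrate what whole word masking means in practice, here is a minimal, self-contained sketch (not the actual training code): when a word is split into WordPiece subtokens (continuation pieces prefixed with "##"), all of its pieces are masked together rather than independently. The example tokens are hypothetical.

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, rng=None):
    """Toy whole word masking: a WordPiece continuation piece ("##...")
    is always masked together with the piece that starts its word."""
    rng = rng or random.Random(0)
    # Group token indices into words: a new word starts at any piece
    # that does not begin with "##".
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    masked = list(tokens)
    for word in words:
        # Decide once per word, then mask every piece of that word.
        if rng.random() < mask_prob:
            for i in word:
                masked[i] = "[MASK]"
    return masked

# Hypothetical input: "bert" + "##sson" form one word and are masked as a unit.
print(whole_word_mask(["han", "heter", "bert", "##sson"], mask_prob=0.5))
```

With plain token-level masking, "##sson" could be masked while "bert" stays visible, making the prediction task easier; masking per word removes that leakage.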

Stay tuned for evaluations.
