Release 2.0 (February 7, 2022)

Please also check out our newer models: the NorBERT 3 family, trained with an improved architecture.

NorBERT 2 was trained on a very large corpus of Norwegian (C4 + NCC, about 15 billion word tokens). It features a 50,000-word vocabulary and was trained using Whole Word Masking.

Download the model here:

  • Cased Norwegian BERT Base 2.0 (NorBERT 2): 221.zip
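A minimal usage sketch with the Hugging Face transformers library, using the Hub id ltg/norbert2 from this card (the Norwegian example sentence is purely illustrative):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Load NorBERT 2 from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("ltg/norbert2")
model = AutoModelForMaskedLM.from_pretrained("ltg/norbert2")

# The model was trained with Whole Word Masking, so masked-token
# prediction is its natural probing task.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# "Oslo is a [MASK] city."
for prediction in fill_mask(f"Oslo er en {tokenizer.mask_token} by."):
    print(prediction["token_str"], round(prediction["score"], 3))
```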

More about NorBERT training corpora, training procedure and evaluation benchmarks: http://norlm.nlpl.eu/

Associated code: https://github.com/ltgoslo/NorBERT

Check this paper for more details:

Andrey Kutuzov, Jeremy Barnes, Erik Velldal, Lilja Øvrelid, Stephan Oepen. Large-Scale Contextualised Language Modelling for Norwegian, NoDaLiDa'21 (2021)

NorBERT was trained as part of NorLM, a joint initiative coordinated by the Language Technology Group (LTG) at the University of Oslo, under the EOSC-Nordic project (European Open Science Cloud).

The computations were performed on resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway.

NorBERT-3

In 2023, we released a new family of NorBERT-3 language models for Norwegian. In general, we now recommend using these models (see the loading sketch below).

NorBERT-3 is described in detail in this paper: NorBench – A Benchmark for Norwegian Language Models (Samuel et al., NoDaLiDa 2023)
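A loading sketch, assuming the base variant is published under the Hub id ltg/norbert3-base and that its custom architecture requires trust_remote_code=True; check the NorBERT-3 collection for the exact model names and sizes:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical Hub id for the base variant of NorBERT-3
model_id = "ltg/norbert3-base"

# NorBERT-3 is assumed to use a custom architecture, so loading the
# model requires trusting the remote modelling code.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)
```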
