oepen committed on
Commit
7d39a24
1 Parent(s): e69657a
Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ datasets:
 
 NorBLOOM-7b-scratch is a large Norwegian language model pretrained from scratch on a total of 260 billion subword tokens (using six repetitions of open Norwegian texts).
 
-This model is a part of the NORA-LLM family developed in collaboration between [the Language Technology Group at the University of Oslo](https://huggingface.co/ltg), [the High Performance Language Technologies (HPLT) project team](https://hplt-project.org/), [the National Library of Norway](https://huggingface.co/NbAiLab), and [the University of Turku](https://huggingface.co/TurkuNLP).
+This model is a part of the NORA-LLM family developed in collaboration between [the Language Technology Group at the University of Oslo](https://huggingface.co/ltg), [the High Performance Language Technologies (HPLT) project](https://hplt-project.org/), [the National Library of Norway](https://huggingface.co/NbAiLab), and [the University of Turku](https://huggingface.co/TurkuNLP).
 All the models are pre-trained on the same dataset and with the same tokenizer.
 NorBLOOM-7b-scratch has around 7 billion parameters and is based on [the BLOOM architecture](https://arxiv.org/abs/2211.05100).
 