---
license: mit
---
# NB Linguistic Quality Regressor

## Introduction

This model is designed to rate the quality of Norwegian training corpora based on **linguistic quality**. It predicts a continuous score (a float from 0 to 5) assessing the linguistic quality of Norwegian texts. The model is inspired by the classifiers used in the FineWeb project and is trained mainly on Norwegian content.
## Model Architecture

It is trained on top of the [nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) model and utilizes code from [Cosmopedia](https://github.com/huggingface/cosmopedia/tree/main/classification).
## Training Data

The dataset used for training is derived from [GlotCC](https://huggingface.co/datasets/cis-lmu/GlotCC-V1) and has been annotated using Gemini 1.5 Flash.
## Purpose

The performance of large language models (LLMs) heavily depends on the quality and size of their pretraining datasets. This regressor aims to assess and enhance the linguistic quality of Norwegian textual data, contributing to better-performing Norwegian LLMs.

This model is part of a pair; the other is the [NB Education Quality Regressor](https://huggingface.co/NbAiLab/nb-education-quality-regressor), which focuses on educational content.
## Using the Model

For convenience, we also provide the `run_regressor_bert.py` script, based on `run_edu_bert.py` from Cosmopedia. You can modify this script to annotate HuggingFace datasets directly, as sketched below. Cosmopedia also provides Slurm scripts; we have not included these since we have not had the opportunity to test them.
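If the trained checkpoint follows the Cosmopedia setup, i.e. an `AutoModelForSequenceClassification` head with a single regression output, scoring an individual text could look roughly like the sketch below. Note that the repository id used here is an assumption inferred from the companion model's name, and the example text is arbitrary:

```
# Minimal scoring sketch (assumes a single-output regression head, as in the
# Cosmopedia classifiers; the model id below is an assumed placeholder).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "NbAiLab/nb-linguistic-quality-regressor"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Dette er et eksempel på en norsk tekst."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# With a single regression output, the raw logit is the predicted
# linguistic-quality score on the 0-5 scale.
print(f"Predicted linguistic quality: {logits.squeeze().item():.2f}")
```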
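To annotate a whole dataset without the script, the same model can be applied with `datasets.map`. The dataset id and the `text` column below are placeholders to adapt to your corpus; this is a sketch of the idea rather than a drop-in replacement for `run_regressor_bert.py`:

```
# Bulk-annotation sketch; the dataset id and column names are placeholders.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "NbAiLab/nb-linguistic-quality-regressor"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

dataset = load_dataset("user/my-norwegian-corpus", split="train")  # placeholder

def add_score(batch):
    inputs = tokenizer(batch["text"], padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    batch["linguistic_score"] = logits.squeeze(-1).tolist()
    return batch

dataset = dataset.map(add_score, batched=True, batch_size=32)
```

The resulting scores can then be used to filter or weight documents when assembling a pretraining corpus.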
## Training and Evaluation Procedure

The following command was used for training. Please note that `train_regressor_bert.py` has a few minor changes compared to the original `train_edu_bert.py`:
```
python train_regressor_bert.py --base_model_name="NbAiLab/nb-bert-base" --dataset_name="user/linguistic-annotations" --target_column="score" --checkpoint_dir="/home/pere/checkpoints/scandinavian_bert/"
```

The following command was used for evaluation:
```
python eval_regressor_bert.py --checkpoint_dir="/user/pere/checkpoints/scandinavian_bert/final/" --dataset_name="user/linguistic-annotations"
```
## Classification Report