This model was continue-pretrained from nb-roberta-base.

The domain-specific pretraining was done on the 102 GB [Scandinavian corpus](https://huggingface.co/datasets/NbAiLab/scandinavian).
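
For reference, the corpus can be streamed with 🤗 Datasets rather than downloaded in full, which is what the `run_mlm_flax_stream.py` commands below rely on. A minimal sketch, assuming the `datasets` library and that you are logged in with access to the dataset:

```python
from datasets import load_dataset

# Stream the 102 GB corpus instead of materializing it on disk.
# The dataset may require authentication (run `huggingface-cli login` first).
dataset = load_dataset(
    "NbAiLab/scandinavian",
    split="train",
    streaming=True,
    use_auth_token=True,
)

# Peek at the first example without downloading the whole corpus.
print(next(iter(dataset)))
```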

## Train for 180k steps with sequence length 128:
```bash
./run_mlm_flax_stream.py \
    --output_dir="./" \
    --model_type="roberta" \
    --config_name="./" \
    --tokenizer_name="./" \
    --model_name_or_path="./" \
    --dataset_name="NbAiLab/scandinavian" \
    --max_seq_length="128" \
    --weight_decay="0.01" \
    --per_device_train_batch_size="128" \
    --per_device_eval_batch_size="128" \
    --learning_rate="6e-5" \
    --warmup_steps="5000" \
    --overwrite_output_dir \
    --cache_dir /mnt/disks/flaxdisk/cache/ \
    --num_train_steps="180000" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --logging_steps="10000" \
    --save_steps="10000" \
    --eval_steps="10000" \
    --preprocessing_num_workers 96 \
    --auth_token True \
    --adafactor \
    --push_to_hub
```
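
For context, the flags above request a 5000-step warmup to a peak learning rate of 6e-5 and the Adafactor optimizer (`--adafactor`; the `--adam_beta*` and `--weight_decay` flags only apply on the AdamW path). A minimal optax sketch of the schedule, assuming the script builds it the same way as the upstream `run_mlm_flax.py` example (linear warmup followed by linear decay):

```python
import optax

# Linear warmup from 0 to the peak learning rate over --warmup_steps.
warmup_fn = optax.linear_schedule(
    init_value=0.0, end_value=6e-5, transition_steps=5_000
)
# Linear decay back to 0 over the remaining steps of --num_train_steps.
decay_fn = optax.linear_schedule(
    init_value=6e-5, end_value=0.0, transition_steps=180_000 - 5_000
)
schedule = optax.join_schedules(
    schedules=[warmup_fn, decay_fn], boundaries=[5_000]
)

# With --adafactor set, the optimizer is Adafactor rather than AdamW.
optimizer = optax.adafactor(learning_rate=schedule)
```

The second phase below then continues from this 128-token checkpoint (`--model_name_or_path="./"`) at the full 512-token context, a common recipe for MLM pretraining: spend most steps on short sequences, then a final short run at full length.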
## Train for 20k steps with sequence length 512:
```bash
./run_mlm_flax_stream.py \
    --output_dir="./" \
    --model_type="roberta" \
    --config_name="./" \
    --tokenizer_name="./" \
    --model_name_or_path="./" \
    --dataset_name="NbAiLab/scandinavian" \
    --max_seq_length="512" \
    --weight_decay="0.01" \
    --per_device_train_batch_size="48" \
    --per_device_eval_batch_size="48" \
    --learning_rate="3e-5" \
    --warmup_steps="5000" \
    --overwrite_output_dir \
    --cache_dir /mnt/disks/flaxdisk/cache/ \
    --num_train_steps="20000" \
    --adam_beta1="0.9" \
    --adam_beta2="0.98" \
    --logging_steps="20000" \
    --save_steps="10000" \
    --eval_steps="10000" \
    --preprocessing_num_workers 96 \
    --auth_token True \
    --adafactor \
    --push_to_hub
```
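
Once trained and pushed to the Hub, the checkpoint can be used for masked-language-model inference. A minimal sketch; the model identifier below is a placeholder for wherever this checkpoint is published:

```python
from transformers import pipeline

# Placeholder identifier: substitute the actual Hub repo of this checkpoint.
fill_mask = pipeline("fill-mask", model="NbAiLab/nb-roberta-base-scandinavian")

# RoBERTa-style models use <mask> as the mask token.
# Norwegian: "At the library you can borrow a <mask>."
for prediction in fill_mask("På biblioteket kan du låne en <mask>."):
    print(prediction["token_str"], prediction["score"])
```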

Approximate additional training time: 1 week.