traintogpb committed
Commit
1b0fdc6
1 Parent(s): b32f244

Create README.md

Files changed (1)
  1. README.md +32 -0
README.md ADDED
@@ -0,0 +1,32 @@
+ ---
+ datasets:
+ - klue
+ - wikipedia
+ language:
+ - ko
+ metrics:
+ - accuracy
+ training_args:
+ - num_train_epochs=5
+ - per_device_train_batch_size=16
+ - per_device_eval_batch_size=16
+ - prediction_loss_only=False
+ - learning_rate=5e-5
+ - logging_strategy='steps'
+ - logging_steps=100
+ - save_steps=1000
+ - eval_steps=1000
+ - save_strategy="steps"
+ - evaluation_strategy="steps"
+ - load_best_model_at_end=True
+ - metric_for_best_model="masked_accuracy"
+ - greater_is_better=True
+ - seed=42
+ - warmup_steps=5000
+ info:
+ - MLM (15%) from the checkpoint of klue/roberta-large
+ - LineByLineTextDataset (block_size 384)
+ - PLM for an ODQA task based on Wikipedia questions
+ - Accuracy (for [MASK]) = 0.7066 (CE loss 1.388)
+ - v2 is trained with a smaller learning rate and more epochs
+ ---