osma committed
Commit c317c13 · verified · 1 Parent(s): 9240861

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -81,11 +81,11 @@ See the [demo notebook](https://github.com/NatLibFi/FinGreyLit/blob/main/experim
 
 ### Training Data
 
-The model was fine-tuned on 520 training documents from the [FinGreyLit](https://github.com/NatLibFi/FinGreyLit) data set.
+The model was fine-tuned on 620 training documents from the [FinGreyLit](https://github.com/NatLibFi/FinGreyLit) data set.
 
 ### Training Procedure
 
-The model was fine-tuned in the [University of Helsinki HPC environment](https://helpdesk.it.helsinki.fi/en/services/scientific-computing-services-hpc) on a single A100 GPU using the Axolotl tool and the QLoRA method.
+The model was fine-tuned in the [University of Helsinki HPC environment](https://helpdesk.it.helsinki.fi/en/services/scientific-computing-services-hpc) on a single A100 GPU using the Axolotl tool and the LoRA method.
 See the [notebook used for training](https://github.com/NatLibFi/FinGreyLit/blob/main/experiments/axolotl-finetune-llm/Axolotl-fine-tune-Qwen2-0.5B.ipynb) for details such as hyperparameters.
 
 ## Evaluation
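
The diff above corrects the adapter method from QLoRA to LoRA. For context, Axolotl runs are driven by a YAML configuration file; the sketch below is only an illustrative example of what a LoRA fine-tuning config for Qwen2-0.5B might look like. The base model name is inferred from the linked notebook's filename, while the rank, batch sizes, learning rate, dataset path, and dataset type shown here are assumptions rather than values from that notebook — see the notebook itself for the actual hyperparameters.

```yaml
# Illustrative Axolotl LoRA config sketch (assumed values, not taken from the notebook)
base_model: Qwen/Qwen2-0.5B         # inferred from the training notebook's filename

adapter: lora                       # plain LoRA, matching the corrected README text
lora_r: 16                          # assumed adapter rank
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true            # apply LoRA to all linear layers

datasets:
  - path: train.jsonl               # hypothetical path to the FinGreyLit training split
    type: chat_template             # assumed dataset format

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
output_dir: ./qwen2-0.5b-lora
```

A config like this is typically launched with `accelerate launch -m axolotl.cli.train <config>.yml`.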