Update README.md
### Training Data

The model was fine-tuned on 620 training documents from the [FinGreyLit](https://github.com/NatLibFi/FinGreyLit) data set.
### Training Procedure

The model was fine-tuned in the [University of Helsinki HPC environment](https://helpdesk.it.helsinki.fi/en/services/scientific-computing-services-hpc) on a single A100 GPU using the Axolotl tool and the LoRA method.

See the [notebook used for training](https://github.com/NatLibFi/FinGreyLit/blob/main/experiments/axolotl-finetune-llm/Axolotl-fine-tune-Qwen2-0.5B.ipynb) for details such as hyperparameters.
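To illustrate the setup, an Axolotl LoRA fine-tuning run of this kind is driven by a YAML configuration roughly like the sketch below. All values here are placeholders for illustration only — the dataset path, chat format, and every hyperparameter are assumptions; the actual settings used are in the training notebook linked above.

```yaml
# Illustrative Axolotl config for LoRA fine-tuning of a Qwen2-0.5B base model.
# Values are NOT the ones used for this model; see the training notebook.
base_model: Qwen/Qwen2-0.5B-Instruct

adapter: lora             # parameter-efficient fine-tuning via LoRA
lora_r: 32                # rank of the low-rank update matrices
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true  # attach LoRA adapters to all linear layers

datasets:
  - path: train.jsonl     # hypothetical export of the FinGreyLit training documents
    type: chat_template

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2.0e-4
optimizer: adamw_torch
lr_scheduler: cosine
```

With a config like this, training is launched with `accelerate launch -m axolotl.cli.train config.yml`; the LoRA adapter weights it produces are then merged into (or loaded alongside) the base model for inference.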
## Evaluation