- 🇳🇴 MykMaks/NorwegianDataset-compressed
- 🇩🇰 MykMaks/fmsudgivelser
- 💾 Training code
  - Approach: we trained every epoch with a different prompt, stored the adapter as a checkpoint, and continued to the next prompt-dataset pair.
  - MM checkpoints: https://github.com/Mikeriess/llama33_resources/tree/MM-models
  - V-I checkpoints: https://github.com/Mikeriess/llama33_resources/tree/v-i-models
- 🤗 Model LoRA-adapter checkpoints for Llama-3.2-11B-Vision-Instruct
  - The model is iteratively trained over all datasets.
  - The suffix of each file denotes the order of the checkpoint, along with the dataset it was fine-tuned on.
  - Prompts can be tracked in the respective experiment.json files in the MM and V-I code repositories.
- Final merged model:
  - <b>Llama-3.2-11B-Vision-Instruct-MykMaks</b>
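The iterative training schedule described above can be sketched as a small loop: each step pairs one prompt with one dataset, and the saved adapter's filename suffix encodes the checkpoint order plus the dataset it was tuned on. This is a hypothetical sketch, not the repository's actual code; the function names, prompts, and naming scheme are illustrative assumptions (the real training calls live in the MM and V-I repositories linked above).

```python
def checkpoint_name(step: int, dataset: str) -> str:
    """Suffix encodes the checkpoint order and the dataset it was tuned on.

    Naming scheme is an assumption for illustration, not the repo's actual format.
    """
    return f"adapter_{step:02d}_{dataset.replace('/', '-')}"


def run_schedule(pairs):
    """Iterate over (prompt, dataset) pairs, one epoch each, saving a checkpoint per step.

    `pairs` is a list of (prompt, dataset_id) tuples; returns checkpoint names in order.
    """
    checkpoints = []
    for step, (prompt, dataset) in enumerate(pairs, start=1):
        # train_one_epoch(model, dataset, prompt)  # placeholder for the real fine-tuning call
        checkpoints.append(checkpoint_name(step, dataset))
    return checkpoints


# Example prompts are invented; only the dataset IDs come from the list above.
schedule = [
    ("Describe the image in Norwegian.", "MykMaks/NorwegianDataset-compressed"),
    ("Summarise the Danish publication.", "MykMaks/fmsudgivelser"),
]
print(run_schedule(schedule))
# → ['adapter_01_MykMaks-NorwegianDataset-compressed', 'adapter_02_MykMaks-fmsudgivelser']
```

After the final pair, the last adapter would be merged into the base model to produce the released <b>Llama-3.2-11B-Vision-Instruct-MykMaks</b> weights.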