---
library_name: transformers
tags:
- unsloth
- trl
- sft
language:
- en
- fi
- 'no'
- nb
- da
- sv
- is
datasets:
- mpasila/Viking-Instruct-Mix
- saillab/alpaca-icelandic-cleaned
- kobprof/skolegpt-instruct
- tollefj/nor-instruct-cleaned
- skvarre/sv-instruct-v1
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
- LumiOpen/instruction-collection-fin
- neph1/Alpaca-Lora-GPT4-Swedish-Refined
license: apache-2.0
---
Trained on all six languages (English, Finnish, Norwegian, Danish, Swedish, and Icelandic), so it should hopefully be useful for all of them, though the quality of the datasets likely varies a lot.

Uses the ChatML prompt format.
LoRA: mpasila/Viking-SlimInstruct-LoRA-V1-7B
Uses the following datasets:
- saillab/alpaca-icelandic-cleaned
- kobprof/skolegpt-instruct
- tollefj/nor-instruct-cleaned
- skvarre/sv-instruct-v1
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
- LumiOpen/instruction-collection-fin
- neph1/Alpaca-Lora-GPT4-Swedish-Refined
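Since the model uses ChatML, prompts wrap each turn in `<|im_start|>role ... <|im_end|>` markers. A minimal sketch of the format (the helper name `build_chatml` is illustrative; in practice `tokenizer.apply_chat_template` from transformers handles this):

```python
def build_chatml(messages):
    """Format a list of {"role", "content"} dicts as a ChatML prompt."""
    parts = []
    for m in messages:
        # Each turn: <|im_start|>role\ncontent<|im_end|>
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open the assistant turn so the model continues from here
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml([{"role": "user", "content": "Hei, mitä kuuluu?"}])
print(prompt)
```

With a loaded tokenizer, the equivalent is `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`.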
Uploaded model: Viking-SlimInstruct-V1-7B
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model: LumiOpen/Viking-7B

This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.