---
language:
- fi
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: LumiOpen/Viking-33B
datasets:
- mpasila/Finnish-ShareGPT-Tiny-V1-1
---
This is a LoRA adapter; a merged model is available here: [mpasila/Finnish-Chatty-Tiny-V1-1-33B](https://huggingface.co/mpasila/Finnish-Chatty-Tiny-V1-1-33B).

It uses my [tiny dataset](https://huggingface.co/datasets/mpasila/Finnish-ShareGPT-Tiny-V1-1) to fine-tune this bigger variant of the Viking model family.

This LoRA uses the 1000B token checkpoint of Viking-33B.
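
To try the adapter directly on top of the base checkpoint, something like the following should work with Transformers and PEFT. This is a minimal sketch: the adapter repo id below is a placeholder for this repository, and the dtype/device settings are assumptions, not a tested recipe.

```python
# Minimal sketch: load the base model, then apply this LoRA adapter with PEFT.
# "mpasila/<this-adapter-repo>" is a placeholder for this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "LumiOpen/Viking-33B",
    torch_dtype=torch.bfloat16,  # assumption; pick what fits your hardware
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Viking-33B")

# Apply the LoRA weights on top of the base checkpoint.
model = PeftModel.from_pretrained(base_model, "mpasila/<this-adapter-repo>")
```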

It was trained for 1 epoch with a 2048-token context, LoRA rank 256, and alpha 512.
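
For reference, a rough sketch of that setup with Unsloth is below. The rank, alpha, and context length come from this card; the target modules, 4-bit loading, and dropout are assumptions, not the exact training script.

```python
# Rough sketch of the LoRA setup with Unsloth; r/alpha/context match the card,
# but target_modules, load_in_4bit, and dropout are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LumiOpen/Viking-33B",
    max_seq_length=2048,  # 2048-token context
    load_in_4bit=True,    # assumption: QLoRA-style loading
)

model = FastLanguageModel.get_peft_model(
    model,
    r=256,           # LoRA rank 256
    lora_alpha=512,  # alpha 512
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
    lora_dropout=0,
)
```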

As a proof of concept it seems to work fairly well, though I still need to generate the rest of the dataset, which should hopefully improve it a lot.
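
For a quick test of the merged model, a plain text-generation pipeline works; the Finnish prompt below is only illustrative and may not match the exact chat format used in training.

```python
# Quick generation test with the merged model; the prompt is illustrative.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mpasila/Finnish-Chatty-Tiny-V1-1-33B",
    device_map="auto",
)

prompt = "Kerro lyhyesti Suomen historiasta."  # "Briefly tell me about Finland's history."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```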

# Uploaded model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** LumiOpen/Viking-33B

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)