---
base_model: Finnish-NLP/Ahma-7B
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
license: apache-2.0
language:
  - en
  - fi
datasets:
  - mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix
  - LumiOpen/instruction-collection-fin
  - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
library_name: peft
---

So this is only the 500th step (out of 3922), trained on Google Colab because I'm a little low on money, but at least that's free. While testing the LoRA, it seems to perform fairly well. The only real issue with this base model is that it only has a 2048-token context size.

The training used the ChatML format, but for some reason it seemed to work better with Mistral's formatting (which could just be because I haven't merged the LoRA into the base model yet).
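
For clarity, here's a minimal sketch of the two prompt formats being compared; the example question is just a placeholder:

```python
# ChatML-style prompt (the format the LoRA was trained on)
chatml_prompt = (
    "<|im_start|>user\n"
    "Kerro lyhyesti Suomen historiasta.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Mistral-style prompt (the format that seemed to work better in testing)
mistral_prompt = "[INST] Kerro lyhyesti Suomen historiasta. [/INST]"
```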

The dataset used was a mix of these:

- LumiOpen/instruction-collection-fin
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned

LoRA: mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B
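
Below is a minimal usage sketch, assuming the adapter loads cleanly on top of the base model with Transformers and PEFT; the generation settings are arbitrary, not the ones used in testing.

```python
# Hedged example: load the base model, attach the LoRA adapter, and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Finnish-NLP/Ahma-7B"
lora_id = "mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, lora_id)

# Mistral-style prompt, which seemed to work better than ChatML in testing.
prompt = "[INST] Kerro lyhyesti Suomen historiasta. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # keep prompt + output inside the 2048-token context
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```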

# Uploaded Ahma-SlimInstruct-LoRA-V0.1-7B model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** Finnish-NLP/Ahma-7B

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.