Tags: Text Generation · Transformers · PyTorch · English · Finnish · llama · text-generation-inference · unsloth · trl · conversational · Inference Endpoints

(Updated to the 1000th step.) This checkpoint covers only the first 1000 of 3922 training steps, trained on Google Colab's free tier since I'm a little low on money. In testing, the LoRA seems to perform fairly well. The main limitation of the base model is its 2048-token context size.
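Because of that 2048-token window, long conversations need to be truncated client-side before generation. A rough sketch of one way to do that (whitespace splitting stands in for the real tokenizer here, and the function name is mine, so the counts are only illustrative):

```python
# Hypothetical sketch: keep only as much conversation history as fits
# the base model's 2048-token context window. Whitespace splitting is
# a crude stand-in for the actual tokenizer.

MAX_CONTEXT = 2048

def trim_history(messages: list[str], budget: int = MAX_CONTEXT) -> list[str]:
    """Drop the oldest messages until the rough token count fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # walk from the newest message backwards
        cost = len(msg.split())      # crude token estimate
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order
```

In a real client you would count tokens with the model's own tokenizer instead, but the trimming logic stays the same.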

The model was trained with the ChatML format, but for some reason it seemed to work better with Mistral's prompt format (possibly just because I haven't merged the LoRA into the base model yet).
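For reference, here is a minimal sketch of the two prompt formats being compared. The special tokens follow the common ChatML and Mistral conventions and are my assumption; check the tokenizer config of this model before relying on them:

```python
# Sketch of the two prompt formats, assuming the standard ChatML
# (<|im_start|>/<|im_end|>) and Mistral ([INST]...[/INST]) conventions.

def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt (the format the LoRA was trained on)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def mistral_prompt(user: str) -> str:
    """Build a Mistral-style prompt (reported to work better in practice)."""
    return f"[INST] {user} [/INST]"

if __name__ == "__main__":
    print(chatml_prompt("Olet avulias avustaja.", "Mikä on Suomen pääkaupunki?"))
    print(mistral_prompt("Mikä on Suomen pääkaupunki?"))
```

If you load the tokenizer with `transformers`, its built-in chat template (if one is set) should take precedence over hand-built strings like these.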

The training data was a mix of these datasets:

  • LumiOpen/instruction-collection-fin
  • Gryphe/Sonnet3.5-SlimOrcaDedupCleaned

LoRA: mpasila/Ahma-SlimInstruct-LoRA-V0.1-7B

Uploaded Ahma-SlimInstruct-LoRA-V0.1-7B model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: Finnish-NLP/Ahma-7B

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.


