Tags: PEFT, code, instruct, mistral

Finetuning Overview:

Model Used: mistralai/Mistral-7B-v0.1

Dataset: HuggingFaceH4/no_robots

Dataset Insights:

No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
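A minimal sketch of loading this dataset with the `datasets` library; the split and column names (a `train` split with a chat-style `messages` column) reflect the dataset as published on the Hub and should be verified against the current version.

```python
from datasets import load_dataset

# Pull the SFT dataset referenced above.
dataset = load_dataset("HuggingFaceH4/no_robots")
print(dataset)                    # shows the available splits and columns
example = dataset["train"][0]     # assumes a "train" split
print(example["messages"])        # assumes chat turns of {"role", "content"}
```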

Finetuning Details:

Using MonsterAPI's no-code LLM finetuner, this finetuning:

  • Was achieved cost-effectively, with no training code required.
  • Completed in a total of 1h 15m 3s for 2 epochs on a single A6000 48GB GPU.
  • Cost $2.525 for the entire 2 epochs.

Hyperparameters & Additional Details:

  • Epochs: 2
  • Cost Per Epoch: $1.26
  • Total Finetuning Cost: $2.525
  • Model Path: mistralai/Mistral-7B-v0.1
  • Learning Rate: 0.0002
  • Data Split: 100% train
  • Gradient Accumulation Steps: 64
  • LoRA r: 64
  • LoRA alpha: 16
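
The hyperparameters above map roughly onto a standard PEFT LoRA + Hugging Face training setup. The sketch below is illustrative only: MonsterAPI does not publish its internal settings, so the target modules, dropout, and per-device batch size are assumptions, not reported values.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings mirroring the card: r=64, alpha=16.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,                                        # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    bias="none",
    task_type="CAUSAL_LM",
)

# Trainer arguments mirroring the card: lr=2e-4, 2 epochs,
# gradient accumulation of 64.
training_args = TrainingArguments(
    output_dir="mistral_7b_norobots_lora",
    num_train_epochs=2,
    learning_rate=2e-4,
    gradient_accumulation_steps=64,
    per_device_train_batch_size=1,                            # assumed
    logging_steps=10,
)
```

These objects would then typically be handed to a trainer such as TRL's SFTTrainer together with the base model and the formatted dataset.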

Prompt Structure:

<|system|> </s> <|user|> [USER PROMPT] </s> <|assistant|> [ASSISTANT ANSWER] </s>
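
A hedged sketch of running inference with this prompt structure using `transformers` and `peft`. The adapter repo id `monsterapi/mistral_7b_norobots` is taken from this page; the example prompt and generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "monsterapi/mistral_7b_norobots"   # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Assemble a prompt following the structure above, leaving the
# assistant turn open for the model to complete.
user_prompt = "Write a short haiku about debugging."   # illustrative
prompt = f"<|system|> </s> <|user|> {user_prompt} </s> <|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```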

Train loss:

[Figure: training loss curve]

Benchmarking results:

[Figure: benchmark results]


License: apache-2.0


Adapter repository: monsterapi/mistral_7b_norobots (LoRA adapter for mistralai/Mistral-7B-v0.1)