
Training procedure

We finetuned mistralai/Mistral-7B-v0.1 on the databricks/databricks-dolly-15k dataset for 1 epoch using the MonsterAPI no-code LLM finetuner.

Finetuning with the MonsterAPI no-code LLM Finetuner in 4 easy steps:

  1. Select an LLM: Mistral 7B v0.1
  2. Select a task and Dataset: Instruction Finetuning and databricks-dolly-15k Dataset
  3. Specify hyperparameters: We used the default values suggested by the finetuner
  4. Review and submit the job: That's it!

Hyperparameters & Run details (a configuration sketch using these values follows the list):

  • Model: mistralai/Mistral-7B-v0.1
  • Dataset: databricks/databricks-dolly-15k
  • Learning rate: 0.0002
  • Number of epochs: 1
  • Cutoff length: 512
  • Data split: Training: 95% / Validation: 5%
  • Gradient accumulation steps: 1
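
For reference, here is a minimal sketch of an approximately equivalent run using the transformers, peft, and datasets libraries. The learning rate, epoch count, cutoff length, data split, and gradient accumulation steps match the values listed above; the LoRA settings (r, alpha, dropout, target modules), the per-device batch size, and the prompt template are assumptions, since the card does not specify them.

```python
# Hedged sketch of an equivalent LoRA finetuning run (not MonsterAPI's exact pipeline).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Assumed LoRA configuration -- the card does not state the adapter settings.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# 95% train / 5% validation split, as listed above.
data = load_dataset("databricks/databricks-dolly-15k", split="train").train_test_split(test_size=0.05)

def to_text(ex):
    # Assumed instruction/response template; MonsterAPI's exact prompt format is not documented here.
    ctx = f"\n{ex['context']}" if ex["context"] else ""
    return {"text": f"Instruction:\n{ex['instruction']}{ctx}\n\nResponse:\n{ex['response']}"}

def tokenize(ex):
    # Cutoff length 512, as listed above.
    return tokenizer(ex["text"], truncation=True, max_length=512)

cols = data["train"].column_names + ["text"]
train = data["train"].map(to_text).map(tokenize, remove_columns=cols)
val = data["test"].map(to_text).map(tokenize, remove_columns=cols)

args = TrainingArguments(output_dir="mistral-dolly-lora",
                         learning_rate=2e-4,             # 0.0002, as listed above
                         num_train_epochs=1,
                         gradient_accumulation_steps=1,
                         per_device_train_batch_size=4,  # assumption: not stated in the card
                         fp16=True)

trainer = Trainer(model=model, args=args, train_dataset=train, eval_dataset=val,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```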

About Model:

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. According to the Mistral team's evaluations, Mistral-7B-v0.1 outperforms Llama 2 13B on the majority of benchmarks.

About Dataset:

databricks-dolly-15k is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT.
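
To get a feel for the records, the snippet below loads the dataset with the Hugging Face datasets library; the field names (instruction, context, response, category) are those published with databricks-dolly-15k on the Hub.

```python
# Peek at databricks-dolly-15k.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
print(len(dolly))              # roughly 15,000 instruction-following records

ex = dolly[0]
print(ex["instruction"])       # the task the model is asked to perform
print(ex["context"])           # optional supporting passage (may be empty)
print(ex["response"])          # the human-written answer
print(ex["category"])          # e.g. open_qa, brainstorming, summarization
```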

Framework versions

  • PEFT 0.5.0
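
As a usage note, the sketch below loads this LoRA adapter on top of the base checkpoint with PEFT; the adapter repo name monsterapi/Mistral-7B-v0.1-Dolly-15k is the one this card describes, and the prompt layout mirrors the template assumed in the training sketch above.

```python
# Hedged inference sketch: base model + this LoRA adapter via PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1",
                                            torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "monsterapi/Mistral-7B-v0.1-Dolly-15k")

# Assumed prompt template (same layout as the training sketch above).
prompt = "Instruction:\nExplain what instruction finetuning is in one sentence.\n\nResponse:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```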