---
library_name: peft
tags:
- llama1-7b
- code
- instruct
- alpaca-instruct
- alpaca
- llama7b
datasets:
- tatsu-lab/alpaca
base_model: decapoda-research/llama-7b-hf
license: apache-2.0
---

We finetuned huggyllama/llama-7b on the tatsu-lab/alpaca dataset for 5 epochs (~25,000 steps) using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

The dataset is an unfiltered version of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.
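
For reference, records in tatsu-lab/alpaca are rendered into the standard Alpaca prompt template before training. Below is a minimal sketch of that template in Python; `build_prompt` is a hypothetical helper for illustration, not part of any library:

```python
# Standard Alpaca prompt templates (with and without an input field).
# `build_prompt` is a hypothetical helper for illustration only.
TEMPLATE_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)
TEMPLATE_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return TEMPLATE_WITH_INPUT.format(instruction=instruction, input=input_text)
    return TEMPLATE_NO_INPUT.format(instruction=instruction)
```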

The finetuning completed in 4 hours and cost us only `$16` for the entire run!
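
Since this repository ships a PEFT adapter rather than full model weights, inference loads the base model first and then attaches the adapter. A minimal sketch with transformers and peft, assuming the placeholder adapter id below is replaced with this repo's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"   # base model named in this card
adapter_id = "<this-repo-id>"     # placeholder: replace with this adapter repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the finetuned adapter

prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction finetuning is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```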

#### Hyperparameters & Run details:
- Model Path: huggyllama/llama-7b
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: 90% training / 10% validation
- Gradient accumulation steps: 1
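
As a rough guide, the settings above would map onto a plain peft + transformers setup as sketched below. This is not MonsterAPI's internal configuration; the LoRA rank, alpha, dropout, target modules, and batch size are assumed values that this card does not state:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Assumed LoRA settings; the card does not state them.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="llama-7b-alpaca-lora",  # hypothetical output path
    learning_rate=3e-4,                 # 0.0003, as listed above
    num_train_epochs=5,                 # as listed above
    gradient_accumulation_steps=1,      # as listed above
    per_device_train_batch_size=4,      # assumed; not stated in the card
    evaluation_strategy="epoch",        # evaluates on the 10% validation split
)
```

Feeding these into `transformers.Trainer` with a 90/10 split of tatsu-lab/alpaca reproduces the shape of the run, though not necessarily MonsterAPI's exact results.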
