---
datasets:
- ewof/code-alpaca-instruct-unfiltered
library_name: peft
license: apache-2.0
tags:
- gpt-j
- gpt-j-6b
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
---

We fine-tuned GPT-J 6B on the Code-Alpaca-Instruct dataset (ewof/code-alpaca-instruct-unfiltered) for 5 epochs (~25,000 steps) using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

The dataset is an unfiltered version of HuggingFaceH4/CodeAlpaca_20K, with 36 instances of blatant alignment removed.
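
Records in Code-Alpaca follow the standard Alpaca instruction/input/output schema. A minimal prompt-formatting sketch is below; whether the finetuner used exactly this template is an assumption, not something confirmed by the run:

```python
# Hedged sketch: standard Alpaca-style prompt template (assumed, not confirmed).
def format_example(example: dict) -> str:
    """Render one Code-Alpaca record into a single training/inference prompt."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n" + example["output"]
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:\n" + example["output"]
    )
```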

The finetuning completed in 206 minutes and cost only `$8` for the entire run!

#### Hyperparameters & Run details:
- Model Path: EleutherAI/gpt-j-6b
- Dataset: ewof/code-alpaca-instruct-unfiltered
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
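
Since the card lists `library_name: peft`, the released weights are a PEFT (LoRA-style) adapter applied on top of the base model. A minimal inference sketch follows, assuming the standard PEFT adapter layout; the adapter path below is a placeholder, not an actual repo id:

```python
# Hedged sketch: load the base GPT-J model and apply the finetuned adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "EleutherAI/gpt-j-6b"
ADAPTER_PATH = "path/to/this-adapter"  # placeholder: substitute the real adapter location

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER_PATH)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For best results, wrap the prompt in the same instruction template used during training (see the formatting sketch above).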

Loss metrics:
![training loss](train-loss.png "Training loss")
