---
library_name: peft
tags:
- code
- instruct
- falcon
datasets:
- garage-bAInd/Open-Platypus
base_model: tiiuae/falcon-7b
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** tiiuae/falcon-7b 

**Dataset:** garage-bAInd/Open-Platypus  

#### Dataset Insights:

The [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset is focused on improving LLM logical reasoning skills and was used to train the Platypus2 models. It comprises several open datasets that were filtered using keyword search and then Sentence Transformers to remove questions with a similarity above 80%.
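
As an illustration, that near-duplicate filtering step can be sketched with the `sentence-transformers` library — a minimal sketch assuming cosine similarity and an off-the-shelf MiniLM encoder; the dataset authors' exact encoder and pipeline may differ:

```python
# Minimal sketch of similarity-based deduplication (assumptions: cosine
# similarity with the all-MiniLM-L6-v2 encoder; the dataset authors'
# exact encoder and pipeline are not specified here).
from sentence_transformers import SentenceTransformer, util

questions = [
    "What is the derivative of x^2?",
    "Compute the derivative of x squared.",
    "What is the capital of France?",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(questions, convert_to_tensor=True)

kept = []
for i in range(len(questions)):
    # Keep a question only if it is <= 80% similar to every question kept so far.
    if all(util.cos_sim(embeddings[i], embeddings[j]).item() <= 0.80 for j in kept):
        kept.append(i)

print([questions[i] for i in kept])  # the near-duplicate derivative question is dropped
```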

#### Finetuning Details:

With the utilization of [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:

- Was achieved with notable cost-effectiveness.
- Completed in a total duration of 1h 39m 17s for 1 epoch using a single A6000 48GB GPU.
- Cost `$3.33` for the entire epoch.

#### Hyperparameters & Additional Details:

- **Epochs:** 1
- **Cost Per Epoch:** $3.33
- **Total Finetuning Cost:** $3.33
- **Model Path:** tiiuae/falcon-7b
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 4
- **LoRA r:** 32
- **LoRA alpha:** 64 (see the configuration sketch below)
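
The settings above map onto a PEFT LoRA setup roughly as follows — a minimal sketch, not MonsterAPI's actual pipeline; the target modules, dropout, and batch size are illustrative assumptions, while r, alpha, learning rate, epochs, and gradient accumulation come from the list above:

```python
# Rough PEFT/LoRA setup matching the listed hyperparameters
# (assumptions: target modules, dropout, and batch size are illustrative;
# only r, alpha, learning rate, epochs, and grad accumulation are from the card).
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")

lora_config = LoraConfig(
    r=32,                                # LoRA r (from the card)
    lora_alpha=64,                       # LoRA alpha (from the card)
    lora_dropout=0.05,                   # assumption
    target_modules=["query_key_value"],  # assumption: Falcon attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="falcon-7b-open-platypus-lora",
    num_train_epochs=1,              # from the card
    learning_rate=2e-4,              # from the card
    gradient_accumulation_steps=4,   # from the card
    per_device_train_batch_size=4,   # assumption
)
```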

#### Train loss:

![training loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/u-ez_dJwMI8_e1dQqRP3U.png)
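
For completeness, a LoRA adapter produced by a run like this can be loaded on top of the base model with PEFT — a minimal inference sketch, where the adapter repo id is a hypothetical placeholder and the Alpaca-style prompt format is an assumption:

```python
# Minimal inference sketch (assumptions: ADAPTER_REPO is a hypothetical
# placeholder for the actual adapter repo; the prompt format is assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "tiiuae/falcon-7b"
ADAPTER_REPO = "your-org/falcon-7b-open-platypus"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)

prompt = "### Instruction:\nExplain gradient accumulation in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```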
