souvik0306 committed
Commit a35dad5
1 Parent(s): 4390f4c

Update README.md

Files changed (1)
  1. README.md +47 -4
README.md CHANGED
@@ -1,9 +1,52 @@
  ---
- library_name: peft
  ---
- ## Training procedure

- ### Framework versions


- - PEFT 0.5.0
  ---
+ library_name: transformers
+ tags:
+ - code
+ - instruct
+ - llama2
+ datasets:
+ - Zangs3011/no_robots_FalconChatFormated
+ base_model: llama/Llama-2-7b-hf
+ license: apache-2.0
  ---
 
+ ### Finetuning Overview:

+ **Model Used:** llama/Llama-2-7b-hf
+
+ **Dataset:** Zangs3011/no_robots_FalconChatFormated

+ #### Dataset Insights:
+
+ The WizardLM/WizardLM_evol_instruct_70k dataset, tailored specifically for enhancing interactive capabilities, was built with the Evol-Instruct method, which takes a smaller seed dataset and rewrites its instructions into progressively tougher questions for the LLM to handle.
+
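+ As a quick sanity check, the dataset named in the metadata above can be loaded and inspected with the `datasets` library. This is only an illustrative sketch: the card does not document the dataset's column names, so the example prints a raw record instead of assuming a schema.
+
+ ```python
+ # Illustrative sketch (not part of the MonsterAPI pipeline): load the finetuning
+ # dataset and look at one raw record. Assumes the default "train" split exists.
+ from datasets import load_dataset
+
+ ds = load_dataset("Zangs3011/no_robots_FalconChatFormated")
+ print(ds)              # splits and row counts
+ print(ds["train"][0])  # first raw example, whatever its fields are
+ ```
+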
+ #### Finetuning Details:
+
+ Using [MonsterAPI](https://monsterapi.ai)'s [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), this finetuning:
+
+ - Was completed cost-effectively.
+ - Took 39 minutes 4 seconds for 1 epoch on an A6000 48GB GPU.
+ - Cost `$1.313` for the entire epoch.
+
+ #### Hyperparameters & Additional Details:
+
+ - **Epochs:** 1
+ - **Cost Per Epoch:** $1.313
+ - **Total Finetuning Cost:** $1.313
+ - **Model Path:** llama/Llama-2-7b-hf
+ - **Learning Rate:** 0.0002
+ - **Data Split:** 99% train, 1% validation
+ - **Gradient Accumulation Steps:** 4
+
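+ As a rough, hedged illustration of how these settings might look in a standard Hugging Face training script (MonsterAPI's managed finetuner is not open for inspection, so the batch size and output directory below are assumptions, not values from this card):
+
+ ```python
+ # Sketch only: mirrors the hyperparameters listed above (1 epoch, lr 2e-4,
+ # gradient accumulation 4, 99%/1% split). Not MonsterAPI's actual configuration.
+ from datasets import load_dataset
+ from transformers import TrainingArguments
+
+ dataset = load_dataset("Zangs3011/no_robots_FalconChatFormated")["train"]
+ splits = dataset.train_test_split(test_size=0.01, seed=42)  # 99% train / 1% validation
+
+ training_args = TrainingArguments(
+     output_dir="llama2-7b-no_robots",   # placeholder name
+     num_train_epochs=1,                 # Epochs: 1
+     learning_rate=2e-4,                 # Learning Rate: 0.0002
+     gradient_accumulation_steps=4,      # Gradient Accumulation Steps: 4
+     per_device_train_batch_size=4,      # assumption, not stated in the card
+     evaluation_strategy="steps",        # log eval loss during training
+     logging_steps=20,
+ )
+ ```
+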
+ ---
+ #### Prompt Structure:
+
+ ```
+ ### INSTRUCTION:
+ [instruction]
+
+ ### RESPONSE:
+ [text]
+ ```
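+
+ A minimal, hypothetical inference example using this prompt format is sketched below. The repo id is a placeholder (the card does not state where the finetuned weights are published), and `device_map="auto"` requires the `accelerate` package.
+
+ ```python
+ # Hypothetical usage sketch: format a prompt in the structure above and generate.
+ # "your-username/llama2-7b-norobots" is a placeholder repo id, not a real model path.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "your-username/llama2-7b-norobots"  # placeholder
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ prompt = "### INSTRUCTION:\nSummarize what Evol-Instruct does in one sentence.\n\n### RESPONSE:\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```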
+
+ Eval loss:
+
+ ![eval loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/hQM1RVC5_E2z7gImQPqF5.png)