Zangs3011 committed on
Commit 2623960
1 Parent(s): 2766bdf

Update README.md

Files changed (1)
  1. README.md +35 -26
README.md CHANGED
@@ -11,36 +11,43 @@ datasets:
  base_model: meta-llama/Llama-2-70b-hf
  ---

- For our finetuning process, we utilized the meta-llama/Llama-2-70b-hf model and the Databricks-dolly-15k dataset.
-
- This dataset, a meticulous compilation of over 15,000 records, was a result of the dedicated work of thousands of Databricks professionals. It was specifically designed to further improve the interactive capabilities of ChatGPT-like systems.
- The dataset contributors crafted prompt / response pairs across eight distinct instruction categories. Besides the seven categories mentioned in the InstructGPT paper, they also ventured into an open-ended, free-form category. The contributors, emphasizing genuine and original content, refrained from sourcing information online, except in special cases where Wikipedia was the source for certain instruction categories. There was also a strict directive against the use of generative AI for crafting instructions or responses.
- The contributors could address questions from their peers. Rephrasing the original question was encouraged, and there was a clear preference to answer only those queries they were certain about.
- In some categories, the data comes with reference texts sourced from Wikipedia. Users might find bracketed Wikipedia citation numbers (like [42]) within the context field of the dataset. For smoother downstream applications, it's advisable to exclude these.
-
- Our finetuning leveraged the [MonsterAPI](https://monsterapi.ai)'s intuitive, no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
-
- This efficient process, surprisingly cost-effective,
- was completed in just 17.5 hours for 3 epochs, running on an A100 80GB GPU.
- Breaking it down further, each epoch took only 5.8 hours and cost a mere `$19.25`. The total cost for all 3 epochs came to `$57.75`.
-
- #### Hyperparameters & Run details:
- - Epochs: 3
- - Cost per epoch: $19.25
- - Total finetuning Cost: $57.75
- - Model Path: meta-llama/Llama-2-70b-hf
- - Dataset: databricks/databricks-dolly-15k
- - Learning rate: 0.0002
- - Number of epochs: 3
- - Data split: Training 90% / Validation 10%
- - Gradient accumulation steps: 4
+ ### Finetuning Overview:
+
+ **Model Used:** meta-llama/Llama-2-70b-hf
+ **Dataset:** Databricks-dolly-15k
+
+ #### Dataset Insights:
+
+ The Databricks-dolly-15k dataset is an impressive compilation of over 15,000 records, made possible by the hard work and dedication of a multitude of Databricks professionals. It has been tailored to:
+
+ - Elevate the interactive capabilities of ChatGPT-like systems.
+ - Provide prompt/response pairs spanning eight distinct instruction categories, inclusive of the seven categories from the InstructGPT paper and an exploratory open-ended category.
+ - Ensure genuine and original content, largely offline-sourced with exceptions for Wikipedia in particular categories, and free from generative AI influences.
+
+ In an innovative approach, contributors had the opportunity to rephrase and answer queries from their peers, highlighting a focus on accuracy and clarity. Additionally, some data subsets feature Wikipedia-sourced reference texts, marked by bracketed citation numbers like [42]. For an optimal user experience in downstream applications, it's recommended to remove these.
+
+ #### Finetuning Details:
+
+ Using [MonsterAPI](https://monsterapi.ai)'s user-friendly [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), the finetuning:
+
+ - Stands out for its cost-effectiveness.
+ - Was executed in a total of 17.5 hours for 3 epochs with an A100 80GB GPU.
+ - Broke down to just 5.8 hours and `$19.25` per epoch, culminating in a combined cost of `$57.75` for all epochs.
+
+ #### Hyperparameters & Additional Details:
+
+ - **Epochs:** 3
+ - **Cost Per Epoch:** $19.25
+ - **Total Finetuning Cost:** $57.75
+ - **Model Path:** meta-llama/Llama-2-70b-hf
+ - **Learning Rate:** 0.0002
+ - **Data Split:** Training 90% / Validation 10%
+ - **Gradient Accumulation Steps:** 4

- license: apache-2.0
  ---

- ######
+ ### Prompt Structure:

- Prompt Used:

  ```
  ### INSTRUCTION:
@@ -57,4 +64,6 @@ Loss metrics
  Training loss (Blue) Validation Loss (orange):
  ![training loss](train-loss.png "Training loss")

+ ---

+ license: apache-2.0
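
The README's advice to strip bracketed Wikipedia citation markers (like `[42]`) from the dataset's `context` field can be automated. Below is a minimal sketch, not part of this commit, that assumes the Hugging Face `datasets` library and a simple regular expression:

```python
import re

from datasets import load_dataset

# Load the same dataset referenced in the README.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def strip_citations(example):
    # Drop markers such as "[42]" left over from Wikipedia-sourced reference text.
    example["context"] = re.sub(r"\[\d+\]", "", example["context"]).strip()
    return example

# Apply the cleanup to every record before downstream use.
dolly = dolly.map(strip_citations)
```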
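The run itself was performed with MonsterAPI's no-code finetuner, so no training script is part of this commit. For orientation only, the listed hyperparameters map roughly onto a `transformers`/`datasets` setup as sketched below; the output directory, batch size, and split seed are placeholders, not values taken from the commit:

```python
from datasets import load_dataset
from transformers import TrainingArguments

# 90% / 10% train/validation split, matching "Data Split" in the README.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
splits = dolly.train_test_split(test_size=0.1, seed=42)  # seed is a placeholder

# Hypothetical equivalent of the listed run settings (illustrative only).
args = TrainingArguments(
    output_dir="llama2-70b-dolly15k",  # placeholder path
    learning_rate=2e-4,                # Learning Rate: 0.0002
    num_train_epochs=3,                # Epochs: 3
    gradient_accumulation_steps=4,     # Gradient Accumulation Steps: 4
    per_device_train_batch_size=1,     # placeholder; not stated in the README
)
```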