AjayMukundS committed
Commit eb89191 · Parent(s): fc46b6c
Update README.md

README.md CHANGED
@@ -28,6 +28,7 @@ This is a Llama 2 Fine Tuned Model with 7 Billion Parameters on the Dataset from

In the case of Llama 2, the following Chat Template is used for the chat models:

**[INST] SYSTEM PROMPT**

**User Prompt [/INST] Model Answer**

System Prompt (optional) --> to guide the model

@@ -38,8 +39,11 @@ Model Answer (required)
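A minimal sketch of applying this template, assuming plain string formatting; the `format_prompt` helper and the example strings are illustrative and not part of this repository:

```python
# Sketch: wrap a sample in the layout described above:
# [INST] SYSTEM PROMPT User Prompt [/INST] Model Answer
def format_prompt(user_prompt: str, system_prompt: str = "", model_answer: str = "") -> str:
    system_part = f"{system_prompt} " if system_prompt else ""  # the system prompt is optional
    return f"[INST] {system_part}{user_prompt} [/INST] {model_answer}".strip()

print(format_prompt(
    system_prompt="You are a helpful assistant.",
    user_prompt="Explain LoRA in one sentence.",
))
# [INST] You are a helpful assistant. Explain LoRA in one sentence. [/INST]
```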
## Training Data

The Instruction Dataset is reformatted to follow the above Llama 2 template.

**Original Dataset** --> https://huggingface.co/datasets/timdettmers/openassistant-guanaco

**Reformatted Dataset with 1K Samples** --> https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k

**Complete Reformatted Dataset** --> https://huggingface.co/datasets/mlabonne/guanaco-llama2
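As a quick check of the reformatting, the 1K-sample dataset linked above can be loaded with the `datasets` library; this is a sketch, assuming the single `text` column used by the guanaco-llama2 datasets:

```python
from datasets import load_dataset

# Load the reformatted 1K-sample split referenced above.
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")
print(dataset)             # row count and column names
print(dataset[0]["text"])  # one sample already in the [INST] ... [/INST] format
```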
To drastically reduce VRAM usage, we fine-tune the model in 4-bit precision, which is why we use QLoRA here. The model was fine-tuned on an **L4 GPU (Google Colab Pro)**.
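The 4-bit QLoRA setup can be sketched with `transformers` and `peft`; the base checkpoint name and the LoRA hyperparameters below are assumptions for illustration, not the exact values used to train this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "NousResearch/Llama-2-7b-chat-hf"  # assumed 7B base checkpoint, not confirmed by this README

# Load the base model in 4-bit NF4 precision (the QLoRA quantization scheme).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach LoRA adapters; rank, alpha, and dropout are illustrative defaults.
lora_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```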