Update README.md
README.md (CHANGED)
---
tags:
- sft
- generated_from_trainer
datasets:
- databricks/databricks-dolly-15k
base_model: NousResearch/Llama-2-7b-chat-hf
model-index:
- name: llama2-7-dolly-query
  results: []
license: mit
language:
- en
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llama2-7-dolly-query

This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
It can be used in conjunction with [LukeOLuck/llama2-7-dolly-answer](https://huggingface.co/LukeOLuck/llama2-7-dolly-answer).

## Model description

A fine-tuned PEFT adapter for the Llama 2 7B chat model.
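As a minimal sketch (not part of the original card), the adapter can be loaded on top of the base model with the `peft` library; the adapter repo id `LukeOLuck/llama2-7-dolly-query` is assumed from the model name above.

```python
# Minimal loading sketch: attach the PEFT adapter to the base chat model.
# Assumes the adapter lives at LukeOLuck/llama2-7-dolly-query (inferred from
# the model name above) and that torch, transformers, and peft are installed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Llama-2-7b-chat-hf"
adapter_id = "LukeOLuck/llama2-7-dolly-query"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches adapter weights
model.eval()
```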

## Intended uses & limitations

Generates a query based on a given context and input.
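Continuing the loading sketch above, a hypothetical generation call; the `Context:`/`Input:`/`Query:` prompt template is an assumption, since the card does not document the exact format (the linked notebook is authoritative):

```python
# Hypothetical usage, reusing `tokenizer`, `base`, and `model` from the sketch
# above. The prompt template is an assumption, not the documented format.
context = "Databricks released the dolly-15k instruction dataset in 2023."
user_input = "dataset release"

prompt = f"Context: {context}\nInput: {user_input}\nQuery:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```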

## Training and evaluation data

Trained with SFTTrainer on databricks/databricks-dolly-15k; [check out the code](https://colab.research.google.com/drive/1sr0mUF8dwYKo6NNR3tkjk0Z-p5FFr1_6?usp=sharing).
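For orientation only, here is a generic `SFTTrainer` setup of the kind the linked notebook likely uses. The prompt template and every value shown are placeholder assumptions rather than this model's actual configuration (the real hyperparameters are listed below), and API details vary across `trl` versions:

```python
# Illustrative sketch only, NOT this model's actual training configuration.
# A generic TRL SFTTrainer run over databricks/databricks-dolly-15k; the
# linked Colab notebook is the authoritative reference.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_text(example):
    # Assumed template: teach the model to produce the query (instruction)
    # from the context and response; the real format is in the notebook.
    return {
        "text": (
            f"Context: {example['context']}\n"
            f"Input: {example['response']}\n"
            f"Query: {example['instruction']}"
        )
    }

dataset = dataset.map(to_text)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # placeholder values

trainer = SFTTrainer(
    model="NousResearch/Llama-2-7b-chat-hf",
    train_dataset=dataset,  # SFTTrainer reads the "text" column by default
    peft_config=peft_config,
)
trainer.train()
```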

## Training procedure

[Check out the code here](https://colab.research.google.com/drive/1sr0mUF8dwYKo6NNR3tkjk0Z-p5FFr1_6?usp=sharing)

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65388a56a5ab055cf2d73676/FJ5p_wutu8o1z789Hd93g.png)

### Framework versions