---
license: apache-2.0
datasets:
- hkust-nlp/deita-10k-v0
language:
- en
---

# Model Card for Deita Llama1 13B V1.0 SFT

Deita is an open-source project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).
Deita Llama1 13B V1.0 SFT is a fine-tuned version of Llama 1, trained on 10k automatically selected, lightweight, high-quality alignment SFT examples: [Deita 10K V0](https://huggingface.co/datasets/hkust-nlp/deita-10k-v0).
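
For a quick look at the selected data, the dataset can be pulled straight from the Hub. Below is a minimal sketch using the 🤗 `datasets` library; the `train` split name is an assumption about how the dataset is organized.

```python
# Minimal sketch: inspect the automatically selected SFT data referenced above.
# Requires the `datasets` library (pip install datasets); the "train" split name is assumed.
from datasets import load_dataset

deita_10k = load_dataset("hkust-nlp/deita-10k-v0", split="train")

print(deita_10k)      # number of rows and column names
print(deita_10k[0])   # one selected alignment example
```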

## Model description

- **Model type:** A model fine-tuned on automatically selected, lightweight, high-quality alignment SFT data.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** Llama-1-13b-hf

### Model Sources

- **Repository:** https://github.com/hkust-nlp/deita
- **Model Family:** Other models and the dataset can be found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

## Performance

## Input Format

The model is trained using the [vicuna_v1.1 template](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:
```
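
To show how this template is applied at inference time, here is a hedged sketch that assembles the prompt by hand and generates with 🤗 Transformers. The repo id `hkust-nlp/deita-llama1-13b-v1.0-sft` and the sampling settings are assumptions for illustration, not values prescribed by this card.

```python
# Sketch: apply the vicuna_v1.1 prompt format by hand and generate a reply.
# The repo id below is assumed from this model card's name; adjust if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-llama1-13b-v1.0-sft"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
user_message = "How are you?"

# vicuna_v1.1 layout: system prompt, then alternating "USER:" / "ASSISTANT:" turns.
prompt = f"{system} USER: {user_message} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

FastChat's `get_conv_template("vicuna_v1.1")` can also build the same prompt string programmatically instead of formatting it by hand.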

### Training hyperparameters

The following hyperparameters were used during fine-tuning (a configuration sketch follows the list):

- learning_rate: 2e-05
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
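
As a non-authoritative sketch, the values above map onto a 🤗 Transformers `TrainingArguments` configuration roughly like the one below. The per-device batch size and gradient-accumulation split (totalling 128), the mixed-precision setting, and the use of the HF `Trainer` at all are assumptions, not details stated on this card.

```python
# Sketch only: the hyperparameters above expressed as a TrainingArguments config.
# The per-device/accumulation split (8 GPUs x 4 per device x 4 accumulation = 128) is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deita-llama1-13b-sft",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,      # 4 * 4 accumulation * 8 GPUs = 128 total (assumed split)
    gradient_accumulation_steps=4,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,                          # assumed mixed-precision setting
)
```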