---
license: apache-2.0
datasets:
- hkust-nlp/deita-10k-v0
language:
- en
base_model: meta-llama/Llama-2-13b-hf
---

<img src="https://huggingface.co/datasets/allenai/blog-images/blob/main/tulu-v2/Tulu%20V2%20banner.png" alt="Deita banner" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# Model Card for Deita Llama2 13B V1.0 SFT

Deita is an open-source project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).
Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 13B, trained on 10k automatically selected, lightweight, high-quality SFT alignment examples: [Deita 10K V0](https://huggingface.co/datasets/hkust-nlp/deita-10k-v0).

## Model description

- **Model type:** A model fine-tuned on automatically selected, lightweight, high-quality alignment SFT data.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)


### Model Sources

- **Repository:** https://github.com/hkust-nlp/deita
- **Model Family:** Other models and the dataset can be found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4).

## Performance


## Input Format

The model was trained using the [vicuna_v1.1 template](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT:
```
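
As a rough sketch (not an official usage snippet), a prompt in this format can be assembled by hand and passed to the model with `transformers`. The repository id, dtype, and generation settings below are assumptions; adjust them to your setup.

```python
# Minimal sketch: assemble a vicuna_v1.1-style prompt and generate with transformers.
# The repo id, dtype, and generation settings are assumptions, not official values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "hkust-nlp/deita-llama2-13b-v1.0-sft"  # assumed id; see the Deita collection

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """Format (user, assistant) turns in the vicuna_v1.1 style shown above."""
    prompt = SYSTEM + " "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

prompt = build_prompt([("Hello!", "Hi!"), ("How are you?", None)])
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```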

### Training hyperparameters

The following hyperparameters were used during fine-tuning:
- learning_rate: 2e-05
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
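
A hedged sketch of how these settings might be expressed as `transformers` `TrainingArguments` is shown below. The per-device batch size and gradient-accumulation split of the total batch size of 128, and the use of bf16, are assumptions that depend on the hardware used.

```python
# Sketch only: one possible mapping of the hyperparameters above onto
# transformers.TrainingArguments. The 8 x 16 split of the total batch size of
# 128 (per device x accumulation steps, single device) and bf16 are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deita-llama2-13b-sft",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,   # 8 * 16 = 128 total on one device
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    bf16=True,
)
```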