yuchenglu committed
Commit fd00afd • 1 Parent(s): c559461

Update README.md

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -7,17 +7,17 @@ datasets:
 - togethercomputer/llama-instruct
 ---
 
-# LLaMA-2-7B-32K-Chat
+# LLaMA-2-7B-32K-Instruct
 
 ## Model Description
 
-LLaMA-2-7B-32K-Chat is an open-source, long-context chat model finetuned from [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K), over high-quality instruction and chat data.
-We built Llama-2-7B-32K-Chat with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/LLaMA-2-32K-Chat).
+LLaMA-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K), over high-quality instruction and chat data.
+We built LLaMA-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/LLaMA-2-32K-Instruct).
 We hope that this can enable everyone to finetune their own version of [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!
 
 ## Data Collection Details
 
-LLaMA-2-7B-32K-Chat is fine-tuned over a combination of two parts:
+LLaMA-2-7B-32K-Instruct is fine-tuned over a combination of two parts:
 1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) outputs**.
 We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)).
 The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
@@ -36,8 +36,8 @@ Alternatively, you can load the model directly from the Hugging Face model hub u
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat")
-model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat", trust_remote_code=True, torch_dtype=torch.float16)
+tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Instruct")
+model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Instruct", trust_remote_code=True, torch_dtype=torch.float16)
 input_ids = tokenizer.encode(<your instruction>, return_tensors="pt")
 output = model.generate(input_ids, max_length=..., temperature=...)
 output_text = tokenizer.decode(output[0], skip_special_tokens=True)
@@ -90,25 +90,25 @@ We evaluate the model from three aspects: 1) [Normalized perplexity](https://tog
 | Model | 2K Seq | 4K Seq | 8K Seq | 16K Seq | 32K Seq |
 | -------- | ------- | ------- | ------- | ------- | ------- |
 | LLaMA-2-7B-Chat (Meta) | 1.844 | 1.833 | N/A | N/A | N/A |
-| LLaMA-2-7B-32K-Chat (ours) | 1.813 | 1.798 | 1.781 | 1.778 | 1.772|
+| LLaMA-2-7B-32K-Instruct (ours) | 1.813 | 1.798 | 1.781 | 1.778 | 1.772|
 
 * Rouge Score over BookSum
 | Model | R1 | R2 | RL |
 | -------- | ------- | ------- | ------- |
 | LLaMA-2-7B-Chat (Meta) | 0.055 | 0.008 | 0.046 |
-| LLaMA-2-7B-32K-Chat (ours) | 0.365 | 0.086 | 0.192 |
+| LLaMA-2-7B-32K-Instruct (ours) | 0.365 | 0.086 | 0.192 |
 
 * Accuracy over MQA
 | Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
 | -------- | ------- | ------- | ------- |
 | LLaMA-2-7B-Chat (Meta) | 0.384 | 0.375 | 0.313 |
-| LLaMA-2-7B-32K-Chat (ours) | 0.451 | 0.434 | 0.373 |
+| LLaMA-2-7B-32K-Instruct (ours) | 0.451 | 0.434 | 0.373 |
 
-We observe that LLaMA-2-7B-32K-Chat obtains reasonable (and even better) perplexity, rouge score and accuracy over the original LLaMA-2-7B-Chat model.
+We observe that LLaMA-2-7B-32K-Instruct obtains reasonable (and even better) perplexity, rouge score and accuracy over the original LLaMA-2-7B-Chat model.
 
 ## Limitations and Bias
 
-As with all language models, LLaMA-2-7B-32K-Chat may generate incorrect or biased content. It's important to keep this in mind when using the model.
+As with all language models, LLaMA-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.
 
 ## Community
 
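The usage snippet in the diff above is a template rather than runnable code: `torch` is referenced but never imported, and `<your instruction>`, `max_length=...`, and `temperature=...` are placeholders left for the reader. Below is a minimal self-contained sketch of the same flow. The repo id is taken verbatim from the updated card; the prompt, the device/dtype handling, and the generation settings are illustrative assumptions, not values the card prescribes.

```python
# Minimal sketch of the model card's usage snippet, made self-contained.
# Prompt and generation settings are illustrative, not prescribed by the card.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "togethercomputer/LLaMA-2-7B-32K-Instruct"  # repo id as written in the updated README

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Use fp16 on GPU; fall back to fp32 on CPU so generation still runs.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=dtype
).to(device)

instruction = "Summarize the plot of Pride and Prejudice in three sentences."  # example prompt
input_ids = tokenizer.encode(instruction, return_tensors="pt").to(device)

output = model.generate(
    input_ids,
    max_new_tokens=256,  # illustrative; the card leaves max_length=... open
    temperature=0.7,     # illustrative; only takes effect with do_sample=True
    do_sample=True,
)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```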