Chanjun committed on
Commit d59e92f
1 Parent(s): 4fd9117

Update README.md

Files changed (1)
  1. README.md +19 -12
README.md CHANGED
@@ -29,46 +29,53 @@ pipeline_tag: text-generation
  ```
  ### System:
  {System}
  ### User:
  {User}
  ### Assistant:
  {Assistant}
  ```
- ### Usage
  - Tested on A100 80GB
- - Our model can handle under 10k input tokens thanks to the `rope_scaling` option

  ```python
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
  tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct-v2")
  model = AutoModelForCausalLM.from_pretrained(
  "upstage/Llama-2-70b-instruct-v2",
- device_map='auto',
  torch_dtype=torch.float16,
  load_in_8bit=True,
- rope_scaling={'type': 'dynamic', 'factor': 2} # longer inputs possible
  )
- prompt = "### User:\nThomas is very healthy, but he has to go to the hospital every day. What could be the reasons?\n\n### Assistant:\n"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
- del inputs['token_type_ids']
  streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
  output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
  output_text = tokenizer.decode(output[0], skip_special_tokens=True)
  ```

-
  ## Hardware and Software

  * **Hardware**: We utilized an A100x8 * 4 for training our model
- * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)

  ## Evaluation Results

  ### Overview
  - We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
- We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
  We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).

  ### Main Results
  | Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
@@ -82,7 +89,7 @@ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-
  | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
  | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |

- ### Scripts
  - Prepare evaluation environments:
  ```
  # clone the repository
@@ -96,9 +103,9 @@ cd lm-evaluation-harness
  ## Ethical Issues

  ### Ethical Considerations
- - There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process.

  ## Contact Us

  ### Why Upstage LLM?
- - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. Our 70B model **outperforms all models around the world**, positioning itself as the leading performer. Recognizing the immense potential in implementing private LLM to actual businesses, we invite you to easily apply private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm).
 
  ```
  ### System:
  {System}
+
  ### User:
  {User}
+
  ### Assistant:
  {Assistant}
  ```
+
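For illustration, the template above can also be filled in programmatically. The following is only a minimal sketch (the `build_prompt` helper is hypothetical, not part of this repository); it produces the same string used in the Usage example that follows.

```python
# Illustrative helper only (hypothetical, not part of this repository):
# fills the System/User/Assistant template shown above.
def build_prompt(user_message: str, system_message: str = "") -> str:
    prompt = ""
    if system_message:
        prompt += f"### System:\n{system_message}\n\n"
    prompt += f"### User:\n{user_message}\n\n### Assistant:\n"
    return prompt

# Produces the same prompt string used in the Usage example below.
print(build_prompt("Thomas is healthy, but he has to go to the hospital. What could be the reasons?"))
```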
+ ## Usage
+
  - Tested on A100 80GB
+ - Our model can handle up to 10k input tokens, thanks to the `rope_scaling` option

  ```python
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
+
  tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct-v2")
  model = AutoModelForCausalLM.from_pretrained(
  "upstage/Llama-2-70b-instruct-v2",
+ device_map="auto",
  torch_dtype=torch.float16,
  load_in_8bit=True,
+ rope_scaling={"type": "dynamic", "factor": 2} # allows handling of longer inputs
  )
+
+ prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ del inputs["token_type_ids"]
  streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+
  output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
  output_text = tokenizer.decode(output[0], skip_special_tokens=True)
  ```
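Note that `output_text` contains the prompt as well as the reply, since the full generated sequence is decoded. If only the assistant's reply is needed, a minimal variation (reusing the variables from the snippet above) is to slice off the prompt tokens before decoding:

```python
# Sketch: decode only the newly generated tokens
# (reuses `inputs`, `tokenizer`, and `output` from the snippet above).
prompt_length = inputs["input_ids"].shape[1]
reply_text = tokenizer.decode(output[0][prompt_length:], skip_special_tokens=True)
```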

  ## Hardware and Software

  * **Hardware**: We utilized an A100x8 * 4 for training our model
+ * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
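The actual training script and DeepSpeed configuration are not published in this card. The snippet below is only a minimal sketch of the Trainer-plus-DeepSpeed pattern referenced above, assuming a Llama-2-70b base checkpoint; `ds_config.json` and the toy dataset are placeholders.

```python
# Minimal sketch only: the real fine-tuning code and DeepSpeed config for this model
# are not published here. Shows the generic HuggingFace Trainer + DeepSpeed pattern;
# the base checkpoint, "ds_config.json", and the toy dataset are placeholders.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-70b-hf"  # assumed base checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Toy instruction-style dataset in the card's prompt format (placeholder data).
texts = ["### User:\nSay hello.\n\n### Assistant:\nHello!"]
train_dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="llama-2-70b-instruct-ft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    deepspeed="ds_config.json",  # ZeRO config file (placeholder path)
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, a script like this would be launched across nodes with `deepspeed` or `accelerate launch`, following the DeepSpeed and Accelerate documentation linked above.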

  ## Evaluation Results

  ### Overview
  - We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+ We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`
  We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
+ - We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models
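As a rough guide, the harness at that commit was typically invoked as sketched below, using the Open LLM Leaderboard settings of the time (25-shot ARC, 10-shot HellaSwag, 5-shot MMLU, 0-shot TruthfulQA). The exact flags and task names should be verified against the pinned commit and the reproduction scripts later in this card.

```
# Sketch: one of the four tasks; repeat with hellaswag (10-shot),
# hendrycksTest-* (5-shot), and truthfulqa_mc (0-shot)
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=upstage/Llama-2-70b-instruct-v2,use_accelerate=True \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 1 \
    --output_path results/arc_challenge.json
```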

  ### Main Results
  | Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
 
  | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
  | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |

+ ### Scripts for H4 Score Reproduction
  - Prepare evaluation environments:
  ```
  # clone the repository
 
  ## Ethical Issues

  ### Ethical Considerations
+ - There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process

  ## Contact Us

  ### Why Upstage LLM?
+ - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. Our 70B model **outperforms all models around the world**, positioning itself as the leading performer. Recognizing the immense potential of implementing private LLMs in actual businesses, we invite you to easily apply a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)