fakezeta committed on
Commit a13a5c2
1 Parent(s): ebdd936

Update README.md

Files changed (1): README.md (+111 -0)
README.md CHANGED
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# OpenVINO IR model with int4 quantization of Starling-LM-7B-beta

This repository provides [Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) converted to OpenVINO IR format with int4 weight quantization, for inference with [optimum-intel](https://github.com/huggingface/optimum-intel).
- **Developed by:** The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- **Model type:** Language model fine-tuned with RLHF / RLAIF
- **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
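
For reference, an int4 OpenVINO IR export like the one in this repository can be produced with optimum-intel's weight-only quantization. A minimal sketch; the exact quantization parameters used for this conversion are an assumption:

```python
# Sketch of an int4 weight-only OpenVINO export with optimum-intel.
# Requires a recent optimum-intel that provides OVWeightQuantizationConfig;
# the settings actually used for this repository may differ.
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

model = OVModelForCausalLM.from_pretrained(
    "Nexusflow/Starling-LM-7B-beta",
    export=True,  # convert the PyTorch checkpoint to OpenVINO IR
    quantization_config=OVWeightQuantizationConfig(bits=4),  # int4 weights
)
model.save_pretrained("Starling-LM-7B-beta-openvino-int4")
```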

We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and the policy optimization method from [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward-training and policy-tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 on MT-Bench with GPT-4 as a judge.

## Uses

**Important: Please use the exact chat template provided below for the model. Otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to make this less likely.**
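
With the Transformers `generate` API, temperature = 0 amounts to greedy decoding; a minimal sketch, assuming `model` and `input_ids` as in the Code Examples section below:

```python
# Greedy decoding is the deterministic counterpart of temperature = 0:
# with do_sample=False the temperature parameter is never applied.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
```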

Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on the LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.

The conversation template is the same as Openchat-3.5-0106:
```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]

# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
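
If the tokenizer ships a chat template (recent Openchat releases do), the same strings can be built with `apply_chat_template`; a minimal sketch, assuming the bundled template matches the format above:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"},
]
# add_generation_prompt appends the trailing "GPT4 Correct Assistant:" turn
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
print(tokenizer.decode(tokens))
```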
## Code Examples

```python
import transformers
from optimum.intel.openvino import OVModelForCausalLM

tokenizer = transformers.AutoTokenizer.from_pretrained("fakezeta/Starling-LM-7B-beta-openvino-int4")
model = OVModelForCausalLM.from_pretrained("fakezeta/Starling-LM-7B-beta-openvino-int4")

def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        max_length=256,  # counts the prompt tokens too; raise for longer completions
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text

# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)

# Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)

# Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
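
OpenVINO runs on CPU by default; the compiled device can be switched if an Intel GPU is available. A minimal sketch, assuming OpenVINO GPU drivers are installed:

```python
from optimum.intel.openvino import OVModelForCausalLM

model = OVModelForCausalLM.from_pretrained("fakezeta/Starling-LM-7B-beta-openvino-int4")
model.to("GPU")   # target an Intel GPU instead of the default CPU
model.compile()   # optional: compile now rather than on the first generate call
```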

## License
The dataset, model, and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.

## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation, and online demo. We would like to thank the open-source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.

## Citation
```
@misc{starling2023,
  title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
  url = {},
  author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
  month = {November},
  year = {2023}
}
```