LoneStriker committed on
Commit
4417737
1 Parent(s): e2c5704

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -1,35 +1,5 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Starling-LM-7B-beta-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Starling-LM-7B-beta-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Starling-LM-7B-beta-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Starling-LM-7B-beta-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Starling-LM-7B-beta-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text

README.md ADDED
@@ -0,0 +1,120 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - berkeley-nest/Nectar
+ language:
+ - en
+ library_name: transformers
+ tags:
+ - reward model
+ - RLHF
+ - RLAIF
+ ---
+ # Starling-LM-7B-beta
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ - **Developed by:** Banghua Zhu*, Evan Frick*, Tianhao Wu*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao.
+ - **Model type:** Language model fine-tuned with RLHF / RLAIF
+ - **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
+ - **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
+
+
+
+ We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and the policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
+ Harnessing the power of our ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), our upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and our new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge. Stay tuned for our forthcoming code and paper, which will provide more details on the whole process.
+
+
+
+ For more detailed discussions, please check out our original [blog post](https://starling.cs.berkeley.edu), and stay tuned for our upcoming code and paper!
+ <!-- Provide the basic links for the model. -->
+
+ - **Blog:** https://starling.cs.berkeley.edu/
+ - **Paper:** Coming soon!
+ - **Code:** Coming soon!
+
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ **Important: Please use the exact chat template provided below. Otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this behavior.**
+
+ Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
+ In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org), where it can be tested for free.
+
+ The conversation template is the same as Openchat-3.5-0106:
+ ```python
+ import transformers
+ tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
+
+ # Single-turn
+ tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
+ assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
+
+ # Multi-turn
+ tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
+ assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
+
+ # Coding Mode
+ tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
+ assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
+ ```
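This turn structure is easy to get wrong when assembled by hand. As a minimal sketch (a hypothetical helper, not part of the official release), the template above can be built programmatically for any number of turns:

```python
# Hypothetical helper: builds the OpenChat-3.5 "GPT4 Correct" prompt string
# from a list of (role, text) turns, matching the template shown above.
def build_prompt(turns):
    """turns: list of (role, text) pairs, role in {"user", "assistant"}."""
    role_names = {"user": "GPT4 Correct User", "assistant": "GPT4 Correct Assistant"}
    parts = [f"{role_names[role]}: {text}<|end_of_turn|>" for role, text in turns]
    # The prompt must end with the assistant header so the model continues from it.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

# Reproduces the single-turn template string above:
single = build_prompt([("user", "Hello")])
# Reproduces the multi-turn template string above:
multi = build_prompt([("user", "Hello"), ("assistant", "Hi"), ("user", "How are you today?")])
```

The strings produced by this helper are identical to the hand-written templates in the tokenizer example above, so they tokenize to the same ids.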
+ ## Code Examples
+
+ ```python
+ import transformers
+
+ tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
+ model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
+
+ def generate_response(prompt):
+     input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+     outputs = model.generate(
+         input_ids,
+         max_length=256,
+         pad_token_id=tokenizer.pad_token_id,
+         eos_token_id=tokenizer.eos_token_id,
+     )
+     response_ids = outputs[0]
+     response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
+     return response_text
+
+ # Single-turn conversation
+ prompt = "Hello, how are you?"
+ single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
+ response_text = generate_response(single_turn_prompt)
+ print("Response:", response_text)
+
+ # Multi-turn conversation
+ prompt = "Hello"
+ follow_up_question = "How are you today?"
+ response = ""
+ multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
+ response_text = generate_response(multi_turn_prompt)
+ print("Multi-turn conversation response:", response_text)
+
+ # Coding conversation
+ prompt = "Implement quicksort using C++"
+ coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
+ response = generate_response(coding_prompt)
+ print("Coding conversation response:", response)
+ ```
+
+ ## License
+ The dataset, model, and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
+
+
+ ## Acknowledgment
+ We would like to thank Tianle Li from UC Berkeley for detailed feedback on and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation, and online demo. We would like to thank the open-source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.
110
+
111
+ ## Citation
112
+ ```
113
+ @misc{starling2023,
114
+ title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
115
+ url = {},
116
+ author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
117
+ month = {November},
118
+ year = {2023}
119
+ }
120
+ ```
Starling-LM-7B-beta-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b9b8805437f3c7812bc60d733dd4e63584beaa10568f66f11b3848b55116fba
+ size 3822035040
Starling-LM-7B-beta-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85ad53b9cab2e73d929425f19db65f2da5817745026ed0fc696e5ce5f8ed0303
+ size 4368450720
Starling-LM-7B-beta-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c67b033bff47e7b8574491c6c296c094e819488d146aca1c6326c10257450b99
+ size 5131421856
Starling-LM-7B-beta-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23129c2e9590d6ca142faf361bd509abac1b5295b91c6b1c85e93f2d90ab163f
+ size 5942078688
Starling-LM-7B-beta-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68919eea356c38c6c0e3ca8cddd6b5edd99aec186760c905440e486ec59e0e8a
+ size 7695875168