michaelfeil committed
Commit d525569
1 Parent(s): b9e4031

Upload togethercomputer/Pythia-Chat-Base-7B ctranslate fp16 weights

Files changed (3):
  1. README.md +222 -0
  2. model.bin +2 -2
  3. special_tokens_map.json +5 -0
README.md ADDED
@@ -0,0 +1,222 @@
+ ---
+ tags:
+ - ctranslate2
+ - int8
+ - float16
+
+ license: apache-2.0
+ language:
+ - en
+ ---
+ # Fast-Inference with CTranslate2
+ Speed up inference and reduce memory usage by 2x-4x using int8 inference in C++ on CPU or GPU.
+
+ Quantized version of [togethercomputer/Pythia-Chat-Base-7B](https://huggingface.co/togethercomputer/Pythia-Chat-Base-7B).
+ ```bash
+ pip install "hf-hub-ctranslate2>=2.0.8"
+ ```
+ Converted on 2023-05-22 using:
+ ```bash
+ ct2-transformers-converter --model togethercomputer/Pythia-Chat-Base-7B --output_dir /home/michael/tmp-ct2fast-Pythia-Chat-Base-7B --force --copy_files tokenizer.json README.md tokenizer_config.json special_tokens_map.json .gitattributes --quantization float16
+ ```
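+
+ The same conversion can also be run from Python. A minimal sketch using CTranslate2's `TransformersConverter` API (the output directory here is a placeholder, not the exact path used for this checkpoint):
+ ```python
+ # Hypothetical Python equivalent of the CLI call above.
+ from ctranslate2.converters import TransformersConverter
+
+ converter = TransformersConverter(
+     "togethercomputer/Pythia-Chat-Base-7B",
+     copy_files=["tokenizer.json", "tokenizer_config.json", "special_tokens_map.json"],
+ )
+ converter.convert("ct2fast-Pythia-Chat-Base-7B", quantization="float16", force=True)
+ ```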
+
+ Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
+ - `compute_type=int8_float16` for `device="cuda"`
+ - `compute_type=int8` for `device="cpu"`
+
+ ```python
+ from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
+ from transformers import AutoTokenizer
+
+ model_name = "michaelfeil/ct2fast-Pythia-Chat-Base-7B"
+ # Use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model.
+ model = GeneratorCT2fromHfHub(
+     # load in int8 on CUDA
+     model_name_or_path=model_name,
+     device="cuda",
+     compute_type="int8_float16",
+     # tokenizer=AutoTokenizer.from_pretrained("togethercomputer/Pythia-Chat-Base-7B")
+ )
+ outputs = model.generate(
+     text=["def print_hello_world():", "def hello_name(name:"],
+     max_length=64,
+ )
+ print(outputs)
+ ```
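+
+ For CPU-only machines, the same loader should work with `device="cpu"` and `compute_type="int8"`, per the compatibility notes above (a sketch assuming the identical API):
+ ```python
+ # CPU variant: int8 weights, no GPU required.
+ model_cpu = GeneratorCT2fromHfHub(
+     model_name_or_path=model_name,
+     device="cpu",
+     compute_type="int8",
+ )
+ print(model_cpu.generate(text=["<human>: Hello!\n<bot>:"], max_length=64))
+ ```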
+
+ # Licence and other remarks:
+ This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
+
+ # Original description
+
+ ***<p style="font-size: 24px">Feel free to try out our [OpenChatKit feedback app](https://huggingface.co/spaces/togethercomputer/OpenChatKit)!</p>***
+
+ # Pythia-Chat-Base-7B-v0.16
+
+ > TLDR: As part of OpenChatKit (codebase available [here](https://github.com/togethercomputer/OpenChatKit)),
+ > Pythia-Chat-Base-7B-v0.16 is a 7B parameter language model, fine-tuned from EleutherAI’s Pythia 7B with over 40 million instructions on 100% carbon negative compute.
+
+ Pythia-Chat-Base-7B-v0.16 is based on EleutherAI’s Pythia-7B model, and is fine-tuned with data focusing on dialog-style interactions.
+ We focused the tuning on several tasks such as question answering, classification, extraction, and summarization.
+ We’ve fine-tuned the model with a collection of 43 million high-quality instructions.
+ Together partnered with LAION and Ontocord.ai, who both helped curate the dataset the model is based on.
+ You can read more about this process and the availability of this dataset in LAION’s blog post [here](https://laion.ai/blog/oig-dataset/).
+
+ In addition to the aforementioned fine-tuning, Pythia-Chat-Base-7B-v0.16 has also undergone further fine-tuning on a small amount of feedback data.
+ This allows the model to better adapt to human preferences in conversation.
+
+ One of the notable features of Pythia-Chat-Base-7B-v0.16 is its ability to **run inference on a 12GB GPU**, thanks to int8 quantization.
+ This keeps its dialogue capabilities intact while making the model accessible to a wider range of users and hardware configurations.
+
+ ## Model Details
+ - **Developed by**: Together Computer.
+ - **Model type**: Language Model
+ - **Language(s)**: English
+ - **License**: Apache 2.0
+ - **Model Description**: A 7B parameter open source chat model, fine-tuned from EleutherAI’s Pythia with over 40M instructions on 100% carbon negative compute
+ - **Resources for more information**: [GitHub Repository](https://github.com/togethercomputer/OpenChatKit).
+
+ # Quick Start
+
+ ## GPU Inference
+
+ This requires a GPU with 24GB memory.
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # init
+ tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16")
+ model = AutoModelForCausalLM.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16", torch_dtype=torch.float16)
+ model = model.to('cuda:0')
+
+ # infer
+ inputs = tokenizer("<human>: Hello!\n<bot>:", return_tensors='pt').to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
+ output_str = tokenizer.decode(outputs[0])
+ print(output_str)
+ ```
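+
+ With larger `max_new_tokens` budgets the model will keep generating past its own turn; a hedged sketch (not part of the original quick start) that halts generation once the model starts the next `<human>:` turn, using `transformers`' `StoppingCriteria`:
+ ```python
+ from transformers import StoppingCriteria, StoppingCriteriaList
+
+ class StopOnHumanTurn(StoppingCriteria):
+     """Stop as soon as the decoded text ends with the next '<human>:' marker."""
+     def __init__(self, tokenizer):
+         self.tokenizer = tokenizer
+     def __call__(self, input_ids, scores, **kwargs):
+         return self.tokenizer.decode(input_ids[0]).endswith("<human>:")
+
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=128,
+     do_sample=True,
+     temperature=0.8,
+     stopping_criteria=StoppingCriteriaList([StopOnHumanTurn(tokenizer)]),
+ )
+ ```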
+
+ ## GPU Inference in Int8
+
+ This requires a GPU with 12GB memory.
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # init
+ tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16")
+ model = AutoModelForCausalLM.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16", device_map="auto", load_in_8bit=True)
+
+ # infer
+ inputs = tokenizer("<human>: Hello!\n<bot>:", return_tensors='pt').to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
+ output_str = tokenizer.decode(outputs[0])
+ print(output_str)
+ ```
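+
+ Note: `load_in_8bit=True` relies on the `bitsandbytes` package, and `device_map="auto"` on `accelerate`; if they are not already installed, a likely prerequisite is:
+ ```bash
+ pip install accelerate bitsandbytes
+ ```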
+
+ ## CPU Inference
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # init
+ tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16")
+ model = AutoModelForCausalLM.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16", torch_dtype=torch.bfloat16)
+
+ # infer
+ inputs = tokenizer("<human>: Hello!\n<bot>:", return_tensors='pt').to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=10, do_sample=True, temperature=0.8)
+ output_str = tokenizer.decode(outputs[0])
+ print(output_str)
+ ```
+
+ ## Strengths of the model
+
+ There are several tasks that OpenChatKit excels at out of the box. These include:
+
+ - Summarization and question answering within context.
+ - Extraction.
+ - Classification.
+
+ In addition, the model does well on few-shot prompts. For both classification and extraction, the model performs even better with few shots, as in most HELM tasks; a hypothetical example is sketched below. [Contact us](https://www.together.xyz/contact) if you’re interested in trying few-shot prompts with the model.
+
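+ A hedged illustration of such a few-shot prompt in the model's `<human>/<bot>` dialogue format (the task and labels are invented for demonstration; `tokenizer` and `model` are those from the Quick Start above):
+ ```python
+ few_shot_prompt = (
+     "<human>: Classify the sentiment: 'I loved this movie.'\n<bot>: positive\n"
+     "<human>: Classify the sentiment: 'The service was terrible.'\n<bot>: negative\n"
+     "<human>: Classify the sentiment: 'The plot was fine, nothing special.'\n<bot>:"
+ )
+ inputs = tokenizer(few_shot_prompt, return_tensors='pt').to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=3, do_sample=False)
+ print(tokenizer.decode(outputs[0]))
+ ```
+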
+ ## Weaknesses of the model
+
+ That said, there are several areas where we have more work to do, and we need your help! Some of these include:
+
+ - Knowledge-based closed question answering: The chatbot may hallucinate and give incorrect results. Be sure to fact-check, and if possible provide feedback with the corrected information.
+ - Coding tasks: The chatbot was not trained on a large enough corpus of source code to excel at writing code. We welcome contributions of additional datasets to improve this!
+ - Repetition: Sometimes the chatbot will repeat its response. We’re working to improve this, but in the meantime you can click the refresh button to start a new conversation.
+ - Context switching: If you change the topic in the middle of a conversation, the chatbot often cannot make the switch automatically and will continue to give answers related to the prior topic.
+ - Creative writing and longer answers: The chatbot does not generate long, creative text such as an essay or story.
+
+ We are excited to work with you to address these weaknesses by getting your feedback, bolstering data sets, and improving accuracy.
+
+ # Uses
+
+ ## Direct Use
+
+ The model is intended for research purposes. Possible research areas and tasks include:
+
+ - Safe deployment of models which have the potential to generate harmful content.
+ - Probing and understanding the limitations and biases of dialogue models or language models.
+ - Generation of artworks and use in design and other artistic processes.
+ - Applications in educational or creative tools.
+ - Research on dialogue models or language models.
+
+ Excluded uses are described below.
+
+ ### Misuse, Malicious Use, and Out-of-Scope Use
+
+ The OpenChatKit community provides Pythia-Chat-Base-7B-v0.16 as an open source tool for building chatbots.
+ The community is not responsible for any misuse, malicious use, or out-of-scope use of the model.
+ It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.
+
+ #### Out-of-Scope Use
+
+ Pythia-Chat-Base-7B-v0.16 is designed for use in chatbot applications and may not perform well for other use cases outside of its intended scope.
+ For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
+ It is important to consider the limitations of the model and to only use it for its intended purpose.
+
+ #### Misuse and Malicious Use
+
+ Pythia-Chat-Base-7B-v0.16 is designed for use in chatbot applications and should not be used for any other purpose.
+ Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the OpenChatKit community project.
+
+ Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
+
+ - Generating fake news, misinformation, or propaganda
+ - Promoting hate speech, discrimination, or violence against individuals or groups
+ - Impersonating individuals or organizations without their consent
+ - Engaging in cyberbullying or harassment
+ - Defamatory content
+ - Spamming or scamming
+ - Sharing confidential or sensitive information without proper authorization
+ - Violating the terms of use of the model or the data used to train it
+ - Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming
+
+ ## Limitations
+
+ Pythia-Chat-Base-7B-v0.16, like other language-model-based chatbots, has limitations that should be taken into consideration.
+ For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
+ We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
+
+ ## Training
+
+ **Training Data**
+
+ Please refer to [togethercomputer/OpenDataHub](https://github.com/togethercomputer/OpenDataHub).
+
+ **Training Procedure**
+
+ - **Hardware:** 8 x A100 GPUs
+ - **Optimizer:** [8bit-AdamW](https://github.com/TimDettmers/bitsandbytes)
+ - **Gradient accumulations:** 4
+ - **Batch:** 4 x 4 x 16 x 2048 = 524,288 tokens
+ - **Learning rate:** warmup to 1e-5 over 100 steps, then held constant
+
+ ## Community
+
+ Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).
model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8b1bd56046d63059409d592568ab9c3603092bd6a28fdd2de77b3874ef8c778b
- size 6863965184
+ oid sha256:c16eb0b13fe2b2522855e5ef32152934ec3751c215e5e299c2b13b8a2ba41d16
+ size 13714425232
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }