---
license: apache-2.0
language:
- en
---

***<p style="font-size: 24px">Feel free to try out our [OpenChatKit feedback app](https://huggingface.co/spaces/togethercomputer/OpenChatKit)!</p>***

# Pythia-Chat-Base-7B-v0.16

> TLDR: As part of OpenChatKit (codebase available [here](https://github.com/togethercomputer/OpenChatKit)),
> Pythia-Chat-Base-7B-v0.16 is a 7B parameter language model, fine-tuned from EleutherAI’s Pythia 7B with over 40 million instructions on 100% carbon negative compute.

Pythia-Chat-Base-7B-v0.16 is based on EleutherAI’s Pythia-7B model, and is fine-tuned with data focusing on dialog-style interactions.
We focused the tuning on several tasks such as question answering, classification, extraction, and summarization.
We’ve fine-tuned the model with a collection of 43 million high-quality instructions.
Together partnered with LAION and Ontocord.ai, who both helped curate the dataset the model is based on.
You can read more about this process and the availability of this dataset in LAION’s blog post [here](https://laion.ai/blog/oig-dataset/).

In addition to the aforementioned fine-tuning, Pythia-Chat-Base-7B-v0.16 has also undergone further fine-tuning via a small amount of feedback data.
This allows the model to better adapt to human preferences in conversation.

## Model Details
- **Developed by**: Together Computer.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 7B parameter open source chat model, fine-tuned from EleutherAI’s Pythia with over 40M instructions on 100% carbon negative compute.
- **Resources for more information**: [GitHub Repository](https://github.com/togethercomputer/OpenChatKit).

# Quick Start

```python
from transformers import pipeline

# The text-generation task is inferred from the model's config;
# prompts follow the "<human>: ...\n<bot>:" conversation format.
pipe = pipeline(model='togethercomputer/Pythia-Chat-Base-7B-v0.16')
pipe('''<human>: Hello!\n<bot>:''')
```
or
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Pythia-Chat-Base-7B-v0.16")
```

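Continuing from the snippet above, generation follows the standard `transformers` generate-and-decode pattern. A minimal sketch; the sampling settings here are illustrative choices, not tuned defaults:

```python
# Encode a prompt in the "<human>: ...\n<bot>:" format the model was tuned on.
inputs = tokenizer("<human>: Hello!\n<bot>:", return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,                    # illustrative value
    do_sample=True,
    temperature=0.7,                      # illustrative value
    pad_token_id=tokenizer.eos_token_id,  # avoids a warning when no pad token is defined
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
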
## Strengths of the model

There are several tasks that OpenChatKit excels at out of the box. These include:

- Summarization and question answering within context.
- Extraction.
- Classification.

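As an example of the first item above, question answering within context works by placing the passage directly in the `<human>` turn. A minimal sketch reusing `pipe` from Quick Start (the passage and question are invented for illustration):

```python
# The passage and question share one <human> turn; the <bot> turn is left open.
qa_prompt = (
    "<human>: Given the following text, answer the question.\n"
    "Text: The launch event takes place on March 10 in San Francisco.\n"
    "Question: Where does the launch event take place?\n"
    "<bot>:"
)
pipe(qa_prompt)
```
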
In addition, the model does well on few-shot prompts. For both classification and extraction, the model performs even better with a few shots, as in most HELM tasks. [Contact us](https://www.together.xyz/contact) if you’re interested in trying few-shot prompts with the model.

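A few-shot prompt simply packs several solved examples into the conversation format before the open query. A minimal sketch, again reusing `pipe` from Quick Start (the labels and sentences are invented for illustration):

```python
# Each demonstration is a complete <human>/<bot> turn; the final turn is left open.
few_shot_prompt = (
    "<human>: Classify the sentiment: 'The food was wonderful.'\n"
    "<bot>: positive\n"
    "<human>: Classify the sentiment: 'I waited an hour and left.'\n"
    "<bot>: negative\n"
    "<human>: Classify the sentiment: 'The staff were friendly and helpful.'\n"
    "<bot>:"
)
pipe(few_shot_prompt)
```
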
## Weaknesses of the model

That said, there are several areas where we have more work to do, and we need your help! Some of these include:

- Knowledge-based closed question answering: The chatbot may hallucinate and give incorrect results. Be sure to fact check, and if possible, provide feedback with the corrected information.
- Coding tasks: The chatbot was not trained on a large enough corpus of source code to excel at writing code. We welcome contributions of additional datasets to improve this!
- Repetition: Sometimes the chatbot will repeat its response. We’re working to improve this, but in the meantime you can click the refresh button to start a new conversation.
- Context switching: If you change the topic in the middle of a conversation, the chatbot often cannot make the switch automatically and will continue to give answers related to the prior topic.
- Creative writing and longer answers: The chatbot does not generate long, creative text such as an essay or story.

We are excited to work with you to address these weaknesses by getting your feedback, bolstering data sets, and improving accuracy.

# Uses

## Direct Use

The model is intended for research purposes. Possible research areas and tasks include:

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of dialogue models or language models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on dialogue models or language models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use

The OpenChatKit community provides Pythia-Chat-Base-7B-v0.16 as an open source tool for building chatbots.
The community is not responsible for any misuse, malicious use, or out-of-scope use of the model.
It is the responsibility of the end user to ensure that the model is used in a responsible and ethical manner.

#### Out-of-Scope Use

Pythia-Chat-Base-7B-v0.16 is designed for use in chatbot applications and may not perform well for other use cases outside of its intended scope.
For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society.
It is important to consider the limitations of the model and to only use it for its intended purpose.

#### Misuse and Malicious Use

Pythia-Chat-Base-7B-v0.16 is designed for use in chatbot applications and should not be used for any other purpose.
Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the OpenChatKit community project.

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating fake news, misinformation, or propaganda
- Promoting hate speech, discrimination, or violence against individuals or groups
- Impersonating individuals or organizations without their consent
- Engaging in cyberbullying or harassment
- Defamatory content
- Spamming or scamming
- Sharing confidential or sensitive information without proper authorization
- Violating the terms of use of the model or the data used to train it
- Creating automated bots for malicious purposes such as spreading malware, phishing scams, or spamming

## Limitations

Pythia-Chat-Base-7B-v0.16, like other language model-based chatbots, has limitations that should be taken into consideration.
For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside of its training data.
We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.

## Training

**Training Data**

Please refer to [togethercomputer/OpenDataHub](https://github.com/togethercomputer/OpenDataHub).

**Training Procedure**

- **Hardware:** 8 x A100 GPUs
- **Optimizer:** [8bit-AdamW](https://github.com/TimDettmers/bitsandbytes)
- **Gradient accumulations:** 4
- **Batch:** 4 x 4 x 16 x 2048 = 524,288 tokens
- **Learning rate:** warmup to 1e-5 over 100 steps, then held constant (see the sketch below)

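The learning-rate schedule above (linear warmup, then constant) is straightforward to express in PyTorch. A minimal sketch, with plain AdamW standing in for the 8bit-AdamW used in training and `model` assumed to be loaded as in Quick Start:

```python
import torch

# Placeholder optimizer; training used bitsandbytes' 8bit-AdamW.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Scale factor ramps linearly to 1.0 over the first 100 steps, then stays there.
def lr_lambda(step: int) -> float:
    return min(1.0, (step + 1) / 100)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```
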
## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).