juewang committed
Commit 75c4c62
Parent: 705fc73

Update README.md

Files changed (1): README.md (+90 -20)
README.md CHANGED
@@ -9,43 +9,116 @@ language:
> TLDR: As part of OpenChatKit (codebase available [here](https://github.com/togethercomputer/OpenChatKit)),
> GPT-NeoXT-Chat-Base-20B is a 20B parameter language model, fine-tuned from EleutherAI’s GPT-NeoX with over 40 million instructions on 100% carbon negative compute.

- We base GPT-NeoXT-Chat-Base-20B on ElutherAI’s GPT-NeoX model, and fine-tune it with data focusing on dialog-style interactions.
We focused the tuning on several tasks such as question answering, classification, extraction, and summarization.
We’ve fine-tuned the model with a collection of 43 million high-quality instructions.
- Together partnered with LAION and Ontocord, who both helped curate the dataset the model is based on.
You can read more about this process and the availability of this dataset in LAION’s blog post [here](...).

## Model Details
- - **Developed by**: \[TODO\] Together Computer, LAION, Ontocord, ...
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 20B parameter open source chat model, fine-tuned from EleutherAI’s NeoX with over 40M instructions on 100% carbon negative compute
- **Resources for more information**: [GitHub Repository](https://github.com/togethercomputer/OpenChatKit).

- ## Examples
- \[TODO\] sync with the blog post
-
- ## Training Examples
-
- The training data consists of pairs of human queries and corresponding bot responses, with human queries prefixed with <human>: and bot responses prefixed with <bot>:.
- An example of the data format is as follows:
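
For reference, an illustrative exchange in the `<human>:`/`<bot>:` format described above (the query and response text here are hypothetical):

```markdown
<human>: Hello! Can you help me summarize a short news article?
<bot>: Of course! Please paste the article and I will summarize it for you.
```
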
# Uses
- \[TODO\]

## Direct Use
- \[TODO\]

- The model is intended for research purposes only. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
- - Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- - Research on generative models.

Excluded uses are described below.

@@ -97,9 +170,6 @@ We therefore welcome contributions from individuals and organizations, and encou
- **Batch:** 2 x 2 x 64 x 2048 = 524288 tokens
- **Learning rate:** warmup to 1e-6 for 100 steps and then kept constant

- ## Evaluation Results
- \[TODO\]
-

## Environmental Impact
\[TODO\]
**Stable Diffusion v1** **Estimated Emissions**
 
> TLDR: As part of OpenChatKit (codebase available [here](https://github.com/togethercomputer/OpenChatKit)),
> GPT-NeoXT-Chat-Base-20B is a 20B parameter language model, fine-tuned from EleutherAI’s GPT-NeoX with over 40 million instructions on 100% carbon negative compute.

+ GPT-NeoXT-Chat-Base-20B is based on EleutherAI’s GPT-NeoX model and is fine-tuned with data focusing on dialog-style interactions.
We focused the tuning on several tasks such as question answering, classification, extraction, and summarization.
We’ve fine-tuned the model with a collection of 43 million high-quality instructions.
+ Together partnered with LAION and Ontocord, who both helped curate the dataset the model is based on.
You can read more about this process and the availability of this dataset in LAION’s blog post [here](...).

## Model Details
+ - **Developed by**: Together Computer, LAION, Ontocord.
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 20B parameter open source chat model, fine-tuned from EleutherAI’s NeoX with over 40M instructions on 100% carbon negative compute
- **Resources for more information**: [GitHub Repository](https://github.com/togethercomputer/OpenChatKit).

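For context, here is a minimal sketch of loading and querying the model with Hugging Face `transformers`. The repo id, prompt, and sampling settings below are illustrative assumptions, not part of this card:

```python
# Minimal sketch: prompt the model in the <human>:/<bot>: dialog format.
# Assumes the repo id "togethercomputer/GPT-NeoXT-Chat-Base-20B" and that
# accelerate is installed so device_map="auto" can place the 20B weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "<human>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: I loved the new movie!\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, i.e. the bot's reply.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
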
+ ## Strengths of the model
+
+ There are several tasks that OpenChatKit excels at out of the box. These include:
+
+ - Example 1: Summarization and question answering within context.
+
+ ```markdown
+ **Summarize a long document into a single sentence and conduct question answering related to the document, with multiple rounds**
+
+ <user>: Last year, the travel industry saw a big rebound in demand — and that demand is showing no signs of slowing down this spring break travel season. Planes and hotels will be full, travelers will likely face long queues, cancellations, massive crowds and plenty of other travel nightmares. But perhaps the most frustrating thing you’ll have to prepare for this spring break is if your luggage goes missing. You’ve got to somehow track down your belongings that were in the airline’s care and then wait to be reunited — all while trying to enjoy the start of your long-awaited vacation. As far as travel nightmares go, lost luggage is up there as one of the worst.
+
+ To be honest up front, I am great at strategically losing things — or rather, losing stuff with style. (I hope you appreciate the “Toy Story” reference.) So when a product comes along that’s a perfect way to keep track of my belongings — especially when traveling — I’m adding it to my travel bag.
+
+ I’ve been a big fan of Apple’s $29 item tracker, dubbed AirTag. The long-rumored AirTag product finally launched in 2021, and while it’s not the first item tracker to exist, it has two key features that set it apart from the rest to make it the ultimate travel companion: access to the Find My network and Precision Finding technology.
+
+ The Federal Aviation Administration has even ruled that having an AirTag in your luggage is permitted and, in fact, not a safety hazard. The devices only contain 0.1 grams of lithium, which falls below the FAA’s limit.
+
+ “Luggage tracking devices powered by lithium metal cells that have 0.3 grams or less of lithium can be used on checked baggage,” the FAA said in a statement. “Apple AirTags meet this threshold; other luggage tracking devices may not.”
+
+ But, you may be wondering what makes them so special and why you should carry one in your checked luggage this spring break? Let’s explain why.
+
+ Summarize the above into a single paragraph.
+
+ <bot>: If you're traveling this spring break, you might want to consider investing in an item tracker to help you find your lost luggage. Apple's AirTag is the best option, according to a new report.
+
+ <user>: But isn't it unsafe to put AirTag in a luggage?
+
+ <bot>: The Federal Aviation Administration has ruled that having an AirTag in your luggage is permitted and, in fact, not a safety hazard.
+
+ <user>: Ah, why is that?
+
+ <bot>: The devices only contain 0.1 grams of lithium, which falls below the FAA’s limit.
+ ```
+
+ - Example 2: Extraction.
+
+ ```markdown
+ **Extract structured information from unstructured documents, as illustrated in the following example.**
+
+ <user>: Generate a table summarizing the options outlined in this email.
+
+ Team, we have to make a decision on the product plan for Goocrux v2.
+
+ The first option will take us 3 weeks of development, and will achieve performance of 80ms, but means that the UI has to change.
+
+ The second option will take us 4 weeks of development, and will achieve performance of 60ms, without any UI change.
+
+ The third option will take the longest -- 6 weeks of development, will achieve performance of 30ms, without any UI change.
+
+ A markdown table with four columns: (1) Option ID, (2) Development time, (3) Performance, (4) UI Change:
+
+ <bot>:
+ | Option ID | Development time | Performance | UI Change |
+ | --------- | ---------------- | ----------- | --------- |
+ | 1 | 3 weeks | 80ms | Yes |
+ | 2 | 4 weeks | 60ms | No |
+ | 3 | 6 weeks | 30ms | No |
+ ```
+
+ - Example 3: Classification.
+
+ ```markdown
+ **Classifying a sentence or paragraph into different categories.**
+
+ <user>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: Super excited about teaching Stanford’s first course on Large Language Models! Check the syllabus out here
+ <bot>: Positive
+
+ <user>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: How about the following sentence: It is raining outside and I feel so blue
+ <bot>: Negative
+ ```
+
+ In addition, the model does well on few-shot prompts: for both classification and extraction, it performs even better with a few examples, as in most HELM tasks. [Contact us](https://www.together.xyz/contact) if you’re interested in trying few-shot prompts with the model.
+
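For illustration, a hypothetical few-shot classification prompt in the same format (the example sentences below are invented for demonstration):

```markdown
<user>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: The food was cold and the service was slow.
<bot>: Negative

<user>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: The view from our room was breathtaking.
<bot>: Positive

<user>: Classify the sentiment of the following sentence into Positive, Neutral, or Negative: The package arrived on Tuesday.
<bot>:
```
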
+ ## Weaknesses of the model
+
+ That said, there are several areas where we have more work to do, and we need your help! Some of these include:
+
+ - Knowledge-based closed question answering: The chatbot may hallucinate and give incorrect results. Be sure to fact-check, and if possible provide feedback with the corrected information.
+ - Coding tasks: The chatbot was not trained on a large enough corpus of source code to excel at writing code. We welcome contributions of additional datasets to improve this!
+ - Repetition: Sometimes the chatbot will repeat its response. We’re working to improve this, but in the meantime you can click the refresh button to start a new conversation.
+ - Context switching: If you change the topic in the middle of a conversation, the chatbot often cannot make the switch automatically and will continue to give answers related to the prior topic.
+ - Creative writing and longer answers: The chatbot does not generate long, creative text such as an essay or story.
+
+ We are excited to work with you to address these weaknesses by getting your feedback, bolstering data sets, and improving accuracy.

# Uses

## Direct Use

+ The model is intended for research purposes. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
+ - Probing and understanding the limitations and biases of dialogue models or language models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
+ - Research on dialogue models or language models.

Excluded uses are described below.
 
- **Batch:** 2 x 2 x 64 x 2048 = 524288 tokens
- **Learning rate:** warmup to 1e-6 for 100 steps and then kept constant

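As a sketch of the schedule described above (an illustration, not the actual training code):

```python
# Linear warmup to a peak of 1e-6 over 100 steps, then held constant.
def learning_rate(step: int, peak_lr: float = 1e-6, warmup_steps: int = 100) -> float:
    if step < warmup_steps:
        # Ramp linearly toward the peak during warmup.
        return peak_lr * (step + 1) / warmup_steps
    # After warmup, the rate stays at the peak.
    return peak_lr

for step in (0, 49, 99, 100, 50_000):
    print(f"step {step:>6}: lr = {learning_rate(step):.2e}")
```
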
## Environmental Impact
\[TODO\]
**Stable Diffusion v1** **Estimated Emissions**