juewang committed
Commit 1bff121
1 Parent(s): 9caedd0

Update README.md

Files changed (1)
  1. README.md +16 -6
README.md CHANGED
@@ -6,9 +6,14 @@ language:
 
 # GPT-NeoXT-Chat-Base-20B
 
- GPT-NeoXT-Chat-Base-20B is a 20B parameter open source chat model, fine-tuned from EleutherAI’s GPT-NeoX with over 40 million instructions on 100% carbon negative compute.
- It is part of OpenChatKit (codebase available [here](https://github.com/togethercomputer/OpenChaT))
- a community project that enables the open source AI contributors to improve the datasets available for training large language models and chatbots.
 
 ## Model Details
 - **Developed by**: \[TODO\] Together Computer, LAION, Ontocord, ...
@@ -21,6 +26,13 @@ a community project that enables the open source AI contributors to improve the
 ## Examples
 \[TODO\] sync with the blog post
 
 # Uses
 \[TODO\]
 
@@ -78,11 +90,9 @@ We therefore welcome contributions from individuals and organizations, and encou
 \[TODO\]
 
 **Training Procedure**
- \[TODO\]
 
- \[TODO\]
 - **Hardware:** 2 x 8 x A100 GPUs
- - **Optimizer:** AdamW
 - **Gradient Accumulations**: 2
 - **Batch:** 2 x 2 x 64 x 2048 = 524288 tokens
 - **Learning rate:** warmup to 1e-6 for 100 steps and then kept constant
 
 
 # GPT-NeoXT-Chat-Base-20B
 
+ > TLDR: As part of OpenChatKit (codebase available [here](https://github.com/togethercomputer/OpenChaT)),
+ > GPT-NeoXT-Chat-Base-20B is a 20B parameter language model, fine-tuned from EleutherAI’s GPT-NeoX with over 40 million instructions on 100% carbon negative compute.
+
+ We base GPT-NeoXT-Chat-Base-20B on EleutherAI’s GPT-NeoX model and fine-tune it with data focused on dialog-style interactions.
+ We focus the tuning on several tasks, such as question answering, classification, extraction, and summarization.
+ The model is fine-tuned on a collection of 43 million high-quality instructions.
+ Together partnered with LAION and Ontocord, who both helped curate the dataset the model is based on.
+ You can read more about this process and the availability of this dataset in LAION’s blog post [here](...).
 
 ## Model Details
 - **Developed by**: \[TODO\] Together Computer, LAION, Ontocord, ...
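While the card's own Examples section is still a TODO in this commit, here is a minimal usage sketch for the chat model introduced above. It assumes the checkpoint is published on the Hugging Face Hub as `togethercomputer/GPT-NeoXT-Chat-Base-20B` and that prompts follow the `<human>:` / `<bot>:` convention described under Training Examples below; the repository id, prompt text, and generation settings are illustrative, not taken from this commit.

```python
# Illustrative sketch, not part of the commit: load the chat model with
# Hugging Face Transformers and generate one reply in the <human>:/<bot>: format.
# A 20B-parameter model needs substantial GPU memory, and device_map="auto"
# requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/GPT-NeoXT-Chat-Base-20B"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "<human>: What tasks was this model fine-tuned for?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated tokens (the bot's reply).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```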
 
 ## Examples
 \[TODO\] sync with the blog post
 
+ ## Training Examples
+
+ The training data consists of pairs of human queries and corresponding bot responses, with human queries prefixed with `<human>:` and bot responses prefixed with `<bot>:`.
+ An example of the data format is as follows:
+
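The example itself is left blank in this revision of the card. A minimal illustration of the format described above (the `<human>:` / `<bot>:` prefixes come from the text; the dialogue content is hypothetical):

```
<human>: Can you summarize what OpenChatKit is?
<bot>: OpenChatKit is an open-source project for building chat models, and GPT-NeoXT-Chat-Base-20B is its base chat model.
<human>: What kinds of tasks was it tuned for?
<bot>: Question answering, classification, extraction, and summarization, among others.
```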
 # Uses
 \[TODO\]
 
 
 \[TODO\]
 
 **Training Procedure**
 
 - **Hardware:** 2 x 8 x A100 GPUs
+ - **Optimizer:** [8bit-AdamW](https://github.com/TimDettmers/bitsandbytes)
 - **Gradient Accumulations**: 2
 - **Batch:** 2 x 2 x 64 x 2048 = 524288 tokens
 - **Learning rate:** warmup to 1e-6 for 100 steps and then kept constant
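A minimal sketch of how the optimizer and schedule listed above could be wired up, assuming PyTorch plus the linked bitsandbytes library. The placeholder model, the linear warmup shape, and the toy training loop are assumptions; only the 8-bit AdamW choice, the 1e-6 learning rate, and the 100-step warmup come from the list.

```python
# Illustrative sketch, not the project's training code: 8-bit AdamW from
# bitsandbytes with a 100-step warmup to a constant learning rate of 1e-6.
# Requires a CUDA GPU for the 8-bit optimizer states.
import torch
import bitsandbytes as bnb
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(2048, 2048).cuda()  # placeholder for the 20B model

optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-6)

def warmup_then_constant(step: int, warmup_steps: int = 100) -> float:
    # Scale the base lr linearly up to 1.0 over `warmup_steps`, then hold it constant.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = LambdaLR(optimizer, lr_lambda=warmup_then_constant)

for step in range(200):  # toy loop; real training iterates over the instruction data
    loss = model(torch.randn(8, 2048, device="cuda")).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```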