Update README.md
```diff
--- a/README.md
+++ b/README.md
@@ -15,10 +15,10 @@ It is based on an encoder-decoder transformer architecture, and can autoregressi
 FastChat-T5 was trained in April 2023.
 
 **Organizations developing the model:**
-The
+The FastChat developers.
 
 **Paper or resources for more information:**
-https://
+https://github.com/lm-sys/FastChat#FastChat-T5
 
 **License:**
 Apache License 2.0
@@ -28,7 +28,7 @@ https://github.com/lm-sys/FastChat/issues
 
 ## Intended use
 **Primary intended uses:**
-The primary use of FastChat-T5 is commercial usage
+The primary use of FastChat-T5 is the commercial use of large language models and chatbots. It can also be used for research purposes.
 
 **Primary intended users:**
 The primary intended users of the model are entrepreneurs and researchers in natural language processing, machine learning, and artificial intelligence.
@@ -37,9 +37,9 @@ The primary intended users of the model are entrepreneurs and researchers in nat
 70K conversations collected from ShareGPT.com.
 
 ## Training details
-It processes the ShareGPT data in the form of question answering. Each ChatGPT response is processed as an answer, and previous conversations
-The encoder bi-directionally encodes a question into a hidden representation. The decoder uses cross-attention to attend to this representation
-This model is fine-tuned for 3 epochs, with max learning rate 2e-5, warmup ratio 0.03, and
+It processes the ShareGPT data in the form of question answering. Each ChatGPT response is processed as an answer, and previous conversations between the user and ChatGPT are processed as the question.
+The encoder bi-directionally encodes a question into a hidden representation. The decoder uses cross-attention to attend to this representation while generating an answer uni-directionally from a start token.
+This model is fine-tuned for 3 epochs, with a maximum learning rate of 2e-5, a warmup ratio of 0.03, and a cosine learning rate schedule.
 
 ## Evaluation dataset
 A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and using GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
```
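The Training details lines added above compress a few concrete steps, so short, hedged sketches follow.

**Data preprocessing:**
A minimal sketch of the question-answering conversion described in the diff, assuming the common ShareGPT dump layout (a `conversations` list of `{"from": ..., "value": ...}` turns). The field names and the helper itself are illustrative, not FastChat's actual preprocessing code.

```python
# Turn one ShareGPT-style conversation into (question, answer) training pairs:
# each ChatGPT reply becomes an answer, and all turns before it, joined in
# order, become the question. The "conversations"/"from"/"value" field names
# are assumptions about the dump layout, not FastChat's real schema.

def conversation_to_qa_pairs(conversation):
    pairs = []
    history = []
    for turn in conversation["conversations"]:
        if turn["from"] == "gpt":
            # All prior turns, in order, form the question for this answer.
            question = "\n".join(history)
            pairs.append({"question": question, "answer": turn["value"]})
        history.append(f'{turn["from"]}: {turn["value"]}')
    return pairs

example = {
    "conversations": [
        {"from": "human", "value": "What is FastChat-T5?"},
        {"from": "gpt", "value": "A chatbot fine-tuned from Flan-T5."},
        {"from": "human", "value": "How much data was used?"},
        {"from": "gpt", "value": "About 70K ShareGPT conversations."},
    ]
}
for pair in conversation_to_qa_pairs(example):
    print(pair)
```

Note that the second pair's question contains the full prior exchange, matching the card's statement that previous conversations are processed as the question.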
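**Architecture:**
The encoder-decoder flow described in the diff (bidirectional encoding of the question, uni-directional decoding of the answer via cross-attention) is what Hugging Face `transformers` executes for any seq2seq model under `generate()`. A minimal inference sketch, assuming the weights are hosted under the `lmsys/fastchat-t5-3b-v1.0` checkpoint name:

```python
# Minimal inference sketch with Hugging Face transformers. The encoder reads
# the whole question bidirectionally; generate() then starts the decoder from
# its start token and extends the answer one token at a time, cross-attending
# to the encoder's hidden states. The checkpoint name is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "lmsys/fastchat-t5-3b-v1.0"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Why use an encoder-decoder model for chat?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```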
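**Fine-tuning recipe:**
The reported hyperparameters (3 epochs, peak learning rate 2e-5, warmup ratio 0.03, cosine schedule) map directly onto Hugging Face `TrainingArguments`. The sketch below shows only that mapping; the batch size and output path are placeholders, not values reported in this card.

```python
# The card's fine-tuning hyperparameters expressed as TrainingArguments.
# Only the epoch count, learning rate, warmup ratio, and scheduler come
# from the card; the remaining fields are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fastchat-t5-finetune",  # placeholder path
    num_train_epochs=3,                 # from the card
    learning_rate=2e-5,                 # peak LR, from the card
    warmup_ratio=0.03,                  # from the card
    lr_scheduler_type="cosine",         # from the card
    per_device_train_batch_size=4,      # placeholder, not reported here
)
```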
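**Evaluation protocol:**
The GPT-4-as-judge evaluation under "Evaluation dataset" amounts to asking GPT-4 to compare model answers for each of the 80 benchmark questions. A hedged sketch of that loop using the OpenAI Python client; the prompt wording and the `judge` helper are illustrative assumptions, not the actual Vicuna evaluation code (see https://vicuna.lmsys.org/ for that).

```python
# Illustrative GPT-4-as-judge loop: for each benchmark question, ask GPT-4
# which of two model answers is better. The prompt wording is an assumption,
# not the actual Vicuna evaluation prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, answer_a: str, answer_b: str) -> str:
    prompt = (
        "Given a question and two answers, say which answer is better "
        "and briefly explain why.\n\n"
        f"Question: {question}\n\nAnswer A: {answer_a}\n\nAnswer B: {answer_b}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

verdict = judge(
    "Explain overfitting to a beginner.",
    "Overfitting is when a model memorizes its training data...",
    "Overfitting means the training loss reached zero...",
)
print(verdict)
```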