Dacheng Li committed on
Commit 00b59f7
1 Parent(s): eeb5222

Create README.md

Files changed (1)
  1. README.md +45 -0
README.md ADDED

---
license: apache-2.0
inference: false
---

# fastchat-t5 Model Card

## Model details

**Model type:**
fastchat-t5 is an open-source chatbot trained by fine-tuning Flan-t5-xl (3B parameters) on user-shared conversations collected from ShareGPT.
It is based on an encoder-decoder transformer architecture and can autoregressively generate responses to users' inputs.
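
Because the model exposes a standard T5-style sequence-to-sequence interface, it can be loaded with the Hugging Face `transformers` Seq2Seq classes. The snippet below is a minimal sketch only: the repository id and generation settings are assumptions, not something specified by this card.

```python
# Minimal inference sketch (assumed repo id and settings, for illustration only).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The encoder reads the prompt bidirectionally; the decoder then generates
# the response autoregressively, one token at a time.
prompt = "What are the three primary colors?"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```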

**Model date:**
fastchat-t5 was trained in April 2023.

**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, MBZUAI, and UC San Diego.

**Paper or resources for more information:**
https://vicuna.lmsys.org/

**License:**
Apache License 2.0

**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues

## Intended use
**Primary intended uses:**
The primary intended use of fastchat-t5 is commercial use of large language models and chatbots. It can also be used for research purposes.

**Primary intended users:**
The primary intended users of the model are entrepreneurs and researchers in natural language processing, machine learning, and artificial intelligence.

## Training dataset
70K conversations collected from ShareGPT.com.

## Training details
The ShareGPT data is processed into a question-answering format: each ChatGPT response is treated as the answer, and the preceding conversation between the user and ChatGPT is treated as the question.
The encoder bi-directionally encodes a question into a hidden representation. The decoder uses cross-attention to attend to this representation while generating an answer uni-directionally from a start token.
The model is fine-tuned for 3 epochs with a maximum learning rate of 2e-5, a warmup ratio of 0.03, and a cosine learning rate schedule.
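
For illustration, the sketch below shows one way the question-answer formatting and the hyperparameters above could be expressed; the helper function, conversation schema, and `TrainingArguments` usage are assumptions, not the actual FastChat training code.

```python
# Illustrative sketch only, not the actual FastChat training pipeline.
from transformers import TrainingArguments

def conversation_to_qa_pairs(turns):
    """Convert one ShareGPT-style conversation (assumed to be a list of
    {"from", "value"} dicts, with "from" in {"human", "gpt"}) into
    (question, answer) pairs: each assistant reply becomes an answer, and
    all preceding turns are concatenated to form the question."""
    pairs, history = [], []
    for turn in turns:
        if turn["from"] == "gpt" and history:
            pairs.append(("\n".join(history), turn["value"]))
        history.append(f'{turn["from"]}: {turn["value"]}')
    return pairs

# Hyperparameters stated in this card: 3 epochs, peak LR 2e-5,
# warmup ratio 0.03, cosine learning rate schedule.
training_args = TrainingArguments(
    output_dir="fastchat-t5-finetune",
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
)
```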

## Evaluation dataset
A preliminary evaluation of model quality was conducted by creating a set of 80 diverse questions and using GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
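
The snippet below is a rough sketch of this GPT-4-as-judge setup using the `openai` Python client; the prompt wording and comparison format are assumptions rather than the exact evaluation procedure (see the link above for details).

```python
# Illustrative GPT-4-as-judge sketch; not the actual evaluation script.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 to compare two assistants' answers to the same question."""
    prompt = (
        f"Question: {question}\n\n"
        f"Assistant A: {answer_a}\n\n"
        f"Assistant B: {answer_b}\n\n"
        "Evaluate the helpfulness, relevance, accuracy, and level of detail of "
        "each answer, then state which assistant answered better overall."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```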