DachengLi committed
Commit b839f54
Parent: 7fc6748

Create README.md

Files changed (1): README.md (+35 -0)
README.md ADDED

---
inference: false
---

# longchat-7b-16k Model Card

## Model details

**Model type:**
longchat-7b-16k is an open-source chatbot trained by fine-tuning llama-7b on user-shared conversations collected from ShareGPT, using the condensing rotary embedding technique reported in the [blog](https://lmsys.org/blog/2023-06-29-longchat).
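
For intuition, the condensation divides rotary position indices by the ratio between the target context length (16,384) and llama-7b's pre-trained context length (2,048), so every position in a 16K-token input lands inside the positional range the base model already knows; the model is then fine-tuned at the longer length. A minimal sketch of that idea (assumed function name and defaults, not the LongChat training code):

```python
# Minimal, illustrative sketch of condensing rotary embeddings
# (assumed names/defaults; not the LongChat training code).
import torch

def condensed_rotary_cos_sin(seq_len: int,
                             head_dim: int = 128,
                             base: float = 10000.0,
                             pretrained_ctx: int = 2048,
                             target_ctx: int = 16384):
    """Cos/sin tables for RoPE with positions condensed by target_ctx / pretrained_ctx."""
    ratio = target_ctx / pretrained_ctx                      # 16384 / 2048 = 8
    # Standard RoPE inverse frequencies, one per pair of head dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # Condensation: the token at position i is embedded as if it sat at i / ratio,
    # so all 16K positions stay inside the 0..2047 range seen during pre-training.
    positions = torch.arange(seq_len).float() / ratio
    freqs = torch.outer(positions, inv_freq)                 # (seq_len, head_dim // 2)
    emb = torch.cat((freqs, freqs), dim=-1)                  # (seq_len, head_dim)
    return emb.cos(), emb.sin()

cos, sin = condensed_rotary_cos_sin(seq_len=16384)
print(cos.shape)  # torch.Size([16384, 128])
```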

**Model date:**
longchat-7b-16k was trained in June 2023.

**Organizations developing the model:**
The LongChat developers: Dacheng Li*, Rulin Shao*, Anze Xie, Ying Sheng, Lianmin Zheng, Ion Stoica, Xuezhe Ma, and Hao Zhang

**Paper or resources for more information:**
https://github.com/DachengLi1/LongChat

**Where to send questions or comments about the model:**
https://github.com/DachengLi1/LongChat

## Intended use
**Primary intended uses:**
The primary use of longchat-7b-16k is for research purposes.

**Primary intended users:**
The primary intended users of the model are researchers in natural language processing, machine learning, and artificial intelligence.

## Training dataset
80K conversations collected from ShareGPT.com.

## Evaluation dataset
A preliminary evaluation of model quality was conducted with our released [LongEval](https://github.com/DachengLi1/LongChat) benchmark.
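
For context, one of the tests LongEval runs is a line-retrieval task: the model is shown a long list of numbered lines and asked to recall the value stored on one of them, with accuracy measured at increasing context lengths. Below is a simplified, hypothetical sketch of such a probe using Hugging Face transformers; the model id, prompt format, and generation settings are assumptions rather than the released LongEval code.

```python
# Hypothetical, simplified LongEval-style "line retrieval" probe.
# NOT the released LongEval code; model id, prompt format, and
# generation settings are assumptions for illustration only.
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def build_line_retrieval_prompt(num_lines: int = 200, seed: int = 0):
    """Generate many numbered lines plus a question about one of them."""
    rng = random.Random(seed)
    values = [rng.randint(10000, 99999) for _ in range(num_lines)]
    target = rng.randrange(num_lines)
    lines = "\n".join(f"line {i}: REGISTER_CONTENT is <{values[i]}>" for i in range(num_lines))
    question = f"\nTell me what the REGISTER_CONTENT in line {target} is."
    return lines + question, values[target]

prompt, expected = build_line_retrieval_prompt()

model_id = "lmsys/longchat-7b-16k"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
answer = tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("expected:", expected, "| model said:", answer)
```

LongEval runs probes of this kind (topic retrieval and line retrieval) over a range of context lengths and reports retrieval accuracy.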