sharpbai commited on
Commit
09af21e
1 Parent(s): 6f8f1b1

Upload /README.md with huggingface_hub

README.md ADDED
---
inference: false
---

# vicuna-13b-v1.3

*The weight files are split into 405 MB chunks for convenient, fast parallel downloads.*

A version of [lmsys/vicuna-13b-v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) with the weights split into 405 MB chunks.

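If you want to pull all of the chunks at once, a minimal download sketch with `huggingface_hub` is shown below. The repository id is an assumption based on this repo's name, and `max_workers` simply controls how many files are fetched in parallel.

```python
# Minimal sketch (not part of the original card): fetch the split weight
# chunks in parallel with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="sharpbai/vicuna-13b-v1.3",  # assumed repo id for this split-weight mirror
    max_workers=8,                       # number of files downloaded concurrently
)
print("weights downloaded to", local_dir)
```
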
The original model card is reproduced below.

-----------------------------------------

# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api

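As a rough illustration of loading these split weights outside FastChat, the sketch below uses the standard `transformers` loading path. The repository id and the Vicuna-style prompt template are assumptions, not part of the original card.

```python
# Hedged sketch: load the split checkpoint like any Hugging Face LLaMA model
# and generate a reply from a Vicuna-style prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sharpbai/vicuna-13b-v1.3"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is supervised instruction fine-tuning? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
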
## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

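To make the data format concrete, here is a hedged sketch (not the authors' training code) of how one ShareGPT-style conversation could be flattened into a single supervised fine-tuning string; the field names and separator style are assumptions.

```python
# Illustrative only: flatten a ShareGPT-style conversation into one training string.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_training_text(conversation):
    """conversation: list of {"from": "human" | "gpt", "value": str} turns (assumed schema)."""
    parts = [SYSTEM]
    for turn in conversation:
        role = "USER" if turn["from"] == "human" else "ASSISTANT"
        parts.append(f"{role}: {turn['value']}")
    return " ".join(parts)

example = [
    {"from": "human", "value": "Summarize what Vicuna is in one sentence."},
    {"from": "gpt", "value": "Vicuna is a chat assistant fine-tuned from LLaMA on ShareGPT conversations."},
]
print(build_training_text(example))
```
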
## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).