Update README.md
README.md CHANGED
```diff
@@ -1,15 +1,13 @@
 ---
-license: apache-2.0
 inference: false
 ---
 
 **NOTE: New version available**
-Please check out a newer version of the weights
-If you still want to use this old version, please see the compatibility and difference between different versions
+Please check out a newer version of the weights [here](https://huggingface.co/lmsys/vicuna-13b-v1.3).
+If you still want to use this old version, please see the compatibility and difference between different versions [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
 
 **NOTE: This "delta model" cannot be used directly.**
-Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights.
-See https://github.com/lm-sys/FastChat#vicuna-weights for instructions.
+Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See [instructions](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0).
 
 <br>
 <br>
@@ -29,14 +27,12 @@ Vicuna was trained between March 2023 and April 2023.
 The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.
 
 **Paper or resources for more information:**
-https://
-
-**License:**
-Apache License 2.0
+https://lmsys.org/blog/2023-03-30-vicuna/
 
 **Where to send questions or comments about the model:**
 https://github.com/lm-sys/FastChat/issues
 
+
 ## Intended use
 **Primary intended uses:**
 The primary use of Vicuna is research on large language models and chatbots.
@@ -48,4 +44,5 @@ The primary intended users of the model are researchers and hobbyists in natural
 70K conversations collected from ShareGPT.com.
 
 ## Evaluation dataset
-A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs.
+A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs.
+See https://lmsys.org/blog/2023-03-30-vicuna/ for more details.
```