---
license: apache-2.0
inference: true
---

**NOTE: This GGML conversion is primarily for use with llama.cpp.**  
- 7B parameters
- 4-bit quantized
- Based on version 1.1
- Used PR "More accurate Q4_0 and Q4_1 quantizations #896" (should be closer in quality to unquantized)
- Uncensored variant is available, but it's based on version 1.0
- For q4_2, "Q4_2 ARM #1046" was used. Will update regularly if new changes are made.
- **Choosing between q4_0, q4_1, and q4_2:**
  - 4_0 is the fastest. The quality is the poorest.
  - 4_1 is a lot slower. The quality is noticeably better.
  - 4_2 is almost as fast as 4_0 and about as good as 4_1 **on Apple Silicon**. On Intel/AMD it's hardly better or faster than 4_1.

- A 13B version of this model can be found here: https://huggingface.co/eachadea/ggml-vicuna-13b-1.1
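
A minimal loading sketch using the llama-cpp-python bindings (`pip install llama-cpp-python`). The model filename below is a placeholder for whichever quantization you downloaded, and the `USER:`/`ASSISTANT:` prompt format follows the common Vicuna v1.1 convention rather than anything stated on this card:

```python
# Sketch, not an official usage example. Assumes llama-cpp-python is
# installed and a GGML file from this repo has been downloaded locally.
from llama_cpp import Llama

# Placeholder filename -- point this at the q4_0 / q4_1 / q4_2 file you downloaded.
llm = Llama(model_path="./ggml-vic7b-q4_2.bin", n_ctx=2048)

output = llm(
    "USER: What is the capital of France? ASSISTANT:",
    max_tokens=128,
    stop=["</s>"],  # v1.1 ends assistant turns with the EOS token (see below)
)
print(output["choices"][0]["text"])
```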

# Vicuna Model Card

## Model details

**Model type:**
Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
Vicuna was trained between March 2023 and April 2023.

**Organizations developing the model:**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.

**Paper or resources for more information:**
https://vicuna.lmsys.org/

**License:**
Apache License 2.0

**Where to send questions or comments about the model:**
https://github.com/lm-sys/FastChat/issues

## Intended use
**Primary intended uses:**
The primary use of Vicuna is research on large language models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## Training dataset
70K conversations collected from ShareGPT.com.
(The uncensored variant uses 48K conversations; roughly 22K worth of low-quality data was removed – see https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered)

## Evaluation dataset
A preliminary evaluation of model quality was conducted by creating a set of 80 diverse questions and using GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.

## Major updates of weights v1.1
- Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries (see the sketch after this list).
- Fix the supervised fine-tuning loss computation for better model quality.
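
As an illustration of the new separator, here is a minimal prompt-assembly sketch. The `USER:`/`ASSISTANT:` role labels follow the commonly used v1.1 convention and are an assumption here, not something stated on this card:

```python
# Sketch of v1.1-style prompt assembly: each assistant turn now ends with
# the EOS token "</s>" instead of the old "###" separator, so generation
# can stop cleanly at EOS. Role labels are the common v1.1 convention.
def build_prompt(turns: list[tuple[str, str]]) -> str:
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg} ASSISTANT: {assistant_msg}</s>")
    return "".join(parts)

history = [("Hello!", "Hi there! How can I help you today?")]
prompt = build_prompt(history) + "USER: Tell me a joke. ASSISTANT:"
# Pass `prompt` to the model and stop generation at "</s>".
```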