---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# LmSys' Vicuna 13B 1.3.0 merged with Kaio Ken's SuperHOT 8K fp16

These files are fp16 pytorch format model files for [LmSys' Vicuna 13B 1.3.0](https://huggingface.co/lmsys/vicuna-13b-v1.3) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 30B LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged onto the base model, and 8K context can then be achieved during inference by loading with `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length.
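
To make the loading requirement concrete, here is a minimal sketch of loading the merged model for long-context inference with Transformers. The repo name, dtype handling, and generation settings are illustrative assumptions, not part of this card:

```python
# Minimal loading sketch. Assumptions: model_id is a placeholder for this
# merged fp16 repo, and device_map="auto" requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/MERGED-FP16-REPO"  # hypothetical placeholder name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the fp16 weights as shipped
    device_map="auto",
    trust_remote_code=True,  # loads the repo's custom code that extends context to 8K
)

prompt = "USER: Give me a one-sentence summary of position interpolation.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
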

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-13B-1-3-SuperHOT-8K-GPTQ)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/%%REPO_FP16%%)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.3)

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
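
For readers unfamiliar with the patch, the core idea is position interpolation: rotary position indices are multiplied by the scaling factor so that 8192 positions fall inside the 2048-position range the base model was trained on. The sketch below illustrates that idea and is not kaiokendev's actual monkeypatch code:

```python
# Illustrative position-interpolation sketch (assumption: a LLaMA-style
# rotary embedding; not the actual monkeypatch implementation).
import torch

SCALE = 0.25        # scaling factor named above
MAX_SEQ_LEN = 8192  # extended maximum sequence length

def interpolated_rope_angles(dim: int, seq_len: int = MAX_SEQ_LEN,
                             base: float = 10000.0) -> torch.Tensor:
    """Rotary angles with positions compressed by SCALE (8192 * 0.25 = 2048)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * SCALE  # the key change
    return torch.outer(positions, inv_freq)            # shape (seq_len, dim // 2)
```
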

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)


#### Training Details
I trained the LoRA with the following configuration (a hypothetical tooling sketch follows the list):
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 of 0.99, epsilon of 1e-5
- Trained on 4-bit base model

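As a rough guide to how those settings map onto common tooling, here is a sketch using the PEFT and Transformers APIs. The author's actual training script is not published here, so treat every binding below as an assumption read off the list above:

```python
# Hypothetical mapping of the listed hyperparameters onto peft/transformers;
# this is not the author's script, just the list re-expressed as config objects.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=4,            # Rank = 4
    lora_alpha=8,   # Alpha = 8
    lora_dropout=0.0,  # no dropout
    bias="none",       # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="superhot-lora",  # hypothetical output path
    learning_rate=3e-4,
    num_train_epochs=3,
    weight_decay=0.1,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-5,
)
```
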

# Original model card: LmSys' Vicuna 13B 1.3.0 merged with Kaio Ken's SuperHOT 8K


# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.

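FastChat's CLI and APIs apply the conversation template for you; if you prompt the weights directly, the widely documented Vicuna v1.1-style format looks like the sketch below (the helper name and example message are illustrative, not from LMSYS):

```python
# Vicuna-style single-turn prompt. Assumption: the common v1.1 conversation
# template that FastChat uses for these weights.
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_prompt(user_message: str) -> str:
    """Format one user turn in the Vicuna v1.1 style."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("What is Vicuna fine-tuned on?"))
```
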

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)