Update README.md
---
datasets:
- c-s-ale/alpaca-gpt4-data
pipeline_tag: text2text-generation
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

## GPT4-Alpaca-LoRA_MLP-65B GPTQ

These files are the result of merging the [LoRA weights of chtan's gpt4-alpaca-lora_mlp-65b](https://huggingface.co/chtan/gpt4-alpaca-lora_mlp-65b) with the original LLaMA 65B model.
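
For reference, a merge like this is typically done along the following lines with the `peft` library. This is a sketch, not the exact script used to produce these files; the base checkpoint name and output directory are assumptions:

```python
# Sketch of a typical LoRA merge with peft; not the exact script used here.
# The base checkpoint name and output directory are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "huggyllama/llama-65b"            # assumed LLaMA 65B base in HF format
LORA = "chtan/gpt4-alpaca-lora_mlp-65b"  # the LoRA weights named above

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, LORA).merge_and_unload()

merged.save_pretrained("./gpt4-alpaca-lora_mlp-65B-merged")
AutoTokenizer.from_pretrained(BASE).save_pretrained("./gpt4-alpaca-lora_mlp-65B-merged")
```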

* [4bit and 5bit GGML models for CPU inference in llama.cpp](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-GGML)
* [float16 unquantised model for GPU inference and further conversions](https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-HF)
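
The GGML files are meant for llama.cpp, but they can also be driven from Python via the llama-cpp-python bindings. A rough sketch, assuming a llama-cpp-python release that still reads GGML files (newer releases expect GGUF); the filename and Alpaca-style prompt template are placeholders not confirmed by this card excerpt:

```python
# Sketch: CPU inference on one of the GGML files with llama-cpp-python.
# The filename is a placeholder; use the actual file from the GGML repo above.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt4-alpaca-lora_mlp-65B.ggmlv3.q4_0.bin",  # placeholder
    n_ctx=2048,   # context window
    n_threads=8,  # CPU threads to use
)

# Alpaca-style template: an assumption, not confirmed by this card excerpt.
prompt = "### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n"

out = llm(prompt, max_tokens=128, stop=["###"])
print(out["choices"][0]["text"])
```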
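
The float16 repo loads directly with transformers for GPU inference. A minimal sketch, assuming sufficient VRAM: a 65B model in float16 holds roughly 130GB of weights, so `device_map="auto"` is used to shard them across whatever GPUs are available:

```python
# Minimal sketch: GPU inference with the float16 repo listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/gpt4-alpaca-lora_mlp-65B-HF"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # shard ~130GB of weights across available GPUs
)

# Alpaca-style prompt: an assumption, not confirmed by this card excerpt.
prompt = "### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```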

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card

This repo provides the training checkpoint of LLaMA on the alpaca_data_gpt4 dataset via LoRA [MLP] on 8xA100 (80G).
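
Here "LoRA [MLP]" indicates that the LoRA adapters were attached to the MLP blocks of each transformer layer rather than the attention projections. A sketch of such a configuration with the `peft` library (the base checkpoint, rank, alpha and dropout are illustrative assumptions, not the values used for this checkpoint):

```python
# Sketch of a "LoRA [MLP]" setup with peft: adapters on the LLaMA MLP
# projections instead of the attention layers. Hyperparameters are assumed.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b",  # assumed base checkpoint in HF format
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=16,               # adapter rank (assumed)
    lora_alpha=32,      # scaling factor (assumed)
    lora_dropout=0.05,  # (assumed)
    target_modules=["gate_proj", "up_proj", "down_proj"],  # LLaMA MLP blocks
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapter weights are trainable
```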