---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# rewoo's Planner 7B fp16

These files are fp16 pytorch format model files for [rewoo's Planner 7B](https://huggingface.co/rewoo/planner_7B).

They are the result of merging the LoRA adapter at the above repo with the base LLaMA 7B model.
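
Since these are standard fp16 pytorch weights, they should load like any other LLaMA-architecture checkpoint. Below is a minimal loading sketch using `transformers`; the prompt wording and generation settings are illustrative assumptions, not part of the original card.

```python
# Minimal loading sketch (assumption: standard transformers API for a
# LLaMA-architecture fp16 checkpoint; repo id taken from the list below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Planner-7B-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights, as shipped in this repo
    device_map="auto",          # requires accelerate; places layers on available GPU(s)
)

# Alpaca-style prompt: an assumption based on the Alpaca-LoRA training
# script shown in the original model card below.
prompt = "### Instruction:\nPlan the steps to make a cup of tea.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```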

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Planner-7B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Planner-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Planner-7B-fp16)

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill; Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: rewoo's Planner 7B

Alpaca-LoRA adapter weights fine-tuned on the following instruction dataset:

https://huggingface.co/datasets/rewoo/planner_instruction_tuning_2k/blob/main/README.md

Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation.

We used the following parameters:

```
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'rewoo/planner_instruction_tuning_2k' \
    --output_dir './lora-alpaca-planner' \
    --batch_size 128 \
    --micro_batch_size 8 \
    --num_epochs 10 \
    --learning_rate 1e-4 \
    --cutoff_len 1024 \
    --val_set_size 200 \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,v_proj]' \
    --train_on_inputs \
    --group_by_length \
    --resume_from_checkpoint 'tloen/alpaca-lora-7b'
```
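
For reference, a merge like the one described at the top of this card could be reproduced roughly as follows. This is a sketch using the `peft` library's standard adapter-merging API; the exact procedure used to produce these files is not documented here, and the model ids are taken from the text above.

```python
# Sketch: loading the LoRA adapter onto the base model and merging it,
# approximating the merge described at the top of this card (assumption:
# current peft/transformers APIs, not the exact script that was used).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # base model named in the command above
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "rewoo/planner_7B")  # LoRA adapter repo

# Fold the LoRA deltas into the base weights and drop the adapter wrappers,
# yielding a plain fp16 checkpoint that can be saved and re-shared.
merged = model.merge_and_unload()
merged.save_pretrained("./planner-7b-merged")

tokenizer = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.save_pretrained("./planner-7b-merged")
```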