Tags: parameters guide, samplers guide, model generation, role play settings, quant selection, arm quants, iq quants vs q quants, optimal model setting, gibberish fixes, coherence, instructing following, quality generation, chat settings, quality settings, llamacpp server, llamacpp, lmstudio, sillytavern, koboldcpp, backyard, ollama, model generation steering, steering, model generation fixes, text generation webui, ggufs, exl2, full precision, quants, imatrix, neo imatrix
Update README.md
README.md CHANGED
@@ -142,6 +142,8 @@ https://github.com/ggerganov/llama.cpp
 
 (scroll down on the main page for more apps/programs to use GGUFs too that connect to / use the LLAMA-CPP package.)
 
+---
+
 DETAILED NOTES ON PARAMETERS, SAMPLERS and ADVANCED SAMPLERS:
 
 For additional details on these samplers settings (including advanced ones) you may also want to check out:
@@ -150,15 +152,21 @@ https://github.com/oobabooga/text-generation-webui/wiki/03-%E2%80%90-Parameters-
 
 (NOTE: Not all of these "options" are available for GGUFS, including when you use "llamacpp_HF" loader in "text-generation-webui" )
 
-Additional Links:
-
-=> DRY => https://www.reddit.com/r/KoboldAI/comments/1e49vpt/dry_sampler_questionsthat_im_sure_most_of_us_are/
-=> DRY => https://www.reddit.com/r/KoboldAI/comments/1eo4r6q/dry_settings_questions/
-=> Samplers (videos) : https://gist.github.com/kalomaze/4473f3f975ff5e5fade06e632498f73e
-=> Creative Writing -> https://www.reddit.com/r/LocalLLaMA/comments/1c36ieb/comparing_sampling_techniques_for_creative/
-=> Parameters => https://arxiv.org/html/2408.13586v1
-=> Stats on some parameters => https://github.com/ZhouYuxuanYX/Benchmarking-and-Guiding-Adaptive-Sampling-Decoding-for-LLMs
+Additional Links (on parameters, samplers and advanced samplers):
+
+DRY => https://github.com/oobabooga/text-generation-webui/pull/5677
+
+DRY => https://www.reddit.com/r/KoboldAI/comments/1e49vpt/dry_sampler_questionsthat_im_sure_most_of_us_are/
+
+DRY => https://www.reddit.com/r/KoboldAI/comments/1eo4r6q/dry_settings_questions/
+
+Samplers : https://gist.github.com/kalomaze/4473f3f975ff5e5fade06e632498f73e
+
+Creative Writing -> https://www.reddit.com/r/LocalLLaMA/comments/1c36ieb/comparing_sampling_techniques_for_creative/
+
+General Parameters => https://arxiv.org/html/2408.13586v1
 
+Benchmarking-and-Guiding-Adaptive-Sampling-Decoding => https://github.com/ZhouYuxuanYX/Benchmarking-and-Guiding-Adaptive-Sampling-Decoding-for-LLMs
 
 ---
 
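The DRY sampler referenced in the links above penalizes tokens that would extend a sequence the model has already generated, with the penalty growing exponentially in the length of the repeated run. A minimal sketch of that idea, assuming the multiplier/base/allowed_length parameterization described in the text-generation-webui pull request (function name, defaults, and the simplified overlap handling here are illustrative, not the exact implementation):

```python
# Sketch of the DRY ("Don't Repeat Yourself") repetition-penalty idea.
# penalty = multiplier * base ** (match_length - allowed_length),
# applied to any candidate token that would continue an earlier repeat.

def dry_penalties(tokens, vocab_size, multiplier=0.8, base=1.75, allowed_length=2):
    """Return a per-token penalty list to subtract from the logits.

    For each earlier position i, measure how long a run ending just before
    tokens[i] matches the current end of the sequence; if that run is longer
    than `allowed_length`, penalize tokens[i] as a likely repeat continuation.
    (A real implementation also guards against the match overlapping itself.)
    """
    penalties = [0.0] * vocab_size
    n = len(tokens)
    for i in range(n):
        match_len = 0
        # Walk backwards, comparing the context before position i
        # with the suffix of the whole sequence.
        while (match_len < i and match_len < n
               and tokens[i - 1 - match_len] == tokens[n - 1 - match_len]):
            match_len += 1
        if match_len > allowed_length:
            tok = tokens[i]
            penalty = multiplier * base ** (match_len - allowed_length)
            penalties[tok] = max(penalties[tok], penalty)
    return penalties


# Example: the sequence ends with "1 2", which already occurred at the start,
# so token 3 (which followed "1 2" before) gets penalized.
tokens = [1, 2, 3, 1, 2]
print(dry_penalties(tokens, vocab_size=5, allowed_length=1))
```

Because the penalty is exponential in the excess match length, short incidental repeats (below `allowed_length`) are untouched while long verbatim loops are suppressed hard, which is why the linked threads recommend DRY over a flat repetition penalty for chat and role-play use.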