- LLM coping mechanisms - Part 5 (pinned) · 145 replies · #12 opened 5 months ago by Lewdiculous
- Llama-3 SillyTavern Presets Sharing (pinned) · 38 replies · #5 opened 6 months ago by Lewdiculous
- Is it normal that Gemma models do not work with kv-cache? · #19 opened 3 months ago by SolidSnacke
- Emphasis DFSM - Nexesenex/kobold.cpp · 2 replies · #18 opened 4 months ago by Lewdiculous
- [llama.cpp PR#7527] GGUF Quantized KV Support · 22 replies · #15 opened 5 months ago by Lewdiculous
- [llama.cpp PR#6844] Custom Quantizations · 6 replies · #8 opened 6 months ago by Virt-io
- Sampling Resources and Conjecture · 43 replies · #2 opened 7 months ago by Clevyby