---
inference: false
license: other
---
# WBrown's Cassandra 6.9B GGML
These files are GGML format model files for WBrown's Cassandra 6.9B.
Please note that these GGML files are not compatible with llama.cpp, nor currently with text-generation-webui. Please see below for a list of tools known to work with these model files.
## Repositories available
- 4-bit GPTQ models for GPU inference
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference
- Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions
## Prompt template: NovelAI-style
Examples:

```
[ Author: Haven Ross; Title: The Biker's Buddy; Tags: friendship, slice of life, funny; Genre: thriller ]
```

```
[Style: text adventure]
The Deepest Dungeon, a Text Adventure
> examine room
```

If using the latter, set `>` as a stop token in your UI.
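For concreteness, here is a tiny sketch of assembling a prompt in this style. The helper function is purely hypothetical and the field values are the illustrative ones from the example above; the template itself is just plain text, not a special API.

```python
# A minimal, hypothetical helper for building a NovelAI-style metadata header.
# The bracketed line is ordinary prompt text; the values below are the
# illustrative ones from the example above.
def novelai_header(author: str, title: str, tags: list[str], genre: str) -> str:
    return (
        f"[ Author: {author}; Title: {title}; "
        f"Tags: {', '.join(tags)}; Genre: {genre} ]"
    )

prompt = novelai_header(
    author="Haven Ross",
    title="The Biker's Buddy",
    tags=["friendship", "slice of life", "funny"],
    genre="thriller",
)
# The story text then follows the header on the next line.
print(prompt)
```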
## Compatibility
These files are not compatible with llama.cpp.
Currently they can be used with:
- KoboldCpp, a powerful inference engine based on llama.cpp, with a good UI: https://github.com/LostRuins/koboldcpp
- The ctransformers Python library, which includes LangChain support (see the example sketch below): https://github.com/marella/ctransformers
- The LoLLMS Web UI, which uses ctransformers: https://github.com/ParisNeo/lollms-webui
- rustformers' llm: https://github.com/rustformers/llm
- The example `gpt-neox` binary provided with ggml: https://github.com/ggerganov/ggml
As other options become available, I will endeavour to list them here (do let me know in the Community tab if I've missed something!)
Tutorial for using LoLLMS Web UI
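As a quick illustration, here is a minimal sketch of loading one of these files with the ctransformers library. It assumes `pip install ctransformers` and that a GGML file from this repo has already been downloaded locally; the sampling parameters are illustrative, not recommendations.

```python
# A minimal sketch using ctransformers with a locally downloaded GGML file.
from ctransformers import AutoModelForCausalLM

# Cassandra 6.9B is a GPT-NeoX-family model, hence model_type="gpt_neox"
# (the same reason the ggml gpt-neox example binary works with these files).
llm = AutoModelForCausalLM.from_pretrained(
    "cassandra-6.9b.ggmlv0.q4_0.bin",  # path to any file from the table below
    model_type="gpt_neox",
)

# Text-adventure style prompt; stop on ">" so generation halts when the
# model starts to emit the next action prompt.
prompt = (
    "[Style: text adventure]\n"
    "The Deepest Dungeon, a Text Adventure\n"
    "> examine room\n"
)
print(llm(prompt, max_new_tokens=128, temperature=0.8, stop=[">"]))
```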
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| cassandra-6.9b.ggmlv0.q4_0.bin | q4_0 | 4 | 3.86 GB | 6.36 GB | Original llama.cpp quant method, 4-bit. |
| cassandra-6.9b.ggmlv0.q4_1.bin | q4_1 | 4 | 4.29 GB | 6.79 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| cassandra-6.9b.ggmlv0.q5_0.bin | q5_0 | 5 | 4.72 GB | 7.22 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| cassandra-6.9b.ggmlv0.q5_1.bin | q5_1 | 5 | 5.15 GB | 7.65 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, at the cost of higher resource usage and slower inference. |
| cassandra-6.9b.ggmlv0.q8_0.bin | q8_0 | 8 | 7.29 GB | 9.79 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
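If you only want one of the files above rather than the whole repo, a download along these lines should work; note that the `repo_id` here is an assumption based on the model name, so substitute the actual repository id.

```python
# A minimal sketch using huggingface_hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Cassandra-6.9B-GGML",      # assumed repo id; adjust to the real one
    filename="cassandra-6.9b.ggmlv0.q4_0.bin",   # any file name from the table above
)
print(f"Saved to {path}")
```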
## Discord
For further support, and discussions on these models and AI in general, join us at:
## Thanks, and how to contribute
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
Patreon special mentions: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer, vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius, Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost, Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
## Original model card: WBrown's Cassandra 6.9B
None provided.