Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Tiny-Knight-1.1b-v0.1 - GGUF
- Model creator: https://huggingface.co/phanerozoic/
- Original model: https://huggingface.co/phanerozoic/Tiny-Knight-1.1b-v0.1/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Tiny-Knight-1.1b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q2_K.gguf) | Q2_K | 0.4GB |
| [Tiny-Knight-1.1b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [Tiny-Knight-1.1b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [Tiny-Knight-1.1b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [Tiny-Knight-1.1b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [Tiny-Knight-1.1b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q3_K.gguf) | Q3_K | 0.51GB |
| [Tiny-Knight-1.1b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [Tiny-Knight-1.1b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [Tiny-Knight-1.1b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [Tiny-Knight-1.1b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q4_0.gguf) | Q4_0 | 0.59GB |
| [Tiny-Knight-1.1b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [Tiny-Knight-1.1b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [Tiny-Knight-1.1b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q4_K.gguf) | Q4_K | 0.62GB |
| [Tiny-Knight-1.1b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [Tiny-Knight-1.1b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q4_1.gguf) | Q4_1 | 0.65GB |
| [Tiny-Knight-1.1b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q5_0.gguf) | Q5_0 | 0.71GB |
| [Tiny-Knight-1.1b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [Tiny-Knight-1.1b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q5_K.gguf) | Q5_K | 0.73GB |
| [Tiny-Knight-1.1b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [Tiny-Knight-1.1b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q5_1.gguf) | Q5_1 | 0.77GB |
| [Tiny-Knight-1.1b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q6_K.gguf) | Q6_K | 0.84GB |
| [Tiny-Knight-1.1b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf/blob/main/Tiny-Knight-1.1b-v0.1.Q8_0.gguf) | Q8_0 | 1.09GB |
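As a quick way to try one of the quantized files above, the sketch below downloads the Q4_K_M variant from this repository and loads it with `llama-cpp-python`. The repository and file names come from the table above; the context size and generation settings are illustrative assumptions rather than recommendations from the quantizer.

```python
# Minimal sketch: download one of the quantized GGUF files above and load it locally.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and file names come from the table above; any of the listed quants works the same way.
model_path = hf_hub_download(
    repo_id="RichardErkhov/phanerozoic_-_Tiny-Knight-1.1b-v0.1-gguf",
    filename="Tiny-Knight-1.1b-v0.1.Q4_K_M.gguf",
)

# The context size here is an illustrative assumption, not a value from this card.
llm = Llama(model_path=model_path, n_ctx=2048)

# Prompt taken from the widget example in the original model card below.
output = llm(
    "Hail and well met! Pray, what kind of food do ye enjoy supping upon?",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```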
Original model description:
---
license: cc-by-nc-4.0
language:
- en
widget:
- text: |
    Hail and well met! Pray, what kind of food do ye enjoy supping upon?
  example_title: "The Code of Chivalry"
---

![tinyknight.png](https://huggingface.co/phanerozoic/Tiny-Knight-1.1b-v0.1/resolve/bbfc665ce3af9d4d73e199b89144b30d668f50aa/tinyknight.png)

# Tiny Knight-1.1b-v0.1

Tiny Knight-1.1b-v0.1 is a specialized language model crafted for generating knight- and medieval-themed content. This iteration is built upon TinyLlama-1.1B-Chat-v1.0 and tailored to operate in environments constrained by computing resources.

### Performance

While this model excels at creating knight-themed narratives, its specialization limits its effectiveness in broader language tasks, especially those requiring detailed knowledge outside the medieval theme.

### Direct Use

Tiny Knight-1.1b-v0.1 is particularly suited to generating content within medieval, knightly, or fantasy settings, making it ideal for storytelling, educational content, and thematic exploration. It is not recommended for general-purpose tasks or technical domains.

### Context Setting and Interaction Guidelines

Given its specialized nature, Tiny Knight-1.1b-v0.1 benefits significantly from detailed context-setting. Providing a rich thematic backdrop in prompts enhances the model's performance, guiding it to generate more accurate and immersive content.

### Training Data

The model was fine-tuned on a dataset focused on knightly tales, medieval history, and literature, building on the foundational TinyLlama-1.1B model.

### Custom Stopping Strings

Custom stopping strings were used to refine output quality (they are applied in the generation sketch after the Summary):

- "},"
- "User:"
- "You:"
- "\nUser"
- "\nUser:"
- "me:"
- "user"
- "\n"

### Training Hyperparameters and Fine-Tuning Details

- **Base Model Name**: TinyLlama-1.1B-Chat-v1.0
- **Base Model Class**: LlamaForCausalLM
- **Projections**: gate, down, up, q, k, v, o
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
- **True Batch Size**: 32
- **Gradient Accumulation Steps**: 1
- **Epochs**: 0.18
- **Learning Rate**: 3e-4
- **LR Scheduler**: Linear
- **Step**: 75
- **Loss**: 1.87

An approximate mapping of these settings onto a PEFT LoRA configuration is sketched at the end of this card.

### Limitations

While adept at producing themed content, Tiny Knight-1.1b-v0.1's applicability is limited outside its specialized domain of knights and medieval themes.

### Summary

Tiny Knight-1.1b-v0.1 represents a significant advancement in thematic language models, offering a specialized tool for exploring the medieval era. Its emphasis on context for optimal performance and its use of custom stopping strings make it a sophisticated asset for generating historically rich content.
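To illustrate how the context-setting guidance and the custom stopping strings above might be applied in practice, here is a minimal generation sketch using `llama-cpp-python`. The thematic system prompt and the character name are invented examples, and the stop list simply mirrors the strings listed in the card; none of this is an official usage recipe.

```python
# Sketch: thematic context plus the custom stopping strings listed above.
# Assumes `llm` is a llama_cpp.Llama instance loaded from one of the GGUF files in this repo.
stop_strings = ["},", "User:", "You:", "\nUser", "\nUser:", "me:", "user", "\n"]

# A rich medieval backdrop, as the card recommends; this particular wording is invented.
prompt = (
    "You are Sir Aldric, a knight of the realm, sworn to courtesy and valor.\n"
    "User: Hail and well met! Pray, what kind of food do ye enjoy supping upon?\n"
    "Sir Aldric:"
)

output = llm(prompt, max_tokens=128, stop=stop_strings, temperature=0.7)
print(output["choices"][0]["text"].strip())
```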
### Acknowledgments

Special thanks to the TinyLlama-1.1B team, whose pioneering work laid the groundwork for the creation of Tiny Knight-1.1b-v0.1.
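For readers who want to attempt a similar fine-tune, the hyperparameters reported above map roughly onto a PEFT LoRA configuration like the one below. The target-module names follow the standard Llama projection layers suggested by the card's "Projections" entry; the author's actual training script is not published here, so treat this as an approximation rather than the original setup.

```python
# Approximate PEFT LoRA setup mirroring the hyperparameters reported above
# (not the author's actual training script, which is not published here).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # LoRA Rank: 16
    lora_alpha=32,   # LoRA Alpha: 32
    target_modules=[
        # "gate, down, up, q, k, v, o" projections on a Llama architecture
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    task_type="CAUSAL_LM",
)

# Other reported settings: true batch size 32, gradient accumulation steps 1,
# learning rate 3e-4 with a linear scheduler, ~0.18 epochs (75 steps), final loss ~1.87.
```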