
Quantization made by Richard Erkhov.

GitHub | Discord | Request more models

Tiny-Knight-1.1b-v0.1 - GGUF

Original model description:

license: cc-by-nc-4.0
language:
  - en
widget:
  - text: |
      Hail and well met! Pray, what kind of food do ye enjoy supping upon?
    example_title: "The Code of Chivalry"


Tiny Knight-1.1b-v0.1

Tiny Knight-1.1b-v0.1 is a specialized language model crafted for generating knight- and medieval-themed content. It is built on TinyLlama-1.1B-Chat-v1.0 and tailored to operate in environments with limited computing resources.

Performance

While this model excels at creating knight-themed narratives, its specialization limits its effectiveness in broader language tasks, especially those requiring detailed knowledge outside the medieval theme.

Direct Use

Tiny Knight-1.1b-v0.1 is particularly well suited to generating content in medieval, knightly, or fantasy settings, making it ideal for storytelling, educational content, and thematic exploration. It is not recommended for general-purpose tasks or technical domains.

Context Setting and Interaction Guidelines

Given its specialized nature, Tiny Knight-1.1b-v0.1 benefits significantly from detailed context-setting. Providing a rich thematic backdrop in prompts enhances the model's performance, guiding it to generate more accurate and immersive content.
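
As a concrete illustration, the sketch below loads one of the GGUF quantizations with llama-cpp-python and wraps the question from the widget example in a rich, in-character system prompt. The GGUF filename, the knight persona, and the Zephyr-style chat template inherited from TinyLlama-1.1B-Chat-v1.0 are illustrative assumptions, not details confirmed by this card.

```python
# Minimal prompting sketch with llama-cpp-python (pip install llama-cpp-python).
# The filename, persona, and chat template below are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="Tiny-Knight-1.1b-v0.1.Q4_K_M.gguf", n_ctx=2048)

# A rich thematic backdrop in the system turn steers the model far better
# than a bare question.
prompt = (
    "<|system|>\n"
    "You are Sir Aldric, a knight errant of the thirteenth century. "
    "Answer in courtly, medieval-flavoured prose and stay in character.</s>\n"
    "<|user|>\n"
    "Hail and well met! Pray, what kind of food do ye enjoy supping upon?</s>\n"
    "<|assistant|>\n"
)

out = llm(prompt, max_tokens=200, temperature=0.8)
print(out["choices"][0]["text"])
```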

Training Data

The fine-tuning dataset focuses on knightly tales, medieval history, and literature, applied on top of the foundational TinyLlama-1.1B model.

Custom Stopping Strings

Custom stopping strings were used to refine output quality (see the usage sketch after this list):

  • "},"
  • "User:"
  • "You:"
  • "\nUser"
  • "\nUser:"
  • "me:"
  • "user"
  • "\n"

Training Hyperparameters and Fine-Tuning Details

  • Base Model Name: TinyLlama-1.1B-Chat-v1.0
  • Base Model Class: LlamaForCausalLM
  • Projections: gate, down, up, q, k, v, o
  • LoRA Rank: 16
  • LoRA Alpha: 32
  • True Batch Size: 32
  • Gradient Accumulation Steps: 1
  • Epochs: 0.18
  • Learning Rate: 3e-4
  • LR Scheduler: Linear
  • Step: 75
  • Loss: 1.87
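
For reference, the listed settings map onto a PEFT LoRA configuration roughly as sketched below. The module names for the gate/down/up/q/k/v/o projections are assumed to be the standard Llama ones; the original training script is not published with this card.

```python
# Hedged reconstruction of the fine-tuning setup; assumes standard Llama
# projection names and Hugging Face PEFT + Transformers.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

lora_config = LoraConfig(
    r=16,                                        # LoRA Rank: 16
    lora_alpha=32,                               # LoRA Alpha: 32
    target_modules=[
        "gate_proj", "down_proj", "up_proj",     # MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
# Training ran with batch size 32, no gradient accumulation, lr 3e-4 on a
# linear schedule, stopping after 75 steps (~0.18 epochs) at a loss of ~1.87.
```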

Limitations

While adept at producing themed content, Tiny Knight-1.1b-v0.1's applicability is limited outside its specialized domain of knights and medieval themes.

Summary

Tiny Knight-1.1b-v0.1 represents a significant advancement in thematic language models, offering a specialized tool for exploring the medieval era. Its emphasis on context for optimal performance and the use of custom stopping strings make it a sophisticated asset for generating historically rich content.

Acknowledgments

Special thanks to the TinyLlama-1.1B team, whose pioneering work laid the groundwork for the creation of Tiny Knight-1.1b-v0.1.

GGUF quantizations

The quantized model has 1.1B parameters (llama architecture) and is provided in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF variants.

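A minimal way to fetch one of these quantizations is sketched below with huggingface_hub; the repo id and the exact filename are assumptions, so check the repository's file listing for the names actually published.

```python
# Download a single GGUF quant file; repo_id and filename are illustrative
# assumptions, not confirmed by this card.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/Tiny-Knight-1.1b-v0.1-gguf",
    filename="Tiny-Knight-1.1b-v0.1.Q4_K_M.gguf",
)
print(path)
```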