---
model_creator: Nekochu
quantized_by: Nekochu
model_name: Luminia 13B v3
pretty_name: Luminia
model_type: llama2
prompt_template: >-
  Below is an instruction that describes a task. Write a response that
  appropriately completes the request. ### Instruction: {Instruction}
  {summary} ### input: {category} ### Response: {prompt}
base_model: meta-llama/Llama-2-13b-chat-hf
library_name: peft
license: apache-2.0
datasets:
  - Nekochu/discord-unstable-diffusion-SD-prompts
  - glaiveai/glaive-function-calling-v2
  - TIGER-Lab/MathInstruct
  - Open-Orca/SlimOrca
  - GAIR/lima
  - sahil2801/CodeAlpaca-20k
  - garage-bAInd/Open-Platypus
language:
  - en
pipeline_tag: text-generation
task_categories:
  - question-answering
  - text2text-generation
  - conversational
inference: true
widget:
  - example_title: prompt assistant
    messages:
      - role: system
        content: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
      - role: user
        content: "### Instruction:\nCreate stable diffusion metadata based on the given english description. Luminia\n### Input:\nfavorites and popular SFW\n### Response:\n"
    output:
      text: Luminia, 1girl, solo, blonde hair, long hair,
tags:
  - llama-factory
  - lora
  - generated_from_trainer
  - llama2
  - llama
  - instruct
  - finetune
  - gpt4
  - synthetic data
  - stable diffusion
  - alpaca
  - llm
model-index:
  - name: Luminia-13B-v3
    results: []
---
DALL-E 3 prompt: *a single seed growing slowly in a laboratory in desert sand, the single little plant trying to fight its way up to the sunlight, while a little cute kitty watches the plant, cute 8k anime, digital art, close up*

Luminia v3 is good at reasoning: it expands a short summary description into a detailed Stable Diffusion prompt. It may output NSFW content.

The LoRA adapter is included, along with the following quants:

- exllamav2: [2.4bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/8.0bpw-h8)
- GGUF: [Q4_K_M](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-Q4_K_M.gguf), [IQ4_NL](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-IQ4_NL.gguf)

## Prompt template: Alpaca
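As a rough guide when choosing a quant, file size scales with bits per weight (bpw): size in bytes ≈ parameter count × bpw / 8. A minimal sketch, assuming an approximate 13.0B parameter count for the base model:

```python
def quant_size_gib(n_params: float, bpw: float) -> float:
    """Approximate quantized model size in GiB: params * bits-per-weight / 8 bytes."""
    return n_params * bpw / 8 / 1024**3

# ~13.0e9 parameters is an approximation for Llama-2-13B.
for bpw in (2.4, 4.25, 8.0):
    print(f"{bpw} bpw -> ~{quant_size_gib(13.0e9, bpw):.1f} GiB")
```

Actual files are slightly larger than this estimate because of embeddings, headers, and per-group quantization metadata.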
Output example, tested in text-generation-webui:

| Input | base llama-2-chat | QLoRA |
|:---------:|:-------:|:---------:|
| Create stable diffusion metadata based on the given english description. Luminia \n### Input:\n favorites and popular SFW | Luminia, a mystical world of wonder and magic 🧝‍♀️✨ A place where technology and nature seamlessly blend together ... | `<lora:Luminari-10:0.8>` Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle ... |

Output prompt from QLoRA to [A1111/SD-WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui):
Image metadata, first generation (Heun sampler):

```
<lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism,
Steps: 20, Sampler: Heun, CFG scale: 7, Seed: 479539365, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0
```

Image metadata, second generation (Euler a sampler, same prompt):

```
<lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 959582434, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0
```
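The trailing `Steps: …` line follows A1111/SD-WebUI's comma-separated `key: value` infotext convention. A minimal parser sketch, for illustration only (not part of the model, and it assumes no commas inside values):

```python
def parse_a1111_params(line: str) -> dict:
    """Split an A1111 'Steps: 20, Sampler: Heun, ...' line into a dict."""
    out = {}
    for part in line.split(", "):
        key, _, value = part.partition(": ")
        out[key] = value
    return out

meta = parse_a1111_params(
    "Steps: 20, Sampler: Heun, CFG scale: 7, Seed: 479539365, Size: 512x512"
)
print(meta["Sampler"])  # Heun
```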
#### Full Prompt

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Create stable diffusion metadata based on the given english description. Luminia

### Input:
favorites and popular SFW

### Response:
```

"Luminia" can be replaced by any short description; more info on my SD dataset [here](https://huggingface.co/datasets/Nekochu/discord-unstable-diffusion-SD-prompts#dataset-description).
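The full prompt above can be assembled programmatically before sending it to the model. A small helper sketch (the function name and template string are illustrative, mirroring the Alpaca format shown):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str, input_text: str) -> str:
    """Fill the Alpaca template with an instruction and its input context."""
    return ALPACA_TEMPLATE.format(instruction=instruction, input=input_text)

prompt = build_prompt(
    "Create stable diffusion metadata based on the given english description. Luminia",
    "favorites and popular SFW",
)
print(prompt)
```

The model's completion is then generated after the final `### Response:` line.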
## Training Details
### Model Description

- **Trained by:** [Nekochu](https://huggingface.co/Nekochu)
- **Model type:** Llama
- **Finetuned from model:** [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
- Continued from the base of LoRA Luminia-13B-v2-QLora

Known issue: [issue]

### Trainer

- hiyouga/LLaMA-Efficient-Tuning

Hardware: QLoRA training on Windows, Python 3.10.8, CUDA 12.1, 24 GB VRAM.

### Training hyperparameters

The following hyperparameters were used during training:

- num_epochs: 1.0
- finetuning_type: lora
- quantization_bit: 4
- stage: sft
- learning_rate: 5e-05
- cutoff_len: 4096
- num_train_epochs: 3.0
- max_samples: 100000
- warmup_steps: 0
- train_batch_size: 1
- distributed_type: single-GPU
- num_devices: 1
- rope_scaling: linear
- lora_rank: 32
- lora_target: all
- lora_dropout: 0.15
- bnb_4bit_compute_dtype: bfloat16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine

#### training_loss:
*(training loss curve image)*
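With `lr_scheduler_type: cosine` and `warmup_steps: 0`, the learning rate decays from 5e-05 toward zero over the course of training. A sketch of the standard cosine schedule (illustrative, not the trainer's exact implementation):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float = 5e-05) -> float:
    """Standard cosine decay from base_lr to 0, with no warmup."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

print(cosine_lr(0, 1000))    # full base rate at the start
print(cosine_lr(500, 1000))  # half the base rate at the midpoint
print(cosine_lr(1000, 1000)) # ~0 at the end
```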
### Framework versions

- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0