Quantization made by Richard Erkhov.
Sparky-SQL-Llama-3.2-1B - GGUF
- Model creator: https://huggingface.co/itsme-nishanth/
- Original model: https://huggingface.co/itsme-nishanth/Sparky-SQL-Llama-3.2-1B/
Name | Quant method | Size |
---|---|---|
Sparky-SQL-Llama-3.2-1B.Q2_K.gguf | Q2_K | 0.54GB |
Sparky-SQL-Llama-3.2-1B.Q3_K_S.gguf | Q3_K_S | 0.6GB |
Sparky-SQL-Llama-3.2-1B.Q3_K.gguf | Q3_K | 0.64GB |
Sparky-SQL-Llama-3.2-1B.Q3_K_M.gguf | Q3_K_M | 0.64GB |
Sparky-SQL-Llama-3.2-1B.Q3_K_L.gguf | Q3_K_L | 0.68GB |
Sparky-SQL-Llama-3.2-1B.IQ4_XS.gguf | IQ4_XS | 0.7GB |
Sparky-SQL-Llama-3.2-1B.Q4_0.gguf | Q4_0 | 0.72GB |
Sparky-SQL-Llama-3.2-1B.IQ4_NL.gguf | IQ4_NL | 0.72GB |
Sparky-SQL-Llama-3.2-1B.Q4_K_S.gguf | Q4_K_S | 0.72GB |
Sparky-SQL-Llama-3.2-1B.Q4_K.gguf | Q4_K | 0.75GB |
Sparky-SQL-Llama-3.2-1B.Q4_K_M.gguf | Q4_K_M | 0.75GB |
Sparky-SQL-Llama-3.2-1B.Q4_1.gguf | Q4_1 | 0.77GB |
Sparky-SQL-Llama-3.2-1B.Q5_0.gguf | Q5_0 | 0.83GB |
Sparky-SQL-Llama-3.2-1B.Q5_K_S.gguf | Q5_K_S | 0.83GB |
Sparky-SQL-Llama-3.2-1B.Q5_K.gguf | Q5_K | 0.85GB |
Sparky-SQL-Llama-3.2-1B.Q5_K_M.gguf | Q5_K_M | 0.85GB |
Sparky-SQL-Llama-3.2-1B.Q5_1.gguf | Q5_1 | 0.89GB |
Sparky-SQL-Llama-3.2-1B.Q6_K.gguf | Q6_K | 0.95GB |
Sparky-SQL-Llama-3.2-1B.Q8_0.gguf | Q8_0 | 1.23GB |
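As a rough guide, lower-bit quants trade quality for footprint. A quick sketch comparing a few of the file sizes above against the near-lossless Q8_0 baseline (sizes are taken from the table; the ratios are approximate):

```python
# Approximate GGUF file sizes in GB, copied from the table above.
sizes_gb = {
    "Q2_K": 0.54, "Q3_K_S": 0.6, "Q3_K_M": 0.64, "Q4_K_M": 0.75,
    "Q5_K_M": 0.85, "Q6_K": 0.95, "Q8_0": 1.23,
}

baseline = sizes_gb["Q8_0"]
for quant, size in sizes_gb.items():
    # Fraction of the Q8_0 footprint each quant occupies.
    print(f"{quant}: {size:.2f} GB ({size / baseline:.0%} of Q8_0)")
```

For a 1B-parameter model even Q8_0 is small, so the K-quants in the middle of the table (Q4_K_M, Q5_K_M) are the usual starting points.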
Original model description:

---
library_name: transformers
tags:
- unsloth
- Sparky-SQL-Llama-3.2-1B
license: apache-2.0
datasets:
- shreeyashm/SQL-Queries-Dataset
language:
- en
base_model:
- unsloth/Llama-3.2-1B
pipeline_tag: text-generation
---
Model Card for Model ID
Model Details
Model Description
The model was fine-tuned from the Llama-3.2-1B base model on SQL query data.
How to Get Started with the Model
```python
from transformers import pipeline

model_id = "itsme-nishanth/Sparky-SQL-Llama-3.2-1B"
pipe = pipeline("text-generation", model=model_id, device="cuda")

messages = [
    {"role": "user", "content": "list down the product names and its type provided by vendor 'vanhelsing' from 'products' table?"},
]
# For chat-style input, the pipeline returns the full message list;
# the last entry is the assistant's response.
print(pipe(messages, max_new_tokens=100)[0]["generated_text"][-1])
```
- Developed by: Nishanth
- Model type: Llama
- Language(s) (NLP): English
- License: Apache License 2.0
- Fine-tuned from model: Llama-3.2-1B
Training Details
Training Data
Training Procedure
Preprocessing
The dataset contained empty records, which were removed before training.
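The empty-record filter can be sketched in plain Python. The field names below are assumptions for illustration, not the actual schema of shreeyashm/SQL-Queries-Dataset:

```python
# Hypothetical prompt/SQL pairs; the real dataset schema may differ.
records = [
    {"question": "list all vendors", "sql": "SELECT name FROM vendors;"},
    {"question": "", "sql": ""},  # empty record to drop
    {"question": "count products", "sql": "SELECT COUNT(*) FROM products;"},
]

def is_nonempty(row):
    # Keep rows where every field has non-whitespace content.
    return all(str(value).strip() for value in row.values())

cleaned = [row for row in records if is_nonempty(row)]
print(len(records), "->", len(cleaned))  # drops the empty row
```

With the `datasets` library the same idea is a one-liner: `dataset.filter(is_nonempty)`.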
Training Hyperparameters
- Training regime:
- gradient_accumulation_steps = 4,
- warmup_steps = 5,
- max_steps = 60,
- learning_rate = 2e-4,
- fp16 = not is_bfloat16_supported(),
- bf16 = is_bfloat16_supported(),
- optim = "adamw_8bit",
- weight_decay = 0.01,
- lr_scheduler_type = "linear",
- seed = 3407
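Collected as a plain dictionary, the regime above maps directly onto Unsloth/TRL-style trainer arguments. `is_bfloat16_supported()` is Unsloth's hardware check; here it is replaced by a placeholder flag:

```python
# Placeholder for unsloth.is_bfloat16_supported(); set per your hardware.
bf16_supported = False

training_args = {
    "gradient_accumulation_steps": 4,
    "warmup_steps": 5,
    "max_steps": 60,
    "learning_rate": 2e-4,
    "fp16": not bf16_supported,   # exactly one of fp16/bf16 is active
    "bf16": bf16_supported,
    "optim": "adamw_8bit",
    "weight_decay": 0.01,
    "lr_scheduler_type": "linear",
    "seed": 3407,
}
print(training_args["learning_rate"])
```

Note the short schedule (`max_steps = 60`): this is a brief fine-tuning pass, not a full training run.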
Technical Specifications
Hardware
- Google-Colab (Tesla T4)
Software
- Transformers
- Unsloth
Model Card Contact