Uploaded model
- Developed by: zayedansari
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Formula1Model
An expert Formula 1 assistant fine-tuned on the 2024 Formula 1 Championship dataset (vibingshu/2024_formula1_championship_dataset).
This model was fine-tuned using Unsloth and exported in 8-bit (Q8_0) format for efficient local inference with Ollama.
Model Details
- Base Model: LLaMA 3 8B (fine-tuned with Unsloth)
- Dataset: 2024 F1 results, drivers, constructors, and races
- Format: GGUF (Q8_0)
- Task: Question answering & expert analysis on Formula 1
- Use Case: F1 trivia, race insights, driver/team history, strategy-style Q&A
Training
- Hardware: Google Colab (T4 / A100, depending on availability)
- Tools Used: Unsloth, Hugging Face datasets, LoRA adapters
- Precision: 8-bit (Q8_0) for efficient inference
Usage
```
ollama pull zayedansari/Formula1Model
ollama run zayedansari/Formula1Model
```
Example
Who won the 2024 Monaco Grand Prix?
> Max Verstappen won the Bahrain Grand Prix driving for Red Bull Racing Honda RBPT.
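Beyond the CLI, a running Ollama server also exposes a local REST API (default port 11434). A minimal sketch of querying the model from Python over that API, assuming `ollama serve` is running and the model has already been pulled (the helper names here are illustrative, not part of the model):

```python
# Sketch: query the model through Ollama's local REST API.
# Assumes `ollama serve` is running and the model has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "zayedansari/Formula1Model"

def build_payload(question: str) -> dict:
    """Assemble a non-streaming /api/generate request body."""
    return {"model": MODEL, "prompt": question, "stream": False}

def ask(question: str) -> str:
    """POST the question to the local Ollama server and return the answer text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full completion in "response".
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(ask("Who won the 2024 Monaco Grand Prix?"))
```

Setting `"stream": False` returns one complete JSON object instead of a stream of partial chunks, which keeps the client code simple.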
License
This model is released under the Apache 2.0 license. You are free to use, modify, and distribute it with proper attribution.
Base model: meta-llama/Meta-Llama-3-8B