Uploaded model

  • Developed by: zayedansari
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This LLaMA model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Formula1Model 🏎️

An expert Formula 1 assistant fine-tuned on the 2024 Formula 1 Championship dataset (vibingshu/2024_formula1_championship_dataset).

This model was fine-tuned using Unsloth and exported in 8-bit (Q8_0) format for efficient local inference with Ollama.


🔧 Model Details

  • Base Model: LLaMA 3 8B (unsloth/llama-3-8b-bnb-4bit), fine-tuned with Unsloth
  • Dataset: 2024 F1 results, drivers, constructors, and races
  • Format: GGUF (Q8_0); a direct-loading sketch follows this list
  • Task: Question answering & expert analysis on Formula 1
  • Use Case: F1 trivia, race insights, driver/team history, strategy-style Q&A
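Because the weights ship as a single Q8_0 GGUF file, they can also be loaded directly (outside Ollama), for example with llama-cpp-python. The sketch below is illustrative only: the local file name and context length are assumptions, not values published with this card.

```python
from llama_cpp import Llama

# Load the exported GGUF directly.
# "Formula1Model.Q8_0.gguf" is an assumed file name; point this at your local copy.
llm = Llama(model_path="Formula1Model.Q8_0.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who won the 2024 Bahrain Grand Prix?"}]
)
print(out["choices"][0]["message"]["content"])
```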

📊 Training

  • Hardware: Google Colab (T4 / A100, depending on availability)
  • Tools Used: Unsloth, Hugging Face datasets, LoRA adapters
  • Export Precision: 8-bit GGUF (Q8_0) for efficient inference
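A minimal sketch of this kind of Unsloth + TRL recipe is shown below (4-bit base model, LoRA adapters, supervised fine-tuning, then Q8_0 GGUF export). The LoRA rank, batch size, learning rate, and dataset text field are assumptions rather than the exact values used for this model, and the SFTTrainer signature varies across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model listed above with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and alpha here are illustrative, not the actual values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# 2024 F1 championship dataset used for fine-tuning.
dataset = load_dataset("vibingshu/2024_formula1_championship_dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumed field name; adjust to the dataset schema
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()

# Export to 8-bit GGUF (Q8_0) for use with Ollama.
model.save_pretrained_gguf("Formula1Model", tokenizer, quantization_method="q8_0")
```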


🚀 Usage

ollama pull zayedansari/Formula1Model

ollama run zayedansari/Formula1Model
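The model can also be called programmatically through the ollama Python client (`pip install ollama`). A minimal sketch, assuming a local Ollama server is running and the model has been pulled as above:

```python
import ollama

# Ask the fine-tuned model an F1 question via the local Ollama server.
response = ollama.chat(
    model="zayedansari/Formula1Model",
    messages=[{"role": "user", "content": "Who won the 2024 Bahrain Grand Prix?"}],
)
print(response["message"]["content"])
```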


Example

Who won the 2024 Bahrain Grand Prix?

> Max Verstappen won the Bahrain Grand Prix driving for Red Bull Racing Honda RBPT.

📜 License

This model is released under the Apache 2.0 license. You are free to use, modify, and distribute it with proper attribution.
