This model is a fine-tuned version of LLaMA 3.2-3B, trained on a curated dataset of 500 samples selected via Facility Location (FL) optimization. The dataset was distilled from a larger corpus through representative sample selection, retaining the most informative and diverse data points while discarding redundant and uninformative ones.
Fine-tuning was conducted to improve task-specific performance while substantially reducing training cost and data redundancy. By leveraging FL-based data selection, the final dataset maintained high coverage and diversity while requiring only 5% of the original dataset size.
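As a rough illustration of the selection step, FL-based subset selection greedily maximizes the monotone submodular facility-location objective f(S) = Σᵢ maxⱼ∈S sim(i, j) over sample embeddings. The sketch below is an assumption about the general technique, not the exact pipeline used for this model; function and variable names are hypothetical.

```python
import numpy as np

def facility_location_select(embeddings: np.ndarray, k: int) -> list:
    """Greedy facility-location subset selection (hypothetical sketch).

    Picks k indices approximately maximizing
        f(S) = sum_i max_{j in S} sim(i, j),
    where sim is cosine similarity. Greedy selection gives a
    (1 - 1/e) approximation because f is monotone submodular.
    """
    # Cosine similarity matrix between all samples.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    n = sim.shape[0]

    selected = []
    # coverage[i] = best similarity of point i to any already-selected center.
    coverage = np.full(n, -np.inf)
    for _ in range(k):
        # Total coverage if candidate j were added: sum_i max(sim[j, i], coverage[i]).
        totals = np.maximum(sim, coverage[None, :]).sum(axis=1)
        totals[selected] = -np.inf  # never re-pick a chosen sample
        j = int(np.argmax(totals))
        selected.append(j)
        coverage = np.maximum(coverage, sim[j])
    return selected
```

Applied to the full corpus embeddings with k = 500, a routine like this would yield the 5% representative subset described above.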