Llama-3.2-3B fine-tuned only on the Indonesian subset of the MBZUAI Bactrian-X dataset.
COPAL-ID benchmark results (measured before quantization to GGUF):
copal_id_standard (formal language)

| Shots | Result |
|---|---|
| 0-shot | 56 |
| 10-shot | 58 |
| 25-shot | 57 |
copal_id_colloquial (informal language and local nuances)

| Shots | Result |
|---|---|
| 0-shot | 54 |
| 10-shot | 54 |
| 25-shot | 52 |
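Scores like these are typically produced with EleutherAI's lm-evaluation-harness. The sketch below is a hedged illustration, not the exact command used for this card: the `copal_id_standard` / `copal_id_colloquial` task names mirror the labels above but should be confirmed against your harness version, and `MODEL_PATH` is a placeholder for the un-quantized fine-tuned checkpoint (not published on this card).

```python
# Hedged sketch: reproduce COPAL-ID-style scores with lm-evaluation-harness.
# Task names and model path are assumptions; verify them before running.
import lm_eval

MODEL_PATH = "path/to/unquantized-finetuned-checkpoint"  # placeholder

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=f"pretrained={MODEL_PATH}",
    tasks=["copal_id_standard", "copal_id_colloquial"],
    num_fewshot=0,   # rerun with 10 and 25 for the multi-shot rows
    batch_size=8,
)
print(results["results"])
```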
Fine-tuning was done with Unsloth.ai; a minimal sketch of the setup is shown below.
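The following is a minimal sketch of an Unsloth LoRA fine-tune on the Indonesian split of Bactrian-X, assuming the common Unsloth + TRL workflow. The hyperparameters, the `"id"` config name for the dataset, and the instruction/input/output field names are assumptions for illustration, not the card author's exact settings.

```python
# Hedged sketch of the fine-tuning setup (Unsloth + TRL); values are illustrative.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit for memory-efficient LoRA fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-3B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha are assumed values, not the card's settings).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Indonesian subset of Bactrian-X; the "id" config name is an assumption.
dataset = load_dataset("MBZUAI/Bactrian-X", "id", split="train")

def to_text(example):
    # Bactrian-X rows are assumed to carry instruction / input / output fields.
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {"text": f"{prompt}\n{example['output']}{tokenizer.eos_token}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```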
Model tree for shiningdota/Llama-3.2-3B_Instruct_Indonesian_gguf-test
- Base model: meta-llama/Llama-3.2-3B
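Since the published artifact is a GGUF file, it can be run locally, for example with llama-cpp-python. The snippet below is a hedged usage sketch: the repo id comes from this card, but the `*.gguf` filename glob and the chat-style prompting are assumptions about how the file was exported.

```python
# Hedged sketch: local inference on the GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="shiningdota/Llama-3.2-3B_Instruct_Indonesian_gguf-test",
    filename="*.gguf",   # adjust to the exact GGUF filename in the repo
    n_ctx=2048,
)

out = llm.create_chat_completion(
    # Indonesian prompt: "Explain briefly what gotong royong is."
    messages=[{"role": "user", "content": "Jelaskan secara singkat apa itu gotong royong."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```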