Llama-3.2-3B fine-tuned only on the Indonesian portion of the MBZUAI Bactrian-X dataset.

COPAL-ID benchmark results (measured before quantization to GGUF):

copal_id_standard = standard (formal) Indonesian

copal_id_standard_multishots    Result
0-shot                          56
10-shot                         58
25-shot                         57

copal_id_colloquial = colloquial (informal) Indonesian with local nuances

copal_id_colloquial_multishots    Result
0-shot                            54
10-shot                           54
25-shot                           52
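
For reference, scores like the ones above can be reproduced with EleutherAI's lm-evaluation-harness, which ships copal_id_standard and copal_id_colloquial tasks. The snippet below is an assumed invocation, not the exact setup used for this card: the checkpoint path, dtype, and batch size are placeholders.

```python
# Sketch: evaluating the pre-quantization checkpoint on COPAL-ID with
# EleutherAI's lm-evaluation-harness (pip install lm_eval).
# "path/to/finetuned-checkpoint" is a placeholder for the merged fine-tuned model.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/finetuned-checkpoint,dtype=bfloat16",
    tasks=["copal_id_standard", "copal_id_colloquial"],
    num_fewshot=10,   # rerun with 0 and 25 for the other rows above
    batch_size=8,
)

# Print the per-task metric dictionaries (accuracy, stderr, etc.).
for task_name, metrics in results["results"].items():
    print(task_name, metrics)
```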

Fine-tuned using Unsloth (unsloth.ai).
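
Below is a minimal sketch of this kind of Unsloth + TRL LoRA fine-tuning run. The base checkpoint name, the Bactrian-X dataset config and column names, the LoRA rank, and all training hyperparameters are illustrative assumptions, and SFTTrainer argument names vary across TRL versions; this is not the exact recipe used for this model.

```python
# Sketch: LoRA fine-tuning Llama-3.2-3B on the Indonesian split of Bactrian-X
# with Unsloth + TRL. All hyperparameters below are illustrative only.
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (QLoRA-style); rank and alpha are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Bactrian-X Indonesian split; instruction/input/output columns assumed.
dataset = load_dataset("MBZUAI/Bactrian-X", "id", split="train")

def to_text(example):
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n\n" + example["input"]
    return {"text": f"### Instruction:\n{prompt}\n\n### Response:\n{example['output']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        bf16=True,
    ),
)
trainer.train()
```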

Format: GGUF
Model size: 3.21B params
Architecture: llama
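
The GGUF build can be run locally with llama.cpp or llama-cpp-python. The sketch below assumes the latter; the GGUF file name pattern and quantization level are guesses, since the repo may ship several quantized files.

```python
# Sketch: running the GGUF build with llama-cpp-python (pip install llama-cpp-python).
# The filename pattern is hypothetical; check the repo's file list for the real name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="shiningdota/Llama-3.2-3B_Instruct_Indonesian_gguf-test",
    filename="*Q4_K_M.gguf",  # assumed quantization level
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Jelaskan apa itu kecerdasan buatan dalam satu paragraf."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```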

Model repository: shiningdota/Llama-3.2-3B_Instruct_Indonesian_gguf-test

Training dataset: MBZUAI Bactrian-X (Indonesian split)