
Public version of our 3B model, trained to perform nine specific tasks in English and French with high accuracy. The benchmark below was produced with our evaluation pipeline.

The model is fine-tuned from meta-llama/Llama-3.2-3B-Instruct with a task-specific LoRA adapter.
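
A minimal inference sketch is shown below, assuming the checkpoint is published as a standard Transformers causal LM; the repository id and the example prompt are placeholders, not values taken from this card. If only the LoRA adapter itself is distributed, it can instead be attached to the base model with `peft.PeftModel.from_pretrained`.

```python
# Minimal inference sketch, assuming the checkpoint loads as a standard
# Transformers causal LM. The repository id below is a placeholder, and the
# example prompt (query reformulation) is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/llama3b_lora_long_filtered_310"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float32,  # weights are published in F32
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Reformulate this query: cheap restaurants near the Louvre"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```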

Benchmark results for the run `llama3b_lora_long_llama3b_lora_long_filtered_310`:

| Task                 | Overall | Team score | Load failures |
|----------------------|---------|------------|---------------|
| answer_reformulation | 0.73    | 0.68       | 0.95          |
| query_reformulation  | 0.83    | 1.0        | 142           |
| summarization        | 0.92    | ---        | 23            |
| keyword_extraction   | 0.77    | ---        | 65            |
| fill_in_generation   | 0.84    | ---        | 14            |
| keyword_update       | 0.64    | 0.8        | 12            |
| gqa                  | 0.79    | 0.64       | 15            |
| true_false           | 0.76    | ---        | 170           |
| mcq                  | 0.85    | ---        | 41            |
| Total                | 0.79    | ---        | 482.95        |