vllm (pretrained=/root/autodl-tmp/Mistral-Nemo-Prism-12B,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

|Tasks|Version|Filter          |n-shot|Metric     |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.704|±  |0.0289|
|     |       |strict-match    |     5|exact_match|↑  |0.700|±  |0.0290|

vllm (pretrained=/root/autodl-tmp/output87,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto

|Tasks|Version|Filter          |n-shot|Metric     |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.700|±  |0.0290|
|     |       |strict-match    |     5|exact_match|↑  |0.700|±  |0.0290|
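The two result blocks above appear to come from lm-evaluation-harness runs with the vLLM backend; the header lines record the exact model arguments used. A command along the following lines would reproduce that configuration (a sketch reconstructed from the reported settings, assuming the `lm_eval` CLI from lm-evaluation-harness is installed alongside vLLM):

```shell
# Sketch: re-run the gsm8k evaluation reported above with the vLLM backend.
# Paths and settings are taken from the header lines; adjust to your environment.
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/Mistral-Nemo-Prism-12B,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size auto
```

Swapping the `pretrained` path for `/root/autodl-tmp/output87` corresponds to the second run. Note that `--limit 250` evaluates only 250 samples per task, which explains the relatively large stderr values.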
Model size: 12.2B params (Safetensors) · Tensor types: BF16, I8

Model tree for noneUsername/Mistral-Nemo-Prism-12B-W8A8-Dynamic-Per-Token