| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
|---|---|---|---|---|---|---|---|---|
open-llm-leaderboard/sthenno-com__miscii-14b-1225-details | open-llm-leaderboard | "2024-12-25T20:25:10Z" | 0 | 0 | ["region:us"] | null | "2024-12-25T20:22:11Z" | ---
pretty_name: Evaluation run of sthenno-com/miscii-14b-1225
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [sthenno-com/miscii-14b-1225](https://huggingface.co/sthenno-com/miscii-14b-1225)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/sthenno-com__miscii-14b-1225-details\"\
,\n\tname=\"sthenno-com__miscii-14b-1225__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T20-22-10.882149](https://huggingface.co/datasets/open-llm-leaderboard/sthenno-com__miscii-14b-1225-details/blob/main/sthenno-com__miscii-14b-1225/results_2024-12-25T20-22-10.882149.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_strict_acc,none\": 0.8177458033573142,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\
: 0.8345323741007195,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"acc_norm,none\": 0.5912569723699572,\n \"acc_norm_stderr,none\"\
: 0.0051334531160445035,\n \"exact_match,none\": 0.3157099697885196,\n\
\ \"exact_match_stderr,none\": 0.01209173512474148,\n \"prompt_level_strict_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_strict_acc_stderr,none\": 0.018434587800223168,\n\
\ \"prompt_level_loose_acc,none\": 0.7763401109057301,\n \"\
prompt_level_loose_acc_stderr,none\": 0.017931771054658346,\n \"acc,none\"\
: 0.5271775265957447,\n \"acc_stderr,none\": 0.004551731502026407,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6559625065092866,\n \"acc_norm_stderr,none\"\
: 0.005789509618590658,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.908,\n\
\ \"acc_norm_stderr,none\": 0.01831627537942964\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6256684491978609,\n \"acc_norm_stderr,none\"\
: 0.0354849234134303\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.664,\n\
\ \"acc_norm_stderr,none\": 0.029933259094191533\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.704,\n \"acc_norm_stderr,none\":\
\ 0.028928939388379697\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.76,\n \
\ \"acc_norm_stderr,none\": 0.027065293652238982\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\":\
\ 0.03056207062099311\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\":\
\ 0.02806876238252672\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6575342465753424,\n \"acc_norm_stderr,none\": 0.03940794258783979\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.824,\n \
\ \"acc_norm_stderr,none\": 0.024133497525457123\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7808988764044944,\n \"acc_norm_stderr,none\"\
: 0.031090883837921395\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.768,\n \"acc_norm_stderr,none\": 0.026750070374865202\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.892,\n\
\ \"acc_norm_stderr,none\": 0.019669559381568776\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\":\
\ 0.0303436806571532\n },\n \"leaderboard_bbh_web_of_lies\": {\n \
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3775167785234899,\n\
\ \"acc_norm_stderr,none\": 0.014056847074819325,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.36363636363636365,\n \"acc_norm_stderr,none\": 0.03427308652999934\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.38461538461538464,\n\
\ \"acc_norm_stderr,none\": 0.02083955266087989\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\":\
\ 0.02289822829522849\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_strict_acc_stderr,none\": 0.018434587800223168,\n\
\ \"inst_level_strict_acc,none\": 0.8177458033573142,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7763401109057301,\n \"prompt_level_loose_acc_stderr,none\": 0.017931771054658346,\n\
\ \"inst_level_loose_acc,none\": 0.8345323741007195,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.3157099697885196,\n \"exact_match_stderr,none\"\
: 0.01209173512474148,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.5407166123778502,\n\
\ \"exact_match_stderr,none\": 0.028488167333608636\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2845528455284553,\n \"exact_match_stderr,none\": 0.04084983733239221\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.22727272727272727,\n\
\ \"exact_match_stderr,none\": 0.036614333604107194\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.15,\n \"exact_match_stderr,none\": 0.021377306830183917\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \"\
\ - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.2597402597402597,\n\
\ \"exact_match_stderr,none\": 0.03544997923156578\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.43523316062176165,\n \"exact_match_stderr,none\"\
: 0.03578038165008584\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.15555555555555556,\n \"exact_match_stderr,none\"\
: 0.03130948364878313\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5271775265957447,\n\
\ \"acc_stderr,none\": 0.004551731502026407\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4351851851851852,\n \"acc_norm_stderr,none\"\
: 0.017388753053079846,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.588,\n\
\ \"acc_norm_stderr,none\": 0.031191596026022818\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.26171875,\n \"acc_norm_stderr,none\"\
: 0.027526959754524398\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_strict_acc,none\"\
: 0.8177458033573142,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_loose_acc,none\": 0.8345323741007195,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc_norm,none\": 0.5912569723699572,\n \"acc_norm_stderr,none\"\
: 0.0051334531160445035,\n \"exact_match,none\": 0.3157099697885196,\n \
\ \"exact_match_stderr,none\": 0.01209173512474148,\n \"prompt_level_strict_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_strict_acc_stderr,none\": 0.018434587800223168,\n\
\ \"prompt_level_loose_acc,none\": 0.7763401109057301,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.017931771054658346,\n \"acc,none\": 0.5271775265957447,\n \"acc_stderr,none\"\
: 0.004551731502026407,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6559625065092866,\n \"acc_norm_stderr,none\"\
: 0.005789509618590658,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.908,\n \"acc_norm_stderr,none\": 0.01831627537942964\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6256684491978609,\n \"acc_norm_stderr,none\"\
: 0.0354849234134303\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.704,\n \"acc_norm_stderr,none\": 0.028928939388379697\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.92,\n \"acc_norm_stderr,none\": 0.017192507941463025\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6575342465753424,\n\
\ \"acc_norm_stderr,none\": 0.03940794258783979\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.816,\n \"acc_norm_stderr,none\": 0.02455581299422255\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7808988764044944,\n \"acc_norm_stderr,none\"\
: 0.031090883837921395\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.768,\n \"acc_norm_stderr,none\": 0.026750070374865202\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.892,\n \"acc_norm_stderr,none\": 0.019669559381568776\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.356,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3775167785234899,\n\
\ \"acc_norm_stderr,none\": 0.014056847074819325,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.36363636363636365,\n\
\ \"acc_norm_stderr,none\": 0.03427308652999934\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.38461538461538464,\n \"acc_norm_stderr,none\": 0.02083955266087989\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\": 0.02289822829522849\n\
\ },\n \"leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\"\
,\n \"prompt_level_strict_acc,none\": 0.7578558225508318,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018434587800223168,\n \"inst_level_strict_acc,none\": 0.8177458033573142,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7763401109057301,\n \"prompt_level_loose_acc_stderr,none\": 0.017931771054658346,\n\
\ \"inst_level_loose_acc,none\": 0.8345323741007195,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.3157099697885196,\n \"exact_match_stderr,none\": 0.01209173512474148,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.5407166123778502,\n \"exact_match_stderr,none\": 0.028488167333608636\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2845528455284553,\n \"exact_match_stderr,none\": 0.04084983733239221\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.22727272727272727,\n \"exact_match_stderr,none\"\
: 0.036614333604107194\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.15,\n \"exact_match_stderr,none\": 0.021377306830183917\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.2597402597402597,\n \"exact_match_stderr,none\"\
: 0.03544997923156578\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.43523316062176165,\n \"exact_match_stderr,none\": 0.03578038165008584\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.15555555555555556,\n\
\ \"exact_match_stderr,none\": 0.03130948364878313\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5271775265957447,\n\
\ \"acc_stderr,none\": 0.004551731502026407\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4351851851851852,\n \"acc_norm_stderr,none\"\
: 0.017388753053079846,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.26171875,\n\
\ \"acc_norm_stderr,none\": 0.027526959754524398\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n }\n}\n```"
repo_url: https://huggingface.co/sthenno-com/miscii-14b-1225
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_ifeval
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_ifeval_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T20-22-10.882149.jsonl'
- config_name: sthenno-com__miscii-14b-1225__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T20_22_10.882149
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T20-22-10.882149.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T20-22-10.882149.jsonl'
---
# Dataset Card for Evaluation run of sthenno-com/miscii-14b-1225
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sthenno-com/miscii-14b-1225](https://huggingface.co/sthenno-com/miscii-14b-1225).
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/sthenno-com__miscii-14b-1225-details",
name="sthenno-com__miscii-14b-1225__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
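The object returned by `load_dataset` is a standard `datasets.Dataset`, so the per-sample records can be inspected like any other split. Below is a minimal sketch (assuming the `datasets` and `pandas` packages are installed, and using the configuration name shown above) of converting the split to a DataFrame for a quick look:
```python
from datasets import load_dataset

# Load the "latest" split of one task configuration (name taken from the configs listed above).
data = load_dataset(
    "open-llm-leaderboard/sthenno-com__miscii-14b-1225-details",
    name="sthenno-com__miscii-14b-1225__leaderboard_bbh_boolean_expressions",
    split="latest",
)

# Convert to pandas for a quick look at the per-sample fields.
df = data.to_pandas()
print(df.columns.tolist())
print(df.head())
```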
## Latest results
These are the [latest results from run 2024-12-25T20-22-10.882149](https://huggingface.co/datasets/open-llm-leaderboard/sthenno-com__miscii-14b-1225-details/blob/main/sthenno-com__miscii-14b-1225/results_2024-12-25T20-22-10.882149.json) (note that there might be results for other tasks in the repository if successive evaluations didn't cover the same tasks; you can find each one in its own configuration, with the "latest" split of each eval pointing to its most recent results):
```python
{
"all": {
"leaderboard": {
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.8345323741007195,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.5912569723699572,
"acc_norm_stderr,none": 0.0051334531160445035,
"exact_match,none": 0.3157099697885196,
"exact_match_stderr,none": 0.01209173512474148,
"prompt_level_strict_acc,none": 0.7578558225508318,
"prompt_level_strict_acc_stderr,none": 0.018434587800223168,
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"acc,none": 0.5271775265957447,
"acc_stderr,none": 0.004551731502026407,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6559625065092866,
"acc_norm_stderr,none": 0.005789509618590658,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.908,
"acc_norm_stderr,none": 0.01831627537942964
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6256684491978609,
"acc_norm_stderr,none": 0.0354849234134303
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.704,
"acc_norm_stderr,none": 0.028928939388379697
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.92,
"acc_norm_stderr,none": 0.017192507941463025
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6575342465753424,
"acc_norm_stderr,none": 0.03940794258783979
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7808988764044944,
"acc_norm_stderr,none": 0.031090883837921395
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.768,
"acc_norm_stderr,none": 0.026750070374865202
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.892,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3775167785234899,
"acc_norm_stderr,none": 0.014056847074819325,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.36363636363636365,
"acc_norm_stderr,none": 0.03427308652999934
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.38461538461538464,
"acc_norm_stderr,none": 0.02083955266087989
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.02289822829522849
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7578558225508318,
"prompt_level_strict_acc_stderr,none": 0.018434587800223168,
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"inst_level_loose_acc,none": 0.8345323741007195,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.3157099697885196,
"exact_match_stderr,none": 0.01209173512474148,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.5407166123778502,
"exact_match_stderr,none": 0.028488167333608636
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2845528455284553,
"exact_match_stderr,none": 0.04084983733239221
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.22727272727272727,
"exact_match_stderr,none": 0.036614333604107194
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.15,
"exact_match_stderr,none": 0.021377306830183917
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.2597402597402597,
"exact_match_stderr,none": 0.03544997923156578
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.43523316062176165,
"exact_match_stderr,none": 0.03578038165008584
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.15555555555555556,
"exact_match_stderr,none": 0.03130948364878313
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5271775265957447,
"acc_stderr,none": 0.004551731502026407
},
"leaderboard_musr": {
"acc_norm,none": 0.4351851851851852,
"acc_norm_stderr,none": 0.017388753053079846,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.26171875,
"acc_norm_stderr,none": 0.027526959754524398
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
}
},
"leaderboard": {
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.8345323741007195,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.5912569723699572,
"acc_norm_stderr,none": 0.0051334531160445035,
"exact_match,none": 0.3157099697885196,
"exact_match_stderr,none": 0.01209173512474148,
"prompt_level_strict_acc,none": 0.7578558225508318,
"prompt_level_strict_acc_stderr,none": 0.018434587800223168,
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"acc,none": 0.5271775265957447,
"acc_stderr,none": 0.004551731502026407,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6559625065092866,
"acc_norm_stderr,none": 0.005789509618590658,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.908,
"acc_norm_stderr,none": 0.01831627537942964
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6256684491978609,
"acc_norm_stderr,none": 0.0354849234134303
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.704,
"acc_norm_stderr,none": 0.028928939388379697
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.92,
"acc_norm_stderr,none": 0.017192507941463025
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6575342465753424,
"acc_norm_stderr,none": 0.03940794258783979
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.816,
"acc_norm_stderr,none": 0.02455581299422255
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7808988764044944,
"acc_norm_stderr,none": 0.031090883837921395
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.768,
"acc_norm_stderr,none": 0.026750070374865202
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.892,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.356,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3775167785234899,
"acc_norm_stderr,none": 0.014056847074819325,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.36363636363636365,
"acc_norm_stderr,none": 0.03427308652999934
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.38461538461538464,
"acc_norm_stderr,none": 0.02083955266087989
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.02289822829522849
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7578558225508318,
"prompt_level_strict_acc_stderr,none": 0.018434587800223168,
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"inst_level_loose_acc,none": 0.8345323741007195,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.3157099697885196,
"exact_match_stderr,none": 0.01209173512474148,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.5407166123778502,
"exact_match_stderr,none": 0.028488167333608636
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2845528455284553,
"exact_match_stderr,none": 0.04084983733239221
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.22727272727272727,
"exact_match_stderr,none": 0.036614333604107194
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.15,
"exact_match_stderr,none": 0.021377306830183917
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.2597402597402597,
"exact_match_stderr,none": 0.03544997923156578
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.43523316062176165,
"exact_match_stderr,none": 0.03578038165008584
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.15555555555555556,
"exact_match_stderr,none": 0.03130948364878313
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5271775265957447,
"acc_stderr,none": 0.004551731502026407
},
"leaderboard_musr": {
"acc_norm,none": 0.4351851851851852,
"acc_norm_stderr,none": 0.017388753053079846,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.26171875,
"acc_norm_stderr,none": 0.027526959754524398
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
}
}
```
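If you only need the aggregated numbers above rather than the per-sample details, one option is to fetch the results file linked at the top of this section directly. This is a sketch assuming the `huggingface_hub` package and that the JSON layout matches the snippet above:
```python
import json
from huggingface_hub import hf_hub_download

# File path taken from the "latest results" link above.
results_path = hf_hub_download(
    repo_id="open-llm-leaderboard/sthenno-com__miscii-14b-1225-details",
    filename="sthenno-com__miscii-14b-1225/results_2024-12-25T20-22-10.882149.json",
    repo_type="dataset",
)

with open(results_path) as f:
    results = json.load(f)

# Assuming the layout shown above, "all" holds the aggregated per-task metrics.
print(results["all"]["leaderboard_bbh"]["acc_norm,none"])
```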
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
grudgie/amazon-reviews-subset | grudgie | "2024-12-25T20:33:37Z" | 0 | 0 | ["region:us"] | null | "2024-12-25T20:32:43Z" | ---
dataset_info:
features:
- name: rating
dtype: float64
- name: title
dtype: string
- name: text
dtype: string
- name: images
list:
- name: attachment_type
dtype: string
- name: large_image_url
dtype: string
- name: medium_image_url
dtype: string
- name: small_image_url
dtype: string
- name: asin
dtype: string
- name: parent_asin
dtype: string
- name: user_id
dtype: string
- name: timestamp
dtype: int64
- name: helpful_vote
dtype: int64
- name: verified_purchase
dtype: bool
splits:
- name: full
num_bytes: 80811509.53473501
num_examples: 256915
download_size: 52403165
dataset_size: 80811509.53473501
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
---
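A minimal sketch of loading this subset with the `datasets` library, assuming the default configuration and the single `full` split declared in the metadata above:
```python
from datasets import load_dataset

# The metadata above declares one "full" split under the default configuration.
reviews = load_dataset("grudgie/amazon-reviews-subset", split="full")

# Each record carries the fields listed above (rating, title, text, images, asin, ...).
print(reviews[0]["rating"], reviews[0]["title"])
```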
|
open-llm-leaderboard/zelk12__MT1-Gen5-gemma-2-9B-details | open-llm-leaderboard | "2024-12-25T20:37:47Z" | 0 | 0 | ["region:us"] | null | "2024-12-25T20:34:22Z" | ---
pretty_name: Evaluation run of zelk12/MT1-Gen5-gemma-2-9B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [zelk12/MT1-Gen5-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen5-gemma-2-9B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/zelk12__MT1-Gen5-gemma-2-9B-details\"\
,\n\tname=\"zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T20-34-21.650551](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT1-Gen5-gemma-2-9B-details/blob/main/zelk12__MT1-Gen5-gemma-2-9B/results_2024-12-25T20-34-21.650551.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.5437799974056298,\n \"acc_norm_stderr,none\"\
: 0.00533024538506611,\n \"prompt_level_loose_acc,none\": 0.767097966728281,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n \
\ \"exact_match,none\": 0.0324773413897281,\n \"exact_match_stderr,none\"\
: 0.004793317189961975,\n \"inst_level_strict_acc,none\": 0.8177458033573142,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\
: 0.8369304556354916,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"acc,none\": 0.42220744680851063,\n \"acc_stderr,none\"\
: 0.004502959606988701,\n \"prompt_level_strict_acc,none\": 0.7412199630314233,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018846992560712525,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\":\
\ {\n \"acc_norm,none\": 0.6011109182433605,\n \"acc_norm_stderr,none\"\
: 0.006120192093942356,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.844,\n\
\ \"acc_norm_stderr,none\": 0.022995023034068682\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6203208556149733,\n \"acc_norm_stderr,none\"\
: 0.03558443628801667\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.604,\n\
\ \"acc_norm_stderr,none\": 0.030993197854577898\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.696,\n \
\ \"acc_norm_stderr,none\": 0.029150213374159652\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\":\
\ 0.029933259094191533\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6027397260273972,\n \"acc_norm_stderr,none\": 0.040636704038880346\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.796,\n \
\ \"acc_norm_stderr,none\": 0.025537121574548162\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6348314606741573,\n \"acc_norm_stderr,none\"\
: 0.03619005678691264\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.772,\n\
\ \"acc_norm_stderr,none\": 0.026587432487268498\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\":\
\ 0.02936106757521985\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.516,\n\
\ \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3464765100671141,\n\
\ \"acc_norm_stderr,none\": 0.013799354494924254,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.35353535353535354,\n \"acc_norm_stderr,none\": 0.03406086723547151\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.34798534798534797,\n\
\ \"acc_norm_stderr,none\": 0.02040379284640522\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.34151785714285715,\n \"acc_norm_stderr,none\"\
: 0.022429776589214533\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7412199630314233,\n \"prompt_level_strict_acc_stderr,none\": 0.018846992560712525,\n\
\ \"inst_level_strict_acc,none\": 0.8177458033573142,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.767097966728281,\n \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n\
\ \"inst_level_loose_acc,none\": 0.8369304556354916,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0324773413897281,\n \"exact_match_stderr,none\"\
: 0.004793317189961975,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.06514657980456026,\n\
\ \"exact_match_stderr,none\": 0.014107720843558174\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.016260162601626018,\n \"exact_match_stderr,none\": 0.011450452676925654\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.025974025974025976,\n \"exact_match_stderr,none\": 0.012859058999697068\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\": 0.08808290155440414,\n\
\ \"exact_match_stderr,none\": 0.020453746601601056\n },\n \
\ \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" - leaderboard_math_precalculus_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \"\
\ - leaderboard_mmlu_pro\",\n \"acc,none\": 0.42220744680851063,\n \
\ \"acc_stderr,none\": 0.004502959606988701\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.41798941798941797,\n \"acc_norm_stderr,none\"\
: 0.01748438652663388,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.552,\n\
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2734375,\n \"acc_norm_stderr,none\"\
: 0.027912287939448926\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.5437799974056298,\n\
\ \"acc_norm_stderr,none\": 0.00533024538506611,\n \"prompt_level_loose_acc,none\"\
: 0.767097966728281,\n \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n\
\ \"exact_match,none\": 0.0324773413897281,\n \"exact_match_stderr,none\"\
: 0.004793317189961975,\n \"inst_level_strict_acc,none\": 0.8177458033573142,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"inst_level_loose_acc,none\"\
: 0.8369304556354916,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"acc,none\": 0.42220744680851063,\n \"acc_stderr,none\": 0.004502959606988701,\n\
\ \"prompt_level_strict_acc,none\": 0.7412199630314233,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018846992560712525,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6011109182433605,\n \"acc_norm_stderr,none\"\
: 0.006120192093942356,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6203208556149733,\n \"acc_norm_stderr,none\"\
: 0.03558443628801667\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.696,\n \"acc_norm_stderr,none\": 0.029150213374159652\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6027397260273972,\n\
\ \"acc_norm_stderr,none\": 0.040636704038880346\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.796,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6348314606741573,\n \"acc_norm_stderr,none\"\
: 0.03619005678691264\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.772,\n \"acc_norm_stderr,none\": 0.026587432487268498\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3464765100671141,\n\
\ \"acc_norm_stderr,none\": 0.013799354494924254,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.35353535353535354,\n\
\ \"acc_norm_stderr,none\": 0.03406086723547151\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.34798534798534797,\n \"acc_norm_stderr,none\": 0.02040379284640522\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.34151785714285715,\n \"acc_norm_stderr,none\"\
: 0.022429776589214533\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7412199630314233,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018846992560712525,\n \
\ \"inst_level_strict_acc,none\": 0.8177458033573142,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.767097966728281,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n \"inst_level_loose_acc,none\"\
: 0.8369304556354916,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0324773413897281,\n\
\ \"exact_match_stderr,none\": 0.004793317189961975,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.06514657980456026,\n \"exact_match_stderr,none\": 0.014107720843558174\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.016260162601626018,\n \"exact_match_stderr,none\": 0.011450452676925654\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.025974025974025976,\n \"exact_match_stderr,none\": 0.012859058999697068\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.08808290155440414,\n \"exact_match_stderr,none\"\
: 0.020453746601601056\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.42220744680851063,\n\
\ \"acc_stderr,none\": 0.004502959606988701\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.41798941798941797,\n \"acc_norm_stderr,none\"\
: 0.01748438652663388,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.2734375,\n\
\ \"acc_norm_stderr,none\": 0.027912287939448926\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n }\n}\n```"
repo_url: https://huggingface.co/zelk12/MT1-Gen5-gemma-2-9B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_ifeval
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_ifeval_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T20-34-21.650551.jsonl'
- config_name: zelk12__MT1-Gen5-gemma-2-9B__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T20_34_21.650551
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T20-34-21.650551.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T20-34-21.650551.jsonl'
---
# Dataset Card for Evaluation run of zelk12/MT1-Gen5-gemma-2-9B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [zelk12/MT1-Gen5-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen5-gemma-2-9B).
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/zelk12__MT1-Gen5-gemma-2-9B-details",
name="zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
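The returned object is a regular `datasets` split, so the usual inspection tools apply. The sketch below is illustrative only; the exact per-sample columns vary by task and are not guaranteed by this card:

```python
from datasets import load_dataset

# Per-sample details for a single task of this evaluation run.
data = load_dataset(
    "open-llm-leaderboard/zelk12__MT1-Gen5-gemma-2-9B-details",
    name="zelk12__MT1-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions",
    split="latest",
)

# Basic inspection: number of logged samples and the available columns.
print(len(data))
print(data.column_names)

# Convert to pandas for ad-hoc analysis of the per-sample records.
df = data.to_pandas()
print(df.head())
```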
## Latest results
These are the [latest results from run 2024-12-25T20-34-21.650551](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT1-Gen5-gemma-2-9B-details/blob/main/zelk12__MT1-Gen5-gemma-2-9B/results_2024-12-25T20-34-21.650551.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.5437799974056298,
"acc_norm_stderr,none": 0.00533024538506611,
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"exact_match,none": 0.0324773413897281,
"exact_match_stderr,none": 0.004793317189961975,
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.8369304556354916,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.42220744680851063,
"acc_stderr,none": 0.004502959606988701,
"prompt_level_strict_acc,none": 0.7412199630314233,
"prompt_level_strict_acc_stderr,none": 0.018846992560712525,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6011109182433605,
"acc_norm_stderr,none": 0.006120192093942356,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6203208556149733,
"acc_norm_stderr,none": 0.03558443628801667
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.696,
"acc_norm_stderr,none": 0.029150213374159652
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6027397260273972,
"acc_norm_stderr,none": 0.040636704038880346
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6348314606741573,
"acc_norm_stderr,none": 0.03619005678691264
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.772,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3464765100671141,
"acc_norm_stderr,none": 0.013799354494924254,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35353535353535354,
"acc_norm_stderr,none": 0.03406086723547151
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.34798534798534797,
"acc_norm_stderr,none": 0.02040379284640522
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.34151785714285715,
"acc_norm_stderr,none": 0.022429776589214533
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7412199630314233,
"prompt_level_strict_acc_stderr,none": 0.018846992560712525,
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"inst_level_loose_acc,none": 0.8369304556354916,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0324773413897281,
"exact_match_stderr,none": 0.004793317189961975,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.06514657980456026,
"exact_match_stderr,none": 0.014107720843558174
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.016260162601626018,
"exact_match_stderr,none": 0.011450452676925654
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.025974025974025976,
"exact_match_stderr,none": 0.012859058999697068
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.08808290155440414,
"exact_match_stderr,none": 0.020453746601601056
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.42220744680851063,
"acc_stderr,none": 0.004502959606988701
},
"leaderboard_musr": {
"acc_norm,none": 0.41798941798941797,
"acc_norm_stderr,none": 0.01748438652663388,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2734375,
"acc_norm_stderr,none": 0.027912287939448926
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
},
"leaderboard": {
"acc_norm,none": 0.5437799974056298,
"acc_norm_stderr,none": 0.00533024538506611,
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"exact_match,none": 0.0324773413897281,
"exact_match_stderr,none": 0.004793317189961975,
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.8369304556354916,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.42220744680851063,
"acc_stderr,none": 0.004502959606988701,
"prompt_level_strict_acc,none": 0.7412199630314233,
"prompt_level_strict_acc_stderr,none": 0.018846992560712525,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6011109182433605,
"acc_norm_stderr,none": 0.006120192093942356,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6203208556149733,
"acc_norm_stderr,none": 0.03558443628801667
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.696,
"acc_norm_stderr,none": 0.029150213374159652
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6027397260273972,
"acc_norm_stderr,none": 0.040636704038880346
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6348314606741573,
"acc_norm_stderr,none": 0.03619005678691264
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.772,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3464765100671141,
"acc_norm_stderr,none": 0.013799354494924254,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35353535353535354,
"acc_norm_stderr,none": 0.03406086723547151
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.34798534798534797,
"acc_norm_stderr,none": 0.02040379284640522
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.34151785714285715,
"acc_norm_stderr,none": 0.022429776589214533
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7412199630314233,
"prompt_level_strict_acc_stderr,none": 0.018846992560712525,
"inst_level_strict_acc,none": 0.8177458033573142,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"inst_level_loose_acc,none": 0.8369304556354916,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0324773413897281,
"exact_match_stderr,none": 0.004793317189961975,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.06514657980456026,
"exact_match_stderr,none": 0.014107720843558174
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.016260162601626018,
"exact_match_stderr,none": 0.011450452676925654
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.025974025974025976,
"exact_match_stderr,none": 0.012859058999697068
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.08808290155440414,
"exact_match_stderr,none": 0.020453746601601056
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.42220744680851063,
"acc_stderr,none": 0.004502959606988701
},
"leaderboard_musr": {
"acc_norm,none": 0.41798941798941797,
"acc_norm_stderr,none": 0.01748438652663388,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2734375,
"acc_norm_stderr,none": 0.027912287939448926
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
extraordinarylab/infinity-instruct-inverse | extraordinarylab | "2024-12-25T20:38:05Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:34:42Z" | ---
dataset_info:
config_name: 660k
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: label
struct:
- name: ability_en
sequence: string
- name: ability_zh
sequence: string
- name: cate_ability_en
sequence: string
- name: cate_ability_zh
sequence: string
- name: langdetect
dtype: string
- name: source
dtype: string
- name: inverse_conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 4643256198
num_examples: 659808
download_size: 2457257628
dataset_size: 4643256198
configs:
- config_name: 660k
data_files:
- split: train
path: 660k/train-*
---
|
dgambettavuw/D_gen10_run2_llama2-7b_sciabs_doc1000_real32_synt96_vuw | dgambettavuw | "2024-12-25T20:42:40Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:42:36Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 636107
num_examples: 1000
download_size: 257601
dataset_size: 636107
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
spiralworks/lg_domain_gen_2015_2020_test_2 | spiralworks | "2024-12-25T21:25:43Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:47:00Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: abstract
dtype: string
- name: year
dtype: string
- name: venue
dtype: string
- name: keywords
sequence: string
- name: pdf_url
dtype: string
- name: forum_url
dtype: string
- name: forum_raw_text
dtype: string
- name: reviews_raw_text
dtype: string
- name: average_rating
dtype: float64
- name: average_confidence
dtype: float64
- name: reviews
dtype: string
splits:
- name: train
num_bytes: 40140579
num_examples: 1019
download_size: 19796051
dataset_size: 40140579
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
violetxi/MATH-500_L4_best_first_N128_B4_D15_T0.0001_41-128 | violetxi | "2024-12-26T00:08:23Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:47:13Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: search_trace_with_values
dtype: string
- name: search_method
dtype: string
- name: ground_truth
dtype: string
- name: search_input_tokens
dtype: int64
- name: search_output_tokens
dtype: int64
- name: solution_input_tokens
dtype: int64
- name: solution_output_tokens
dtype: int64
splits:
- name: train
num_bytes: 40314
num_examples: 10
download_size: 30594
dataset_size: 40314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ParsBench/PersianSyntheticEmotions | ParsBench | "2024-12-25T20:58:40Z" | 0 | 0 | [
"language:fa",
"license:apache-2.0",
"size_categories:1K<n<10K",
"region:us",
"persian",
"emotion",
"synthetic",
"text-classification"
] | null | "2024-12-25T20:49:21Z" | ---
license: apache-2.0
tags:
- persian
- emotion
- synthetic
- text-classification
language:
- fa
size_categories:
- 1K<n<10K
---
# PersianSyntheticEmotions Dataset
This dataset contains 8,751 Persian text records synthetically generated using the GPT-4o model, labeled with Ekman's six basic emotions. The dataset is sourced from [PersianSyntheticData](https://github.com/ParsBench/PersianSyntheticData).
## Dataset Description
### Dataset Summary
The PersianSyntheticEmotions dataset is a collection of Persian texts labeled with six emotion classes based on Ekman's basic emotions theory. The data is synthetically generated, making it useful for training emotion classification models for Persian language tasks.
### Supported Tasks
- Text Classification
- Emotion Recognition
- Sentiment Analysis
### Languages
- Persian (fa)
### Loading the Dataset
```python
from datasets import load_dataset
dataset = load_dataset("ParsBench/PersianSyntheticEmotions")
```
### Dataset Structure
The dataset contains the following emotion classes:
- Joy (شادی)
- Sadness (غم)
- Anger (خشم)
- Fear (ترس)
- Disgust (تنفر)
- Surprise (تعجب)
Total number of examples: 8,751
### Data Fields
- `text`: Persian text content
- `label`: Emotion class label (one of the six Ekman emotions)
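For text-classification use, the string labels can be mapped to integer ids before training. The sketch below is a minimal example, not part of the original card: it assumes the dataset loads with a single `train` split (hypothetical split name) and uses only the `text` and `label` fields described above, building the mapping from the data itself rather than hard-coding the Persian label strings.

```python
from datasets import load_dataset

# Assumption: the JSONL data loads as a single "train" split.
dataset = load_dataset("ParsBench/PersianSyntheticEmotions", split="train")

# Build a label-to-id mapping from the labels actually present in the data.
labels = sorted(set(dataset["label"]))
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

# Attach an integer label column, as expected by most classification trainers.
dataset = dataset.map(lambda example: {"label_id": label2id[example["label"]]})

print(label2id)
print(dataset[0]["text"], "->", dataset[0]["label_id"])
```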
### Data Splits
The data is provided in JSONL format in the `data` directory.
### Data Samples
```json
{
"text": "دیروز در جشنواره محلی روستا شرکت کردم و از دیدن رقصهای محلی و غذاهای سنتی بسیار لذت بردم.",
"label": "شادی"
}
```
### Dataset Statistics
| Emotion Label | Count |
|--------------|-------|
| شادی (Joy) | 2746 |
| غم (Sadness) | 1638 |
| تعجب (Surprise) | 1282 |
| خشم (Anger) | 1011 |
| ترس (Fear) | 1124 |
| نفرت (Disgust) | 950 |
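The counts in the table above can be recomputed from the loaded data. This is a minimal sketch under the same assumption as before (a single `train` split and the documented `label` field):

```python
from collections import Counter

from datasets import load_dataset

# Assumption: the JSONL data loads as a single "train" split.
dataset = load_dataset("ParsBench/PersianSyntheticEmotions", split="train")

# Count how many records carry each emotion label.
label_counts = Counter(dataset["label"])
for label, count in label_counts.most_common():
    print(f"{label}: {count}")
```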
### Source Data
The dataset is derived from the [PersianSyntheticData](https://github.com/ParsBench/PersianSyntheticData) project. The texts were generated using the GPT-4o model and annotated with appropriate emotion labels.
### Considerations for Using the Data
Since this is synthetic data generated by AI:
- The quality and naturalness of the text may vary
- The data might not perfectly represent real-world emotion expressions
- It's recommended to validate the model's performance on real-world data
## Additional Information
### Dataset Curators
This dataset was curated by the ParsBench team.
### Licensing Information
This dataset is licensed under Apache 2.0 License.
### Citation Information
If you use this dataset, please cite the original work:
```bibtex
@dataset{ParsBench/PersianSyntheticEmotions,
author = {ParsBench},
title = {PersianSyntheticEmotions},
year = {2025},
url = {https://github.com/ParsBench/PersianSyntheticEmotions}
}
```
### Contributions
Thanks to the ParsBench team for creating and sharing this dataset.
|
fernandabufon/eng_to_pt_qwq32b | fernandabufon | "2024-12-25T20:49:35Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:49:33Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: translation
dtype: string
- name: Anger
dtype: int64
- name: Fear
dtype: int64
- name: Joy
dtype: int64
- name: Sadness
dtype: int64
- name: Surprise
dtype: int64
- name: prompt_style
dtype: string
- name: inference_time
dtype: float64
- name: inference_total_time
dtype: float64
- name: inference_average_time
dtype: float64
splits:
- name: train
num_bytes: 9881
num_examples: 3
download_size: 12542
dataset_size: 9881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/qingy2024__Falcon3-2x10B-MoE-Instruct-details | open-llm-leaderboard | "2024-12-25T20:54:44Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:51:46Z" | ---
pretty_name: Evaluation run of qingy2024/Falcon3-2x10B-MoE-Instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [qingy2024/Falcon3-2x10B-MoE-Instruct](https://huggingface.co/qingy2024/Falcon3-2x10B-MoE-Instruct)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/qingy2024__Falcon3-2x10B-MoE-Instruct-details\"\
,\n\tname=\"qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T20-51-45.563361](https://huggingface.co/datasets/open-llm-leaderboard/qingy2024__Falcon3-2x10B-MoE-Instruct-details/blob/main/qingy2024__Falcon3-2x10B-MoE-Instruct/results_2024-12-25T20-51-45.563361.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.553508885717992,\n \"acc_norm_stderr,none\"\
: 0.005255748604320327,\n \"inst_level_loose_acc,none\": 0.8465227817745803,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.8213429256594724,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"prompt_level_strict_acc,none\": 0.7486136783733827,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.018668216152240437,\n \
\ \"acc,none\": 0.44232047872340424,\n \"acc_stderr,none\": 0.004528037433703766,\n\
\ \"prompt_level_loose_acc,none\": 0.7763401109057301,\n \"\
prompt_level_loose_acc_stderr,none\": 0.017931771054658346,\n \"exact_match,none\"\
: 0.25755287009063443,\n \"exact_match_stderr,none\": 0.011203792279195409,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6162124631140427,\n \"acc_norm_stderr,none\"\
: 0.006015865354691821,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.876,\n\
\ \"acc_norm_stderr,none\": 0.020886382258673272\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6417112299465241,\n \"acc_norm_stderr,none\"\
: 0.03515846823665025\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.716,\n\
\ \"acc_norm_stderr,none\": 0.028576958730437443\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\":\
\ 0.029933259094191533\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.7,\n \
\ \"acc_norm_stderr,none\": 0.029040893477575786\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n\
\ \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.772,\n \"acc_norm_stderr,none\": 0.026587432487268498\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\":\
\ 0.029933259094191533\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6027397260273972,\n \"acc_norm_stderr,none\": 0.040636704038880346\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.732,\n \
\ \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\":\
\ 0.030881038748993974\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.8033707865168539,\n \"acc_norm_stderr,none\": 0.029874139553421764\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.752,\n\
\ \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.236,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\":\
\ 0.02721799546455311\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.33053691275167785,\n\
\ \"acc_norm_stderr,none\": 0.013638887761635785,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.35353535353535354,\n \"acc_norm_stderr,none\": 0.03406086723547151\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.326007326007326,\n\
\ \"acc_norm_stderr,none\": 0.0200790433174674\n },\n \"\
leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.32589285714285715,\n \"acc_norm_stderr,none\"\
: 0.02216910313464343\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7486136783733827,\n \"prompt_level_strict_acc_stderr,none\": 0.018668216152240437,\n\
\ \"inst_level_strict_acc,none\": 0.8213429256594724,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7763401109057301,\n \"prompt_level_loose_acc_stderr,none\": 0.017931771054658346,\n\
\ \"inst_level_loose_acc,none\": 0.8465227817745803,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.25755287009063443,\n \"exact_match_stderr,none\"\
: 0.011203792279195409,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.46254071661237783,\n\
\ \"exact_match_stderr,none\": 0.028502769163031596\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.18699186991869918,\n \"exact_match_stderr,none\": 0.03530034023230448\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.07575757575757576,\n\
\ \"exact_match_stderr,none\": 0.023119068741795586\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.07857142857142857,\n \"exact_match_stderr,none\": 0.016108721027177985\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.3246753246753247,\n\
\ \"exact_match_stderr,none\": 0.03785604468377516\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.40414507772020725,\n \"exact_match_stderr,none\"\
: 0.0354150857888402\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.11851851851851852,\n \"exact_match_stderr,none\"\
: 0.027922050250639006\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.44232047872340424,\n\
\ \"acc_stderr,none\": 0.004528037433703766\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42724867724867727,\n \"acc_norm_stderr,none\"\
: 0.017554674138374065,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.564,\n\
\ \"acc_norm_stderr,none\": 0.03142556706028136\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2890625,\n \"acc_norm_stderr,none\"\
: 0.02838843806999465\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.553508885717992,\n\
\ \"acc_norm_stderr,none\": 0.005255748604320327,\n \"inst_level_loose_acc,none\"\
: 0.8465227817745803,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.8213429256594724,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.7486136783733827,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.018668216152240437,\n \"\
acc,none\": 0.44232047872340424,\n \"acc_stderr,none\": 0.004528037433703766,\n\
\ \"prompt_level_loose_acc,none\": 0.7763401109057301,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.017931771054658346,\n \"exact_match,none\": 0.25755287009063443,\n \
\ \"exact_match_stderr,none\": 0.011203792279195409,\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6162124631140427,\n\
\ \"acc_norm_stderr,none\": 0.006015865354691821,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.876,\n \"acc_norm_stderr,none\": 0.020886382258673272\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6417112299465241,\n \"acc_norm_stderr,none\"\
: 0.03515846823665025\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.82,\n \"acc_norm_stderr,none\": 0.02434689065029351\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.772,\n \"acc_norm_stderr,none\": 0.026587432487268498\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6027397260273972,\n\
\ \"acc_norm_stderr,none\": 0.040636704038880346\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.8033707865168539,\n \"acc_norm_stderr,none\"\
: 0.029874139553421764\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.752,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.236,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.33053691275167785,\n\
\ \"acc_norm_stderr,none\": 0.013638887761635785,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.35353535353535354,\n\
\ \"acc_norm_stderr,none\": 0.03406086723547151\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.326007326007326,\n \"acc_norm_stderr,none\": 0.0200790433174674\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.32589285714285715,\n \"acc_norm_stderr,none\"\
: 0.02216910313464343\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7486136783733827,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018668216152240437,\n \
\ \"inst_level_strict_acc,none\": 0.8213429256594724,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7763401109057301,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.017931771054658346,\n \"inst_level_loose_acc,none\"\
: 0.8465227817745803,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.25755287009063443,\n\
\ \"exact_match_stderr,none\": 0.011203792279195409,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.46254071661237783,\n \"exact_match_stderr,none\": 0.028502769163031596\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.18699186991869918,\n \"exact_match_stderr,none\": 0.03530034023230448\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.07575757575757576,\n \"exact_match_stderr,none\"\
: 0.023119068741795586\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.07857142857142857,\n \"exact_match_stderr,none\"\
: 0.016108721027177985\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.3246753246753247,\n \"exact_match_stderr,none\": 0.03785604468377516\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.40414507772020725,\n \"exact_match_stderr,none\"\
: 0.0354150857888402\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.11851851851851852,\n \"exact_match_stderr,none\": 0.027922050250639006\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.44232047872340424,\n \"acc_stderr,none\": 0.004528037433703766\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.42724867724867727,\n\
\ \"acc_norm_stderr,none\": 0.017554674138374065,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2890625,\n \"acc_norm_stderr,none\": 0.02838843806999465\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n}\n```"
repo_url: https://huggingface.co/qingy2024/Falcon3-2x10B-MoE-Instruct
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_ifeval
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_ifeval_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T20-51-45.563361.jsonl'
- config_name: qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T20_51_45.563361
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T20-51-45.563361.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T20-51-45.563361.jsonl'
---
# Dataset Card for Evaluation run of qingy2024/Falcon3-2x10B-MoE-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [qingy2024/Falcon3-2x10B-MoE-Instruct](https://huggingface.co/qingy2024/Falcon3-2x10B-MoE-Instruct)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/qingy2024__Falcon3-2x10B-MoE-Instruct-details",
name="qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
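Each per-task configuration loads as a standard `datasets.Dataset`, and the columns recorded in the sample JSONL files vary by task. The snippet below is a minimal inspection sketch, not part of the generated card: it reuses one of the configuration names listed above (`leaderboard_ifeval`) and makes no assumption about which fields are present.

```python
from datasets import load_dataset

# Any of the 38 configuration names listed in this card can be used here.
data = load_dataset(
    "open-llm-leaderboard/qingy2024__Falcon3-2x10B-MoE-Instruct-details",
    name="qingy2024__Falcon3-2x10B-MoE-Instruct__leaderboard_ifeval",
    split="latest",
)

# Basic inspection: number of evaluated samples, recorded columns, and one example row.
print(len(data))
print(data.column_names)
print(data[0])
```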
## Latest results
These are the [latest results from run 2024-12-25T20-51-45.563361](https://huggingface.co/datasets/open-llm-leaderboard/qingy2024__Falcon3-2x10B-MoE-Instruct-details/blob/main/qingy2024__Falcon3-2x10B-MoE-Instruct/results_2024-12-25T20-51-45.563361.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.553508885717992,
"acc_norm_stderr,none": 0.005255748604320327,
"inst_level_loose_acc,none": 0.8465227817745803,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8213429256594724,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7486136783733827,
"prompt_level_strict_acc_stderr,none": 0.018668216152240437,
"acc,none": 0.44232047872340424,
"acc_stderr,none": 0.004528037433703766,
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"exact_match,none": 0.25755287009063443,
"exact_match_stderr,none": 0.011203792279195409,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6162124631140427,
"acc_norm_stderr,none": 0.006015865354691821,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.876,
"acc_norm_stderr,none": 0.020886382258673272
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6417112299465241,
"acc_norm_stderr,none": 0.03515846823665025
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.82,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.772,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6027397260273972,
"acc_norm_stderr,none": 0.040636704038880346
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8033707865168539,
"acc_norm_stderr,none": 0.029874139553421764
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.236,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_gpqa": {
"acc_norm,none": 0.33053691275167785,
"acc_norm_stderr,none": 0.013638887761635785,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35353535353535354,
"acc_norm_stderr,none": 0.03406086723547151
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.326007326007326,
"acc_norm_stderr,none": 0.0200790433174674
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.32589285714285715,
"acc_norm_stderr,none": 0.02216910313464343
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7486136783733827,
"prompt_level_strict_acc_stderr,none": 0.018668216152240437,
"inst_level_strict_acc,none": 0.8213429256594724,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"inst_level_loose_acc,none": 0.8465227817745803,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.25755287009063443,
"exact_match_stderr,none": 0.011203792279195409,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.46254071661237783,
"exact_match_stderr,none": 0.028502769163031596
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.18699186991869918,
"exact_match_stderr,none": 0.03530034023230448
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.07857142857142857,
"exact_match_stderr,none": 0.016108721027177985
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3246753246753247,
"exact_match_stderr,none": 0.03785604468377516
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.40414507772020725,
"exact_match_stderr,none": 0.0354150857888402
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.11851851851851852,
"exact_match_stderr,none": 0.027922050250639006
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.44232047872340424,
"acc_stderr,none": 0.004528037433703766
},
"leaderboard_musr": {
"acc_norm,none": 0.42724867724867727,
"acc_norm_stderr,none": 0.017554674138374065,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2890625,
"acc_norm_stderr,none": 0.02838843806999465
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
},
"leaderboard": {
"acc_norm,none": 0.553508885717992,
"acc_norm_stderr,none": 0.005255748604320327,
"inst_level_loose_acc,none": 0.8465227817745803,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8213429256594724,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7486136783733827,
"prompt_level_strict_acc_stderr,none": 0.018668216152240437,
"acc,none": 0.44232047872340424,
"acc_stderr,none": 0.004528037433703766,
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"exact_match,none": 0.25755287009063443,
"exact_match_stderr,none": 0.011203792279195409,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6162124631140427,
"acc_norm_stderr,none": 0.006015865354691821,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.876,
"acc_norm_stderr,none": 0.020886382258673272
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6417112299465241,
"acc_norm_stderr,none": 0.03515846823665025
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.82,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.772,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6027397260273972,
"acc_norm_stderr,none": 0.040636704038880346
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8033707865168539,
"acc_norm_stderr,none": 0.029874139553421764
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.236,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_gpqa": {
"acc_norm,none": 0.33053691275167785,
"acc_norm_stderr,none": 0.013638887761635785,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.35353535353535354,
"acc_norm_stderr,none": 0.03406086723547151
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.326007326007326,
"acc_norm_stderr,none": 0.0200790433174674
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.32589285714285715,
"acc_norm_stderr,none": 0.02216910313464343
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7486136783733827,
"prompt_level_strict_acc_stderr,none": 0.018668216152240437,
"inst_level_strict_acc,none": 0.8213429256594724,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7763401109057301,
"prompt_level_loose_acc_stderr,none": 0.017931771054658346,
"inst_level_loose_acc,none": 0.8465227817745803,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.25755287009063443,
"exact_match_stderr,none": 0.011203792279195409,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.46254071661237783,
"exact_match_stderr,none": 0.028502769163031596
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.18699186991869918,
"exact_match_stderr,none": 0.03530034023230448
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.07857142857142857,
"exact_match_stderr,none": 0.016108721027177985
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3246753246753247,
"exact_match_stderr,none": 0.03785604468377516
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.40414507772020725,
"exact_match_stderr,none": 0.0354150857888402
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.11851851851851852,
"exact_match_stderr,none": 0.027922050250639006
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.44232047872340424,
"acc_stderr,none": 0.004528037433703766
},
"leaderboard_musr": {
"acc_norm,none": 0.42724867724867727,
"acc_norm_stderr,none": 0.017554674138374065,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2890625,
"acc_norm_stderr,none": 0.02838843806999465
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
1231czx/llama31_test_chat_format_20k_only_firstwrong_and_regular_first_corr_ep3tmp10 | 1231czx | "2024-12-25T20:58:08Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T20:58:06Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 19417822
num_examples: 5000
download_size: 7664136
dataset_size: 19417822
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_test_chat_format_20k_only_firstwrong_and_regular_first_corr_ep3tmp07 | 1231czx | "2024-12-25T21:04:53Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:04:51Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18656371
num_examples: 5000
download_size: 6400469
dataset_size: 18656371
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selfcorrexp2/llama31_first_wrong_and_10kfirst_corr_regular_norr_20k | selfcorrexp2 | "2024-12-25T21:07:11Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:07:03Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: prompt
dtype: string
- name: answers
sequence: string
- name: first_round
dtype: bool
- name: gt
dtype: string
- name: rewards
sequence: bool
- name: my_solu
sequence: string
- name: flag
dtype: bool
- name: turn
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 341281757.4332316
num_examples: 20000
download_size: 154300962
dataset_size: 341281757.4332316
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen0_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-25T21:07:31Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:07:27Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 814493
num_examples: 1000
download_size: 437382
dataset_size: 814493
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selfcorrexp2/llama31_first_wrong_and_20kfirst_corr_regular_norr_20k | selfcorrexp2 | "2024-12-25T21:08:39Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:08:10Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: prompt
dtype: string
- name: answers
sequence: string
- name: first_round
dtype: bool
- name: gt
dtype: string
- name: rewards
sequence: bool
- name: my_solu
sequence: string
- name: flag
dtype: bool
- name: turn
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 338968859.6649252
num_examples: 20000
download_size: 152955034
dataset_size: 338968859.6649252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
selfcorrexp2/llama31_first_wrong_and_40kfirst_corr_regular_norr_20k | selfcorrexp2 | "2024-12-25T21:10:27Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:09:55Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: prompt
dtype: string
- name: answers
sequence: string
- name: first_round
dtype: bool
- name: gt
dtype: string
- name: rewards
sequence: bool
- name: my_solu
sequence: string
- name: flag
dtype: bool
- name: turn
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 335131585.7217776
num_examples: 20000
download_size: 151093042
dataset_size: 335131585.7217776
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_test_chat_format_20k_only_firstwrong_and_regular_first_corr_ep3tmp0 | 1231czx | "2024-12-25T21:11:28Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:11:27Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18517501
num_examples: 5000
download_size: 5853206
dataset_size: 18517501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ParsBench/Persian-NoRobots | ParsBench | "2024-12-25T21:15:05Z" | 0 | 0 | [
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-12-25T21:15:05Z" | ---
license: cc-by-nc-4.0
---
|
Pattosan/my-distiset-55a6b53b | Pattosan | "2024-12-25T21:15:52Z" | 0 | 0 | [
"size_categories:n<1K",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-25T21:15:50Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': failed
'1': degrading
'2': operational
splits:
- name: train
num_bytes: 2626
num_examples: 10
download_size: 3637
dataset_size: 2626
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-55a6b53b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/Pattosan/my-distiset-55a6b53b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/Pattosan/my-distiset-55a6b53b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 1,
"text": "The vibrational spectral density analysis reveals a statistically significant trend towards increased amplitude modulation in the lath machine\u0027s radial direction, concurrently with a decrease in spectral power within the 100-200 Hz frequency band."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("Pattosan/my-distiset-55a6b53b", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("Pattosan/my-distiset-55a6b53b")
```
</details>
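As a minimal sketch (assuming the `label` column is loaded as a `ClassLabel` feature, as declared in the dataset info above), the integer labels can be mapped back to their class names:

```python
from datasets import load_dataset

ds = load_dataset("Pattosan/my-distiset-55a6b53b")

# The label feature declares three classes: failed, degrading, operational.
label_feature = ds["train"].features["label"]
print(label_feature.names)

# Convert the integer label of the first example back to its string name.
example = ds["train"][0]
print(example["text"], "->", label_feature.int2str(example["label"]))
```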
|
open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details | open-llm-leaderboard | "2024-12-25T21:19:19Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:16:20Z" | ---
pretty_name: Evaluation run of zelk12/MT2-Gen5-gemma-2-9B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [zelk12/MT2-Gen5-gemma-2-9B](https://huggingface.co/zelk12/MT2-Gen5-gemma-2-9B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details\"\
,\n\tname=\"zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T21-16-19.023125](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details/blob/main/zelk12__MT2-Gen5-gemma-2-9B/results_2024-12-25T21-16-19.023125.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"exact_match,none\": 0.0634441087613293,\n \"exact_match_stderr,none\"\
: 0.006514547849699374,\n \"acc_norm,none\": 0.5484498637955636,\n \
\ \"acc_norm_stderr,none\": 0.005325974810950849,\n \"inst_level_loose_acc,none\"\
: 0.841726618705036,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"acc,none\": 0.43018617021276595,\n \"acc_stderr,none\"\
: 0.004513816000062881,\n \"inst_level_strict_acc,none\": 0.8141486810551559,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7356746765249538,\n \"prompt_level_strict_acc_stderr,none\": 0.018976469193346637,\n\
\ \"prompt_level_loose_acc,none\": 0.7707948243992606,\n \"\
prompt_level_loose_acc_stderr,none\": 0.018087757424955286,\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.6056240236070126,\n \"acc_norm_stderr,none\": 0.0061087361491088685,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6310160427807486,\n\
\ \"acc_norm_stderr,none\": 0.03538078548260318\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.64,\n\
\ \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\":\
\ 0.03063032594455827\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.68,\n \
\ \"acc_norm_stderr,none\": 0.02956172495524098\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542941\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\":\
\ 0.03010450339231644\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6027397260273972,\n \"acc_norm_stderr,none\": 0.040636704038880346\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.728,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.784,\n \
\ \"acc_norm_stderr,none\": 0.02607865766373279\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.651685393258427,\n\
\ \"acc_norm_stderr,none\": 0.035811144737534356\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\":\
\ 0.02496069198917196\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\":\
\ 0.02915021337415965\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.52,\n \
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"\
leaderboard_gpqa\": {\n \"acc_norm,none\": 0.35151006711409394,\n \
\ \"acc_norm_stderr,none\": 0.013838506941827745,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3838383838383838,\n \"acc_norm_stderr,none\": 0.03464881675016338\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3516483516483517,\n\
\ \"acc_norm_stderr,none\": 0.02045320407062836\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33705357142857145,\n \"acc_norm_stderr,none\"\
: 0.02235810146577637\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7356746765249538,\n \"prompt_level_strict_acc_stderr,none\": 0.018976469193346637,\n\
\ \"inst_level_strict_acc,none\": 0.8141486810551559,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7707948243992606,\n \"prompt_level_loose_acc_stderr,none\": 0.018087757424955286,\n\
\ \"inst_level_loose_acc,none\": 0.841726618705036,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.0634441087613293,\n \"exact_match_stderr,none\": 0.006514547849699374,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.13680781758957655,\n \"exact_match_stderr,none\"\
: 0.019644839884927132\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_geometry_hard\": {\n \"\
alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.022727272727272728,\n \"exact_match_stderr,none\": 0.0130210469090637\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.010714285714285714,\n \"exact_match_stderr,none\"\
: 0.006163684194761604\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.06493506493506493,\n \"exact_match_stderr,none\"\
: 0.01992116854149014\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.13471502590673576,\n \"exact_match_stderr,none\"\
: 0.02463978909770943\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.43018617021276595,\n \"acc_stderr,none\"\
: 0.004513816000062881\n },\n \"leaderboard_musr\": {\n \
\ \"acc_norm,none\": 0.42328042328042326,\n \"acc_norm_stderr,none\"\
: 0.017504600542222602,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.564,\n\
\ \"acc_norm_stderr,none\": 0.03142556706028136\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28125,\n \"acc_norm_stderr,none\"\
: 0.028155620586096754\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.428,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ }\n },\n \"leaderboard\": {\n \"exact_match,none\": 0.0634441087613293,\n\
\ \"exact_match_stderr,none\": 0.006514547849699374,\n \"acc_norm,none\"\
: 0.5484498637955636,\n \"acc_norm_stderr,none\": 0.005325974810950849,\n\
\ \"inst_level_loose_acc,none\": 0.841726618705036,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.43018617021276595,\n \"acc_stderr,none\"\
: 0.004513816000062881,\n \"inst_level_strict_acc,none\": 0.8141486810551559,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7356746765249538,\n \"prompt_level_strict_acc_stderr,none\": 0.018976469193346637,\n\
\ \"prompt_level_loose_acc,none\": 0.7707948243992606,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.018087757424955286,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6056240236070126,\n \"acc_norm_stderr,none\"\
: 0.0061087361491088685,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6310160427807486,\n \"acc_norm_stderr,none\"\
: 0.03538078548260318\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \"leaderboard_bbh_formal_fallacies\"\
: {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\"\
: 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n },\n \"\
leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.68,\n \"acc_norm_stderr,none\": 0.02956172495524098\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6027397260273972,\n\
\ \"acc_norm_stderr,none\": 0.040636704038880346\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.728,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.651685393258427,\n \"acc_norm_stderr,none\"\
: 0.035811144737534356\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.35151006711409394,\n\
\ \"acc_norm_stderr,none\": 0.013838506941827745,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3838383838383838,\n\
\ \"acc_norm_stderr,none\": 0.03464881675016338\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3516483516483517,\n \"acc_norm_stderr,none\": 0.02045320407062836\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33705357142857145,\n \"acc_norm_stderr,none\"\
: 0.02235810146577637\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7356746765249538,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018976469193346637,\n \
\ \"inst_level_strict_acc,none\": 0.8141486810551559,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7707948243992606,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.018087757424955286,\n \"inst_level_loose_acc,none\"\
: 0.841726618705036,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0634441087613293,\n\
\ \"exact_match_stderr,none\": 0.006514547849699374,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.13680781758957655,\n \"exact_match_stderr,none\": 0.019644839884927132\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.022727272727272728,\n \"exact_match_stderr,none\": 0.0130210469090637\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.010714285714285714,\n \"exact_match_stderr,none\": 0.006163684194761604\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.06493506493506493,\n \"exact_match_stderr,none\"\
: 0.01992116854149014\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.13471502590673576,\n \"exact_match_stderr,none\": 0.02463978909770943\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\": {\n\
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43018617021276595,\n\
\ \"acc_stderr,none\": 0.004513816000062881\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42328042328042326,\n \"acc_norm_stderr,none\"\
: 0.017504600542222602,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.28125,\n \
\ \"acc_norm_stderr,none\": 0.028155620586096754\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.428,\n \"acc_norm_stderr,none\": 0.031355968923772626\n }\n}\n```"
repo_url: https://huggingface.co/zelk12/MT2-Gen5-gemma-2-9B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_ifeval
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_ifeval_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T21-16-19.023125.jsonl'
- config_name: zelk12__MT2-Gen5-gemma-2-9B__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T21_16_19.023125
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T21-16-19.023125.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T21-16-19.023125.jsonl'
---
# Dataset Card for Evaluation run of zelk12/MT2-Gen5-gemma-2-9B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [zelk12/MT2-Gen5-gemma-2-9B](https://huggingface.co/zelk12/MT2-Gen5-gemma-2-9B)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details",
name="zelk12__MT2-Gen5-gemma-2-9B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
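If you want to see which configurations exist without opening the card, a minimal sketch (assuming the standard `datasets` API) is:

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate the available configurations (one per evaluated task).
configs = get_dataset_config_names("open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details")
print(len(configs), "configurations available")

# Load one configuration and inspect a single sample from the latest run.
data = load_dataset(
    "open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details",
    name="zelk12__MT2-Gen5-gemma-2-9B__leaderboard_ifeval",
    split="latest",
)
print(data[0])
```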
## Latest results
These are the [latest results from run 2024-12-25T21-16-19.023125](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT2-Gen5-gemma-2-9B-details/blob/main/zelk12__MT2-Gen5-gemma-2-9B/results_2024-12-25T21-16-19.023125.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.006514547849699374,
"acc_norm,none": 0.5484498637955636,
"acc_norm_stderr,none": 0.005325974810950849,
"inst_level_loose_acc,none": 0.841726618705036,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.43018617021276595,
"acc_stderr,none": 0.004513816000062881,
"inst_level_strict_acc,none": 0.8141486810551559,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7356746765249538,
"prompt_level_strict_acc_stderr,none": 0.018976469193346637,
"prompt_level_loose_acc,none": 0.7707948243992606,
"prompt_level_loose_acc_stderr,none": 0.018087757424955286,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6056240236070126,
"acc_norm_stderr,none": 0.0061087361491088685,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6310160427807486,
"acc_norm_stderr,none": 0.03538078548260318
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6027397260273972,
"acc_norm_stderr,none": 0.040636704038880346
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.35151006711409394,
"acc_norm_stderr,none": 0.013838506941827745,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3838383838383838,
"acc_norm_stderr,none": 0.03464881675016338
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3516483516483517,
"acc_norm_stderr,none": 0.02045320407062836
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33705357142857145,
"acc_norm_stderr,none": 0.02235810146577637
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7356746765249538,
"prompt_level_strict_acc_stderr,none": 0.018976469193346637,
"inst_level_strict_acc,none": 0.8141486810551559,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7707948243992606,
"prompt_level_loose_acc_stderr,none": 0.018087757424955286,
"inst_level_loose_acc,none": 0.841726618705036,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.006514547849699374,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.13680781758957655,
"exact_match_stderr,none": 0.019644839884927132
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.010714285714285714,
"exact_match_stderr,none": 0.006163684194761604
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.06493506493506493,
"exact_match_stderr,none": 0.01992116854149014
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.13471502590673576,
"exact_match_stderr,none": 0.02463978909770943
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43018617021276595,
"acc_stderr,none": 0.004513816000062881
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.017504600542222602,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28125,
"acc_norm_stderr,none": 0.028155620586096754
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.428,
"acc_norm_stderr,none": 0.031355968923772626
}
},
"leaderboard": {
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.006514547849699374,
"acc_norm,none": 0.5484498637955636,
"acc_norm_stderr,none": 0.005325974810950849,
"inst_level_loose_acc,none": 0.841726618705036,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.43018617021276595,
"acc_stderr,none": 0.004513816000062881,
"inst_level_strict_acc,none": 0.8141486810551559,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7356746765249538,
"prompt_level_strict_acc_stderr,none": 0.018976469193346637,
"prompt_level_loose_acc,none": 0.7707948243992606,
"prompt_level_loose_acc_stderr,none": 0.018087757424955286,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6056240236070126,
"acc_norm_stderr,none": 0.0061087361491088685,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6310160427807486,
"acc_norm_stderr,none": 0.03538078548260318
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.68,
"acc_norm_stderr,none": 0.02956172495524098
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6027397260273972,
"acc_norm_stderr,none": 0.040636704038880346
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.35151006711409394,
"acc_norm_stderr,none": 0.013838506941827745,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3838383838383838,
"acc_norm_stderr,none": 0.03464881675016338
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3516483516483517,
"acc_norm_stderr,none": 0.02045320407062836
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33705357142857145,
"acc_norm_stderr,none": 0.02235810146577637
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7356746765249538,
"prompt_level_strict_acc_stderr,none": 0.018976469193346637,
"inst_level_strict_acc,none": 0.8141486810551559,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7707948243992606,
"prompt_level_loose_acc_stderr,none": 0.018087757424955286,
"inst_level_loose_acc,none": 0.841726618705036,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.006514547849699374,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.13680781758957655,
"exact_match_stderr,none": 0.019644839884927132
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.010714285714285714,
"exact_match_stderr,none": 0.006163684194761604
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.06493506493506493,
"exact_match_stderr,none": 0.01992116854149014
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.13471502590673576,
"exact_match_stderr,none": 0.02463978909770943
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43018617021276595,
"acc_stderr,none": 0.004513816000062881
},
"leaderboard_musr": {
"acc_norm,none": 0.42328042328042326,
"acc_norm_stderr,none": 0.017504600542222602,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28125,
"acc_norm_stderr,none": 0.028155620586096754
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.428,
"acc_norm_stderr,none": 0.031355968923772626
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ljnlonoljpiljm/laion-gpt4v | ljnlonoljpiljm | "2024-12-25T21:22:17Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:18:46Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: url
dtype: string
- name: image
dtype: image
- name: caption
dtype: string
- name: detailed_caption
dtype: string
- name: tags
sequence: string
- name: dataset
dtype: string
- name: points
sequence:
- name: uuid
dtype: string
- name: x
dtype: float32
- name: y
dtype: float32
- name: label
dtype: string
- name: objects
sequence:
- name: uuid
dtype: string
- name: x_min
dtype: float32
- name: y_min
dtype: float32
- name: x_max
dtype: float32
- name: y_max
dtype: float32
- name: label
dtype: string
- name: image_width
dtype: int32
- name: image_height
dtype: int32
- name: aesthetic_score
dtype: float32
- name: sensitivity_score
dtype: float32
splits:
- name: train
num_bytes: 393196185.445
num_examples: 7535
download_size: 406163413
dataset_size: 393196185.445
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Gusarich/GenMath-0 | Gusarich | "2024-12-25T22:01:37Z" | 0 | 0 | [
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:n<1K",
"region:us",
"math"
] | [
"text2text-generation"
] | "2024-12-25T21:25:05Z" | ---
license: mit
task_categories:
- text2text-generation
language:
- en
tags:
- math
size_categories:
- n<1K
---
# GenMath Dataset
## Overview
The **GenMath-0** dataset is a comprehensive benchmark designed to evaluate the mathematical reasoning capabilities of AI models. The name "GenMath" reflects two key aspects of the dataset:
- **Generated**: The problems are synthetically created using advanced AI models, ensuring a diverse and challenging set of tasks.
- **General**: The dataset spans 86 distinct areas of mathematics, making it applicable to a wide range of mathematical domains and problem-solving tasks.
This is the first version of an ongoing project, with future iterations planned to include:
- Additional problems.
- Enhanced verification using even more advanced models.
- Coverage of new mathematical areas to ensure continuous improvement and relevance.
### Dataset Highlights
- **344 problems** with **numerical answers**.
- Covers **86 distinct areas of mathematics**.
- Categorized into **4 difficulty levels**: Easy, Medium, Hard, and Extreme.
- Balanced structure: Each area has exactly 4 problems, one at each difficulty level.
The GenMath dataset is designed to test models across a wide spectrum of mathematical reasoning, from foundational to advanced topics.
### Why Choose GenMath?
- **Diverse Coverage**: Problems span topics like Noncommutative Geometry, Analytic Number Theory, and many more.
- **Difficulty Gradation**: Problems are systematically categorized to reveal how models handle increasingly complex challenges.
- **Verified Answers**: Each problem has a verified numerical answer, determined by a consensus mechanism involving four independent runs of the **o1-mini** model (a minimal sketch of this kind of check is shown below).
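The snippet below is a minimal sketch of how such an answer-consensus check could work. It assumes a hypothetical `solver` callable (for example, a wrapper around an o1-mini API call) that returns a numerical answer for a single run, and accepts an answer only when all runs agree within a small tolerance; the exact verification pipeline used for GenMath-0 is not published here.
```python
from typing import Callable, Optional


def consensus_answer(
    problem: str,
    solver: Callable[[str], float],  # hypothetical single-run solver, e.g. an o1-mini API wrapper
    runs: int = 4,
    tol: float = 1e-6,
) -> Optional[float]:
    """Return an answer only if every independent run agrees within `tol`."""
    answers = [solver(problem) for _ in range(runs)]
    first = answers[0]
    if all(abs(a - first) <= tol for a in answers):
        return first
    return None  # no consensus: the problem would be discarded or re-verified
```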
---
## Dataset Structure
Each problem is represented as a JSON object with the following fields:
- **`area`**: The area of mathematics the problem belongs to (e.g., "Noncommutative Geometry").
- **`difficulty`**: The difficulty level (Easy, Medium, Hard, Extreme).
- **`problem`**: The LaTeX-formatted mathematical problem statement as a string.
- **`answer`**: The numerical answer to the problem.
### Example Entry
```json
{
"area": "Noncommutative Geometry",
"difficulty": "Medium",
"problem": "Consider a noncommutative two-dimensional torus \( \mathcal{A}_\theta \) with parameter \( \theta = \frac{1}{3} \). Let \( U \) and \( V \) be unitary operators satisfying the relation \( UV = e^{2\pi i \theta} VU \). Define the operator \( X = U + V + U^* + V^* \). Compute the trace of the operator \( X^2 \) in the GNS representation corresponding to the canonical trace state on \( \mathcal{A}_\theta \). What is the value of this trace?",
"answer": 4.0
}
```
---
## Usage
The dataset can be loaded with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Gusarich/GenMath-0")
print(dataset["train"][0])
```
The dataset is stored in a single JSON file and can also be accessed directly.
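Once loaded, the standard `datasets` filtering API works on the fields documented above. The example below selects problems by `area` and `difficulty` (the values shown are taken from the example entry; adjust them as needed):
```python
from datasets import load_dataset

dataset = load_dataset("Gusarich/GenMath-0", split="train")

# Keep only the Hard and Extreme problems from a single area
subset = dataset.filter(
    lambda ex: ex["area"] == "Noncommutative Geometry"
    and ex["difficulty"] in ("Hard", "Extreme")
)
for ex in subset:
    print(ex["difficulty"], ex["answer"])
```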
---
## Licensing
This dataset is released under the [MIT License](LICENSE). You are free to use, modify, and distribute it, provided proper credit is given.
---
## Citation
If you use this dataset, please cite it as:
```
@dataset{sedov_genmath0,
title={GenMath-0 Dataset},
author={Daniil Sedov},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/datasets/Gusarich/GenMath-0}
}
```
|
barber82/criminaltest | barber82 | "2024-12-25T21:31:06Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-25T21:30:00Z" | ---
license: mit
---
|
Mah9sh/Persian_Relationship_Conversations | Mah9sh | "2024-12-25T21:34:36Z" | 0 | 0 | [
"license:cc-by-sa-4.0",
"region:us"
] | null | "2024-12-25T21:34:36Z" | ---
license: cc-by-sa-4.0
---
|
spiralworks/lg_domain_gen_2015_2020_test_3 | spiralworks | "2024-12-25T22:01:16Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:35:47Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: abstract
dtype: string
- name: year
dtype: string
- name: venue
dtype: string
- name: keywords
sequence: string
- name: pdf_url
dtype: string
- name: forum_url
dtype: string
- name: forum_raw_text
dtype: string
- name: reviews_raw_text
dtype: string
- name: average_rating
dtype: float64
- name: average_confidence
dtype: float64
- name: reviews
dtype: string
splits:
- name: train
num_bytes: 47044074
num_examples: 1208
download_size: 22350411
dataset_size: 47044074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/stackoverflow_5000tasks_.75p | mlfoundations-dev | "2024-12-25T22:08:18Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:36:32Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: seed_tasks
dtype: string
- name: generated_command_prompt
dtype: string
- name: generated_command
dtype: string
- name: command_response
dtype: string
- name: final_instruction
dtype: string
- name: final_response
dtype: string
splits:
- name: train
num_bytes: 768192188
num_examples: 50000
download_size: 389392825
dataset_size: 768192188
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
smmrokn/reddit_dataset_44 | smmrokn | "2024-12-25T21:42:43Z" | 0 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:language-modeling",
"task_ids:text-scoring",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:extractive-qa",
"task_ids:news-articles-summarization",
"multilinguality:multilingual",
"source_datasets:original",
"license:mit",
"region:us"
] | [
"text-classification",
"token-classification",
"question-answering",
"summarization",
"text-generation"
] | "2024-12-25T21:42:40Z" | ---
license: mit
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
- token-classification
- question-answering
- summarization
- text-generation
task_ids:
- sentiment-analysis
- topic-classification
- named-entity-recognition
- language-modeling
- text-scoring
- multi-class-classification
- multi-label-classification
- extractive-qa
- news-articles-summarization
---
# Bittensor Subnet 13 Reddit Dataset
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/bittensor.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
<center>
<img src="https://huggingface.co/datasets/macrocosm-os/images/resolve/main/macrocosmos-black.png" alt="Data-universe: The finest collection of social media data the web has to offer">
</center>
## Dataset Description
- **Repository:** smmrokn/reddit_dataset_44
- **Subnet:** Bittensor Subnet 13
- **Miner Hotkey:** 0
### Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks.
For more information about the dataset, please visit the [official repository](https://github.com/macrocosm-os/data-universe).
### Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs.
For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
### Languages
Primary language: English. Because the data is contributed by a decentralized network of miners, content in other languages may also appear.
## Dataset Structure
### Data Instances
Each instance represents a single Reddit post or comment with the following fields:
### Data Fields
- `text` (string): The main content of the Reddit post or comment.
- `label` (string): Sentiment or topic category of the content.
- `dataType` (string): Indicates whether the entry is a post or a comment.
- `communityName` (string): The name of the subreddit where the content was posted.
- `datetime` (string): The date when the content was posted or commented.
- `username_encoded` (string): An encoded version of the username to maintain user privacy.
- `url_encoded` (string): An encoded version of any URLs included in the content.
### Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
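A simple approach is to split on the `datetime` field yourself, for example holding out the most recent content for evaluation. The sketch below assumes ISO-8601 date strings (which compare correctly as plain strings); adjust the cut-off and parsing to the actual format in the data:
```python
from datasets import load_dataset

dataset = load_dataset("smmrokn/reddit_dataset_44", split="train")

# Hold out everything posted on or after a chosen cut-off date for evaluation.
CUTOFF = "2024-12-25"

train_split = dataset.filter(lambda ex: ex["datetime"] < CUTOFF)
eval_split = dataset.filter(lambda ex: ex["datetime"] >= CUTOFF)

print(len(train_split), "train examples,", len(eval_split), "eval examples")
```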
## Dataset Creation
### Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
### Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
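The actual encoding scheme used for `username_encoded` and `url_encoded` is not specified in this card; the sketch below only illustrates one common way such a one-way, privacy-preserving encoding could be implemented (a salted SHA-256 digest), not the method used by Subnet 13 miners:
```python
import hashlib


def encode_identifier(value: str, salt: str) -> str:
    """Illustrative only: a salted SHA-256 digest as a one-way encoding."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()


print(encode_identifier("example_user", salt="subnet13"))
```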
## Considerations for Using the Data
### Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
### Limitations
- Data quality may vary due to the nature of user-generated social media content.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
## Additional Information
### Licensing Information
The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.
### Citation Information
If you use this dataset in your research, please cite it as follows:
```
@misc{smmrokn2024datauniversereddit_dataset_44,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={smmrokn},
year={2024},
url={https://huggingface.co/datasets/smmrokn/reddit_dataset_44},
}
```
### Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
## Dataset Statistics
[This section is automatically updated]
- **Total Instances:** 755
- **Date Range:** 2024-12-25T00:00:00Z to 2024-12-25T00:00:00Z
- **Last Updated:** 2024-12-25T21:42:42Z
### Data Distribution
- Posts: 0.00%
- Comments: 100.00%
### Top 10 Subreddits
For full statistics, please refer to the `stats.json` file in the repository.
| Rank | Topic | Total Count | Percentage |
|------|-------|-------------|-------------|
| 1 | r/science | 755 | 100.00% |
## Update History
| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2024-12-25T21:42:42Z | 755 | 755 |
|
mlfoundations-dev/stackoverflow_5000tasks_1p | mlfoundations-dev | "2024-12-25T22:08:16Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:43:58Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: completion
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: seed_tasks
dtype: string
- name: generated_command_prompt
dtype: string
- name: generated_command
dtype: string
- name: command_response
dtype: string
- name: final_instruction
dtype: string
- name: final_response
dtype: string
splits:
- name: train
num_bytes: 769102396
num_examples: 50000
download_size: 391887503
dataset_size: 769102396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ljnlonoljpiljm/laion-gpt4v-from-lavis | ljnlonoljpiljm | "2024-12-25T21:50:36Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:44:58Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: url
dtype: string
- name: image
dtype: image
- name: caption
dtype: string
- name: detailed_caption
dtype: string
- name: tags
sequence: string
- name: dataset
dtype: string
- name: points
sequence:
- name: uuid
dtype: string
- name: x
dtype: float32
- name: y
dtype: float32
- name: label
dtype: string
- name: objects
sequence:
- name: uuid
dtype: string
- name: x_min
dtype: float32
- name: y_min
dtype: float32
- name: x_max
dtype: float32
- name: y_max
dtype: float32
- name: label
dtype: string
- name: image_width
dtype: int32
- name: image_height
dtype: int32
- name: aesthetic_score
dtype: float32
- name: sensitivity_score
dtype: float32
splits:
- name: train
num_bytes: 11023443187.92
num_examples: 217160
download_size: 11154001642
dataset_size: 11023443187.92
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen1_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-25T21:49:21Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:49:13Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 810777
num_examples: 1000
download_size: 430550
dataset_size: 810777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dogtooth/uf_tulu_2_uf_iter2_2 | dogtooth | "2024-12-25T21:54:13Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T21:53:57Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_completion
dtype: string
- name: reference_completion
dtype: string
splits:
- name: train
num_bytes: 687519266
num_examples: 122270
download_size: 266586960
dataset_size: 687519266
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details | open-llm-leaderboard | "2024-12-25T22:04:02Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:00:56Z" | ---
pretty_name: Evaluation run of mergekit-community/mergekit-ties-ykqemwr
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mergekit-community/mergekit-ties-ykqemwr](https://huggingface.co/mergekit-community/mergekit-ties-ykqemwr)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details\"\
,\n\tname=\"mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T22-00-54.914140](https://huggingface.co/datasets/open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details/blob/main/mergekit-community__mergekit-ties-ykqemwr/results_2024-12-25T22-00-54.914140.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.2846580406654344,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.0194187691064861,\n \"acc_norm,none\"\
: 0.4968218964846284,\n \"acc_norm_stderr,none\": 0.005359611407101173,\n\
\ \"inst_level_strict_acc,none\": 0.4352517985611511,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.3734208776595745,\n\
\ \"acc_stderr,none\": 0.004409977709525806,\n \"prompt_level_loose_acc,none\"\
: 0.3345656192236599,\n \"prompt_level_loose_acc_stderr,none\": 0.02030469137804569,\n\
\ \"inst_level_loose_acc,none\": 0.47961630695443647,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"exact_match,none\":\
\ 0.11858006042296072,\n \"exact_match_stderr,none\": 0.00847775132311359,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.543134872417983,\n \"acc_norm_stderr,none\"\
: 0.006187798698597206,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.832,\n\
\ \"acc_norm_stderr,none\": 0.023692813205492536\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6203208556149733,\n \"acc_norm_stderr,none\"\
: 0.03558443628801667\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.592,\n\
\ \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_geometric_shapes\":\
\ {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.64,\n \
\ \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\":\
\ 0.03151438761115349\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.756,\n \"acc_norm_stderr,none\":\
\ 0.02721799546455311\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.748,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.656,\n \
\ \"acc_norm_stderr,none\": 0.03010450339231644\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.541095890410959,\n \"acc_norm_stderr,none\": 0.041382249050673094\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.648,\n \
\ \"acc_norm_stderr,none\": 0.030266288057359866\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6797752808988764,\n \"acc_norm_stderr,none\"\
: 0.03506900770722058\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.364,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\":\
\ 0.029462657598578648\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3221476510067114,\n\
\ \"acc_norm_stderr,none\": 0.013550646507334637,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.31313131313131315,\n \"acc_norm_stderr,none\": 0.033042050878136546\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.31868131868131866,\n\
\ \"acc_norm_stderr,none\": 0.019959754728358932\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33035714285714285,\n \"acc_norm_stderr,none\"\
: 0.022246398347131557\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.2846580406654344,\n \"prompt_level_strict_acc_stderr,none\": 0.0194187691064861,\n\
\ \"inst_level_strict_acc,none\": 0.4352517985611511,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.3345656192236599,\n \"prompt_level_loose_acc_stderr,none\": 0.02030469137804569,\n\
\ \"inst_level_loose_acc,none\": 0.47961630695443647,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.11858006042296072,\n \"exact_match_stderr,none\"\
: 0.00847775132311359,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.1986970684039088,\n\
\ \"exact_match_stderr,none\": 0.022810425277602825\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.07317073170731707,\n \"exact_match_stderr,none\": 0.023577005978097667\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.03787878787878788,\n\
\ \"exact_match_stderr,none\": 0.016679279394712563\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.02142857142857143,\n \"exact_match_stderr,none\": 0.008669434577665551\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.07142857142857142,\n\
\ \"exact_match_stderr,none\": 0.020820824576076338\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.3005181347150259,\n \"exact_match_stderr,none\"\
: 0.033088185944157515\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05185185185185185,\n \"exact_match_stderr,none\"\
: 0.019154368449050496\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3734208776595745,\n\
\ \"acc_stderr,none\": 0.004409977709525806\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4193121693121693,\n \"acc_norm_stderr,none\"\
: 0.017520624624081504,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.576,\n\
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.359375,\n \"acc_norm_stderr,none\"\
: 0.0300473227657999\n },\n \"leaderboard_musr_team_allocation\":\
\ {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.2846580406654344,\n \"prompt_level_strict_acc_stderr,none\": 0.0194187691064861,\n\
\ \"acc_norm,none\": 0.4968218964846284,\n \"acc_norm_stderr,none\"\
: 0.005359611407101173,\n \"inst_level_strict_acc,none\": 0.4352517985611511,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\":\
\ 0.3734208776595745,\n \"acc_stderr,none\": 0.004409977709525806,\n \
\ \"prompt_level_loose_acc,none\": 0.3345656192236599,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.02030469137804569,\n \"inst_level_loose_acc,none\": 0.47961630695443647,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"exact_match,none\"\
: 0.11858006042296072,\n \"exact_match_stderr,none\": 0.00847775132311359,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \
\ \"acc_norm,none\": 0.543134872417983,\n \"acc_norm_stderr,none\": 0.006187798698597206,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.832,\n \"acc_norm_stderr,none\": 0.023692813205492536\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6203208556149733,\n \"acc_norm_stderr,none\"\
: 0.03558443628801667\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.748,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.392,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.541095890410959,\n\
\ \"acc_norm_stderr,none\": 0.041382249050673094\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.648,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6797752808988764,\n \"acc_norm_stderr,none\"\
: 0.03506900770722058\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3221476510067114,\n\
\ \"acc_norm_stderr,none\": 0.013550646507334637,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.31313131313131315,\n\
\ \"acc_norm_stderr,none\": 0.033042050878136546\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.31868131868131866,\n \"acc_norm_stderr,none\": 0.019959754728358932\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33035714285714285,\n \"acc_norm_stderr,none\"\
: 0.022246398347131557\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.2846580406654344,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.0194187691064861,\n \"\
inst_level_strict_acc,none\": 0.4352517985611511,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.3345656192236599,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02030469137804569,\n \"inst_level_loose_acc,none\"\
: 0.47961630695443647,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.11858006042296072,\n\
\ \"exact_match_stderr,none\": 0.00847775132311359,\n \"alias\": \"\
\ - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.1986970684039088,\n \"exact_match_stderr,none\": 0.022810425277602825\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.07317073170731707,\n \"exact_match_stderr,none\": 0.023577005978097667\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.03787878787878788,\n \"exact_match_stderr,none\"\
: 0.016679279394712563\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.02142857142857143,\n \"exact_match_stderr,none\"\
: 0.008669434577665551\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.07142857142857142,\n \"exact_match_stderr,none\": 0.020820824576076338\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.3005181347150259,\n \"exact_match_stderr,none\"\
: 0.033088185944157515\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05185185185185185,\n \"exact_match_stderr,none\": 0.019154368449050496\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3734208776595745,\n \"acc_stderr,none\": 0.004409977709525806\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4193121693121693,\n\
\ \"acc_norm_stderr,none\": 0.017520624624081504,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.359375,\n \"acc_norm_stderr,none\": 0.0300473227657999\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.324,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ }\n}\n```"
repo_url: https://huggingface.co/mergekit-community/mergekit-ties-ykqemwr
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_ifeval
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-00-54.914140.jsonl'
- config_name: mergekit-community__mergekit-ties-ykqemwr__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T22_00_54.914140
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-00-54.914140.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-00-54.914140.jsonl'
---
# Dataset Card for Evaluation run of mergekit-community/mergekit-ties-ykqemwr
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mergekit-community/mergekit-ties-ykqemwr](https://huggingface.co/mergekit-community/mergekit-ties-ykqemwr)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details",
name="mergekit-community__mergekit-ties-ykqemwr__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
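Each of the 38 configurations corresponds to one evaluated task; you can enumerate them programmatically before choosing which one to load:
```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details"
)
print(f"{len(configs)} configurations")
for name in configs:
    print(name)
```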
## Latest results
These are the [latest results from run 2024-12-25T22-00-54.914140](https://huggingface.co/datasets/open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details/blob/main/mergekit-community__mergekit-ties-ykqemwr/results_2024-12-25T22-00-54.914140.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.2846580406654344,
"prompt_level_strict_acc_stderr,none": 0.0194187691064861,
"acc_norm,none": 0.4968218964846284,
"acc_norm_stderr,none": 0.005359611407101173,
"inst_level_strict_acc,none": 0.4352517985611511,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3734208776595745,
"acc_stderr,none": 0.004409977709525806,
"prompt_level_loose_acc,none": 0.3345656192236599,
"prompt_level_loose_acc_stderr,none": 0.02030469137804569,
"inst_level_loose_acc,none": 0.47961630695443647,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.00847775132311359,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.543134872417983,
"acc_norm_stderr,none": 0.006187798698597206,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6203208556149733,
"acc_norm_stderr,none": 0.03558443628801667
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.541095890410959,
"acc_norm_stderr,none": 0.041382249050673094
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.648,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6797752808988764,
"acc_norm_stderr,none": 0.03506900770722058
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3221476510067114,
"acc_norm_stderr,none": 0.013550646507334637,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.31313131313131315,
"acc_norm_stderr,none": 0.033042050878136546
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31868131868131866,
"acc_norm_stderr,none": 0.019959754728358932
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33035714285714285,
"acc_norm_stderr,none": 0.022246398347131557
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.2846580406654344,
"prompt_level_strict_acc_stderr,none": 0.0194187691064861,
"inst_level_strict_acc,none": 0.4352517985611511,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.3345656192236599,
"prompt_level_loose_acc_stderr,none": 0.02030469137804569,
"inst_level_loose_acc,none": 0.47961630695443647,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.00847775132311359,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.1986970684039088,
"exact_match_stderr,none": 0.022810425277602825
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.07317073170731707,
"exact_match_stderr,none": 0.023577005978097667
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.03787878787878788,
"exact_match_stderr,none": 0.016679279394712563
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.020820824576076338
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.3005181347150259,
"exact_match_stderr,none": 0.033088185944157515
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05185185185185185,
"exact_match_stderr,none": 0.019154368449050496
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3734208776595745,
"acc_stderr,none": 0.004409977709525806
},
"leaderboard_musr": {
"acc_norm,none": 0.4193121693121693,
"acc_norm_stderr,none": 0.017520624624081504,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.359375,
"acc_norm_stderr,none": 0.0300473227657999
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.2846580406654344,
"prompt_level_strict_acc_stderr,none": 0.0194187691064861,
"acc_norm,none": 0.4968218964846284,
"acc_norm_stderr,none": 0.005359611407101173,
"inst_level_strict_acc,none": 0.4352517985611511,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3734208776595745,
"acc_stderr,none": 0.004409977709525806,
"prompt_level_loose_acc,none": 0.3345656192236599,
"prompt_level_loose_acc_stderr,none": 0.02030469137804569,
"inst_level_loose_acc,none": 0.47961630695443647,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.00847775132311359,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.543134872417983,
"acc_norm_stderr,none": 0.006187798698597206,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6203208556149733,
"acc_norm_stderr,none": 0.03558443628801667
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.392,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.541095890410959,
"acc_norm_stderr,none": 0.041382249050673094
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.648,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6797752808988764,
"acc_norm_stderr,none": 0.03506900770722058
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3221476510067114,
"acc_norm_stderr,none": 0.013550646507334637,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.31313131313131315,
"acc_norm_stderr,none": 0.033042050878136546
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31868131868131866,
"acc_norm_stderr,none": 0.019959754728358932
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33035714285714285,
"acc_norm_stderr,none": 0.022246398347131557
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.2846580406654344,
"prompt_level_strict_acc_stderr,none": 0.0194187691064861,
"inst_level_strict_acc,none": 0.4352517985611511,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.3345656192236599,
"prompt_level_loose_acc_stderr,none": 0.02030469137804569,
"inst_level_loose_acc,none": 0.47961630695443647,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11858006042296072,
"exact_match_stderr,none": 0.00847775132311359,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.1986970684039088,
"exact_match_stderr,none": 0.022810425277602825
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.07317073170731707,
"exact_match_stderr,none": 0.023577005978097667
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.03787878787878788,
"exact_match_stderr,none": 0.016679279394712563
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.02142857142857143,
"exact_match_stderr,none": 0.008669434577665551
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.020820824576076338
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.3005181347150259,
"exact_match_stderr,none": 0.033088185944157515
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05185185185185185,
"exact_match_stderr,none": 0.019154368449050496
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3734208776595745,
"acc_stderr,none": 0.004409977709525806
},
"leaderboard_musr": {
"acc_norm,none": 0.4193121693121693,
"acc_norm_stderr,none": 0.017520624624081504,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.359375,
"acc_norm_stderr,none": 0.0300473227657999
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.324,
"acc_norm_stderr,none": 0.029658294924545567
}
}
```
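To work with these aggregated numbers programmatically instead of copying them from the card, the results JSON linked in the "Latest results" section can be downloaded directly from the Hub. This is a minimal sketch using `huggingface_hub`; it assumes the file follows the structure shown in the listing above (an "all" block mapping task names to metric dictionaries):
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file referenced in the "Latest results" section.
results_path = hf_hub_download(
    repo_id="open-llm-leaderboard/mergekit-community__mergekit-ties-ykqemwr-details",
    filename="mergekit-community__mergekit-ties-ykqemwr/results_2024-12-25T22-00-54.914140.json",
    repo_type="dataset",
)

with open(results_path) as f:
    results = json.load(f)

# Print one headline metric per task from the "all" block, when present.
for task, metrics in results["all"].items():
    for key in ("acc_norm,none", "exact_match,none", "acc,none"):
        if key in metrics:
            print(task, key, metrics[key])
            break
```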
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
davidberenstein1957/my-distiset-b845cf19 | davidberenstein1957 | "2024-12-25T22:03:00Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-25T22:02:57Z" | ---
size_categories: 1K<n<10K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': business-and-industrial
'1': books-and-literature
'2': beauty-and-fitness
'3': autos-and-vehicles
'4': people-and-society
'5': sports
'6': shopping
'7': online-communities
'8': pets-and-animals
'9': internet-and-telecom
'10': home-and-garden
'11': adult
'12': science
'13': food-and-drink
'14': real-estate
'15': news
'16': jobs-and-education
'17': health
'18': hobbies-and-leisure
'19': games
'20': computers-and-electronics
'21': arts-and-entertainment
'22': travel-and-transportation
'23': finance
'24': law-and-government
'25': sensitive-subjects
splits:
- name: train
num_bytes: 557337
num_examples: 1000
download_size: 335830
dataset_size: 557337
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-b845cf19
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-b845cf19/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/davidberenstein1957/my-distiset-b845cf19/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 17,
"text": "Sexual health is a broad topic that involves physical, emotional, and social well-being. Proper sexual health can lead to better relationships and overall life satisfaction. It\u0027s important for individuals to understand the risks associated with certain behaviors and to practice safe sex to prevent sexually transmitted infections (STIs). Comprehensive education about human sexuality includes understanding consent, respecting individual boundaries, and knowing how to access appropriate healthcare services."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("davidberenstein1957/my-distiset-b845cf19", "default")
```
Or simply as follows, since there is only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("davidberenstein1957/my-distiset-b845cf19")
```
</details>
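Because `label` is stored as an integer-backed class label (the id-to-name mapping is listed in the YAML header above), it is often convenient to decode it back to a readable name. A minimal sketch using the standard `datasets` API, assuming the default configuration:
```python
from datasets import load_dataset

ds = load_dataset("davidberenstein1957/my-distiset-b845cf19", split="train")

# "label" is a ClassLabel feature, so integer ids map back to category names.
label_feature = ds.features["label"]
print(label_feature.int2str(17))  # e.g. "health", per the mapping in the card header

# Add a human-readable column alongside the integer id.
ds = ds.map(lambda row: {"label_name": label_feature.int2str(row["label"])})
print(ds[0]["label_name"])
```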
|
spiralworks/lg_domain_gen_2015_2020_test_4 | spiralworks | "2024-12-25T22:35:28Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:04:17Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: note_id
dtype: string
- name: forum
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: venue
dtype: string
- name: year
dtype: string
- name: abstract
dtype: string
- name: keywords
sequence: string
- name: pdf_url
dtype: string
- name: forum_url
dtype: string
- name: forum_raw_text
dtype: string
- name: reviews_raw_text
dtype: string
- name: average_rating
dtype: float64
- name: average_confidence
dtype: float64
- name: reviews
dtype: string
splits:
- name: train
num_bytes: 106261028
num_examples: 2788
download_size: 52103061
dataset_size: 106261028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZhangShenao/metamath_filtered | ZhangShenao | "2024-12-25T22:24:01Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:08:56Z" | ---
dataset_info:
features:
- name: type
dtype: string
- name: response
dtype: string
- name: original_question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 338668193.79173416
num_examples: 362065
download_size: 173645299
dataset_size: 338668193.79173416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BEE-spoke-data/bigpatent-all | BEE-spoke-data | "2024-12-25T22:51:42Z" | 0 | 0 | [
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:cc-by-4.0",
"region:us",
"legal"
] | [
"text2text-generation",
"summarization"
] | "2024-12-25T22:10:36Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 38364057181
num_examples: 1207222
- name: validation
num_bytes: 2115665259
num_examples: 67068
- name: test
num_bytes: 2129349762
num_examples: 67072
download_size: 17095133825
dataset_size: 42609072202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
license: cc-by-4.0
task_categories:
- text2text-generation
- summarization
language:
- en
tags:
- legal
source_dataset: NortheasternUniversity/big_patent
---
# BEE-spoke-data/bigpatent-all
The original [bigpatent](https://huggingface.co/datasets/NortheasternUniversity/big_patent) "all" subset converted to Hugging Face format, with columns renamed and some cleaning applied to the "summary" column. |
darkmater/huggingface-smol-course-preference-tuning-dataset | darkmater | "2024-12-25T22:17:12Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:17:11Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: grouped_generation
sequence: string
- name: model_name
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_text_generation_0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_input_text_generation_1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_output_text_generation_0
dtype: string
- name: raw_output_text_generation_1
dtype: string
splits:
- name: train
num_bytes: 3063
num_examples: 1
download_size: 21829
dataset_size: 3063
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dogtooth/uf_tulu_3_uf_iter2_3 | dogtooth | "2024-12-25T22:17:23Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:17:12Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_completion
dtype: string
- name: reference_completion
dtype: string
splits:
- name: train
num_bytes: 1042377812
num_examples: 183405
download_size: 345497555
dataset_size: 1042377812
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/Daemontatox__RA_Reasoner-details | open-llm-leaderboard | "2024-12-25T22:20:38Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:17:45Z" | ---
pretty_name: Evaluation run of Daemontatox/RA_Reasoner
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Daemontatox/RA_Reasoner](https://huggingface.co/Daemontatox/RA_Reasoner)\nThe\
\ dataset is composed of 38 configuration(s), each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/Daemontatox__RA_Reasoner-details\"\
,\n\tname=\"Daemontatox__RA_Reasoner__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-12-25T22-17-44.601562](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__RA_Reasoner-details/blob/main/Daemontatox__RA_Reasoner/results_2024-12-25T22-17-44.601562.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.657074340527578,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc_norm,none\": 0.5410559086781683,\n \"acc_norm_stderr,none\"\
: 0.005305430320587412,\n \"exact_match,none\": 0.20090634441087613,\n\
\ \"exact_match_stderr,none\": 0.010296205842383702,\n \"\
acc,none\": 0.43001994680851063,\n \"acc_stderr,none\": 0.004513602048699036,\n\
\ \"prompt_level_loose_acc,none\": 0.5415896487985212,\n \"\
prompt_level_loose_acc_stderr,none\": 0.021442010560476468,\n \"prompt_level_strict_acc,none\"\
: 0.5009242144177449,\n \"prompt_level_strict_acc_stderr,none\": 0.021516537387842545,\n\
\ \"inst_level_strict_acc,none\": 0.617505995203837,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6035410519007117,\n\
\ \"acc_norm_stderr,none\": 0.0061006515040985506,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.884,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6631016042780749,\n\
\ \"acc_norm_stderr,none\": 0.03465636737116503\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.624,\n \"acc_norm_stderr,none\":\
\ 0.03069633626739458\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.62,\n\
\ \"acc_norm_stderr,none\": 0.030760116042626098\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\":\
\ 0.030881038748993974\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.648,\n \"acc_norm_stderr,none\":\
\ 0.030266288057359866\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.589041095890411,\n \"acc_norm_stderr,none\": 0.04085902451640228\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.708,\n \"acc_norm_stderr,none\": 0.028814320402205634\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.668,\n \
\ \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7247191011235955,\n \"acc_norm_stderr,none\"\
: 0.03357269922538229\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.648,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.52,\n\
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.228,\n \"acc_norm_stderr,none\":\
\ 0.026587432487268498\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\":\
\ 0.030266288057359866\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3313758389261745,\n\
\ \"acc_norm_stderr,none\": 0.013618675523707283,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3787878787878788,\n \"acc_norm_stderr,none\": 0.03456088731993742\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.34615384615384615,\n\
\ \"acc_norm_stderr,none\": 0.020378589274523313\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2924107142857143,\n \"acc_norm_stderr,none\"\
: 0.02151461125992856\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.5009242144177449,\n \"prompt_level_strict_acc_stderr,none\": 0.02151653738784254,\n\
\ \"inst_level_strict_acc,none\": 0.617505995203837,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5415896487985212,\n \"prompt_level_loose_acc_stderr,none\": 0.021442010560476468,\n\
\ \"inst_level_loose_acc,none\": 0.657074340527578,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.20090634441087613,\n \"exact_match_stderr,none\": 0.010296205842383702,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.38436482084690554,\n \"exact_match_stderr,none\"\
: 0.027808196077636186\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.15447154471544716,\n \"exact_match_stderr,none\"\
: 0.03271963447587711\n },\n \"leaderboard_math_geometry_hard\": {\n\
\ \"alias\": \" - leaderboard_math_geometry_hard\",\n \"\
exact_match,none\": 0.07575757575757576,\n \"exact_match_stderr,none\"\
: 0.023119068741795586\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.04285714285714286,\n \"exact_match_stderr,none\"\
: 0.012125450612513602\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.2077922077922078,\n \"exact_match_stderr,none\"\
: 0.03280110453395389\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.35751295336787564,\n \"exact_match_stderr,none\"\
: 0.03458816042181008\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.044444444444444446,\n \"exact_match_stderr,none\"\
: 0.01780263602032457\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43001994680851063,\n\
\ \"acc_stderr,none\": 0.004513602048699036\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3955026455026455,\n \"acc_norm_stderr,none\"\
: 0.017448683517391476,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.528,\n\
\ \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2890625,\n \"acc_norm_stderr,none\"\
: 0.02838843806999465\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.372,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.657074340527578,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \
\ \"acc_norm,none\": 0.5410559086781683,\n \"acc_norm_stderr,none\"\
: 0.005305430320587412,\n \"exact_match,none\": 0.20090634441087613,\n \
\ \"exact_match_stderr,none\": 0.010296205842383702,\n \"acc,none\"\
: 0.43001994680851063,\n \"acc_stderr,none\": 0.004513602048699036,\n \
\ \"prompt_level_loose_acc,none\": 0.5415896487985212,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.021442010560476468,\n \"prompt_level_strict_acc,none\": 0.5009242144177449,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021516537387842545,\n \
\ \"inst_level_strict_acc,none\": 0.617505995203837,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6035410519007117,\n \"acc_norm_stderr,none\"\
: 0.0061006515040985506,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.884,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6631016042780749,\n \"acc_norm_stderr,none\"\
: 0.03465636737116503\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.648,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.589041095890411,\n\
\ \"acc_norm_stderr,none\": 0.04085902451640228\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.708,\n \"acc_norm_stderr,none\": 0.028814320402205634\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.668,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7247191011235955,\n \"acc_norm_stderr,none\"\
: 0.03357269922538229\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.648,\n \"acc_norm_stderr,none\": 0.030266288057359866\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.228,\n \"acc_norm_stderr,none\": 0.026587432487268498\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.352,\n \"acc_norm_stderr,none\": 0.030266288057359866\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3313758389261745,\n\
\ \"acc_norm_stderr,none\": 0.013618675523707283,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3787878787878788,\n\
\ \"acc_norm_stderr,none\": 0.03456088731993742\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.34615384615384615,\n \"acc_norm_stderr,none\": 0.020378589274523313\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2924107142857143,\n \"acc_norm_stderr,none\"\
: 0.02151461125992856\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.5009242144177449,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02151653738784254,\n \
\ \"inst_level_strict_acc,none\": 0.617505995203837,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.5415896487985212,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021442010560476468,\n \"inst_level_loose_acc,none\"\
: 0.657074340527578,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.20090634441087613,\n\
\ \"exact_match_stderr,none\": 0.010296205842383702,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.38436482084690554,\n \"exact_match_stderr,none\": 0.027808196077636186\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.15447154471544716,\n \"exact_match_stderr,none\": 0.03271963447587711\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.07575757575757576,\n \"exact_match_stderr,none\"\
: 0.023119068741795586\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.04285714285714286,\n \"exact_match_stderr,none\"\
: 0.012125450612513602\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.2077922077922078,\n \"exact_match_stderr,none\": 0.03280110453395389\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.35751295336787564,\n \"exact_match_stderr,none\"\
: 0.03458816042181008\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.044444444444444446,\n \"exact_match_stderr,none\": 0.01780263602032457\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.43001994680851063,\n \"acc_stderr,none\": 0.004513602048699036\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.3955026455026455,\n\
\ \"acc_norm_stderr,none\": 0.017448683517391476,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.2890625,\n \"acc_norm_stderr,none\": 0.02838843806999465\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.372,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Daemontatox/RA_Reasoner
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_ifeval
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-17-44.601562.jsonl'
- config_name: Daemontatox__RA_Reasoner__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T22_17_44.601562
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-17-44.601562.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-17-44.601562.jsonl'
---
# Dataset Card for Evaluation run of Daemontatox/RA_Reasoner
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Daemontatox/RA_Reasoner](https://huggingface.co/Daemontatox/RA_Reasoner)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/Daemontatox__RA_Reasoner-details",
name="Daemontatox__RA_Reasoner__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-25T22-17-44.601562](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__RA_Reasoner-details/blob/main/Daemontatox__RA_Reasoner/results_2024-12-25T22-17-44.601562.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.657074340527578,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.5410559086781683,
"acc_norm_stderr,none": 0.005305430320587412,
"exact_match,none": 0.20090634441087613,
"exact_match_stderr,none": 0.010296205842383702,
"acc,none": 0.43001994680851063,
"acc_stderr,none": 0.004513602048699036,
"prompt_level_loose_acc,none": 0.5415896487985212,
"prompt_level_loose_acc_stderr,none": 0.021442010560476468,
"prompt_level_strict_acc,none": 0.5009242144177449,
"prompt_level_strict_acc_stderr,none": 0.021516537387842545,
"inst_level_strict_acc,none": 0.617505995203837,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6035410519007117,
"acc_norm_stderr,none": 0.0061006515040985506,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6631016042780749,
"acc_norm_stderr,none": 0.03465636737116503
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.648,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.589041095890411,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.708,
"acc_norm_stderr,none": 0.028814320402205634
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7247191011235955,
"acc_norm_stderr,none": 0.03357269922538229
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.648,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.228,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3313758389261745,
"acc_norm_stderr,none": 0.013618675523707283,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3787878787878788,
"acc_norm_stderr,none": 0.03456088731993742
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.34615384615384615,
"acc_norm_stderr,none": 0.020378589274523313
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2924107142857143,
"acc_norm_stderr,none": 0.02151461125992856
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5009242144177449,
"prompt_level_strict_acc_stderr,none": 0.02151653738784254,
"inst_level_strict_acc,none": 0.617505995203837,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5415896487985212,
"prompt_level_loose_acc_stderr,none": 0.021442010560476468,
"inst_level_loose_acc,none": 0.657074340527578,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.20090634441087613,
"exact_match_stderr,none": 0.010296205842383702,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.38436482084690554,
"exact_match_stderr,none": 0.027808196077636186
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.15447154471544716,
"exact_match_stderr,none": 0.03271963447587711
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.04285714285714286,
"exact_match_stderr,none": 0.012125450612513602
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.2077922077922078,
"exact_match_stderr,none": 0.03280110453395389
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.35751295336787564,
"exact_match_stderr,none": 0.03458816042181008
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.044444444444444446,
"exact_match_stderr,none": 0.01780263602032457
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43001994680851063,
"acc_stderr,none": 0.004513602048699036
},
"leaderboard_musr": {
"acc_norm,none": 0.3955026455026455,
"acc_norm_stderr,none": 0.017448683517391476,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2890625,
"acc_norm_stderr,none": 0.02838843806999465
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.657074340527578,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.5410559086781683,
"acc_norm_stderr,none": 0.005305430320587412,
"exact_match,none": 0.20090634441087613,
"exact_match_stderr,none": 0.010296205842383702,
"acc,none": 0.43001994680851063,
"acc_stderr,none": 0.004513602048699036,
"prompt_level_loose_acc,none": 0.5415896487985212,
"prompt_level_loose_acc_stderr,none": 0.021442010560476468,
"prompt_level_strict_acc,none": 0.5009242144177449,
"prompt_level_strict_acc_stderr,none": 0.021516537387842545,
"inst_level_strict_acc,none": 0.617505995203837,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6035410519007117,
"acc_norm_stderr,none": 0.0061006515040985506,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.884,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6631016042780749,
"acc_norm_stderr,none": 0.03465636737116503
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.648,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.589041095890411,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.708,
"acc_norm_stderr,none": 0.028814320402205634
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.668,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7247191011235955,
"acc_norm_stderr,none": 0.03357269922538229
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.648,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.228,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.352,
"acc_norm_stderr,none": 0.030266288057359866
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3313758389261745,
"acc_norm_stderr,none": 0.013618675523707283,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3787878787878788,
"acc_norm_stderr,none": 0.03456088731993742
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.34615384615384615,
"acc_norm_stderr,none": 0.020378589274523313
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2924107142857143,
"acc_norm_stderr,none": 0.02151461125992856
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5009242144177449,
"prompt_level_strict_acc_stderr,none": 0.02151653738784254,
"inst_level_strict_acc,none": 0.617505995203837,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5415896487985212,
"prompt_level_loose_acc_stderr,none": 0.021442010560476468,
"inst_level_loose_acc,none": 0.657074340527578,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.20090634441087613,
"exact_match_stderr,none": 0.010296205842383702,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.38436482084690554,
"exact_match_stderr,none": 0.027808196077636186
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.15447154471544716,
"exact_match_stderr,none": 0.03271963447587711
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.04285714285714286,
"exact_match_stderr,none": 0.012125450612513602
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.2077922077922078,
"exact_match_stderr,none": 0.03280110453395389
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.35751295336787564,
"exact_match_stderr,none": 0.03458816042181008
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.044444444444444446,
"exact_match_stderr,none": 0.01780263602032457
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43001994680851063,
"acc_stderr,none": 0.004513602048699036
},
"leaderboard_musr": {
"acc_norm,none": 0.3955026455026455,
"acc_norm_stderr,none": 0.017448683517391476,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.2890625,
"acc_norm_stderr,none": 0.02838843806999465
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
}
}
```
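To pull this aggregated results file directly, here is a minimal sketch using `huggingface_hub`; the repository id and file name are taken verbatim from the link above, and the printed keys are only inspected, not assumed:
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results JSON from the details repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/Daemontatox__RA_Reasoner-details",
    filename="Daemontatox__RA_Reasoner/results_2024-12-25T22-17-44.601562.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level structure; the snippet above shows part of its content.
print(list(results.keys()))
```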
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
BEE-spoke-data/reddit-title-body-hf | BEE-spoke-data | "2024-12-25T22:56:34Z" | 0 | 0 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"license:odc-by",
"region:us"
] | [
"text-generation",
"text2text-generation"
] | "2024-12-25T22:20:06Z" | ---
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
- name: subreddit
dtype: string
splits:
- name: train
num_bytes: 93764255230
num_examples: 127445911
download_size: 62576730319
dataset_size: 93764255230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: odc-by
task_categories:
- text-generation
- text2text-generation
---
# reddit-title-body-hf
[sentence-transformers/reddit-title-body](https://huggingface.co/datasets/sentence-transformers/reddit-title-body), re-published in Parquet format.
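With roughly 127M rows in the single train split (per the metadata above), streaming is usually the practical way to read it; a minimal sketch with the `datasets` library:
```python
from datasets import load_dataset

# Stream the train split so the ~63 GB of Parquet shards are not
# downloaded up front.
ds = load_dataset("BEE-spoke-data/reddit-title-body-hf", split="train", streaming=True)

# Each record has the three columns declared above: title, body, subreddit.
for row in ds.take(3):
    print(row["subreddit"], "-", row["title"][:80])
``` |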
MedAliFarhat/Mosaics_Final_Version | MedAliFarhat | "2024-12-25T22:27:19Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:23:55Z" | ---
dataset_info:
features:
- name: messages
dtype: string
- name: images
dtype: image
splits:
- name: train
num_bytes: 586915884.1044776
num_examples: 180
- name: test
num_bytes: 40066940.895522386
num_examples: 21
download_size: 625313984
dataset_size: 626982825.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
martineden/structurized_squad | martineden | "2024-12-25T23:01:46Z" | 0 | 0 | [
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"arxiv:2407.16434",
"arxiv:1606.05250",
"region:us",
"squad",
"question-answering",
"structurization"
] | [
"question-answering"
] | "2024-12-25T22:26:15Z" | ---
dataset_info:
features:
- name: squad_id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: structurized_context
dtype: string
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start_index
dtype: int64
- name: structurized_answer_start_index
dtype: int64
- name: answers
struct:
- name: answer_start
sequence: int32
- name: text
sequence: string
- name: structurized_answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 183235360
num_examples: 87599
- name: validation
num_bytes: 23392412
num_examples: 10570
download_size: 35504447
dataset_size: 206627772
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
license: apache-2.0
task_categories:
- question-answering
size_categories:
- 10K<n<100K
language:
- en
tags:
- squad
- question-answering
- structurization
---
# Structurized SQuAD Dataset
This dataset is derived from the Rajpurkar/SQuAD dataset by applying the structurization process described in the paper "Enhancing LLM's Cognition via Structurization" (Liu et al., 2024).
For details of the structurization process, see: https://arxiv.org/abs/2407.16434
## Construction process
* The Llama-3.1-8B-Instruct and Llama-3.3-70B-Instruct models were used for the structurization process.
* Some structurized contexts may be missing some answers or information.
* Structurized answer start indexes are obtained via Python string methods (e.g. `.find()`).
## Description
* The dataset can be used in place of the SQuAD dataset, having the same columns and values.
* In addition to the SQuAD data, there are extra columns (see the sketch below):
  * "answer_text" and "answer_start_index" are derived from the "answers" column of SQuAD.
  * "structurized_context" is derived from the "context" column by applying the structurization process.
  * "structurized_answer_start_index" is obtained from "structurized_context" via Python string methods, searching for "answer_text" within it.
  * "structurized_answers" is generated from "answer_text" and "structurized_answer_start_index" to match the format of the original "answers" column, so that preprocessing stays similar to the SQuAD dataset.
## Suggestions:
* If you want to fine-tune a model on the structurized contexts for an extractive question-answering task, rows with structurized_answer_start_index = -1 might be filtered out (see the sketch below).
* Some structurized contexts do contain the answer but are marked as "-1" (no answer) because of punctuation differences in the LLM output (e.g. "words, more words, and words"; the trailing comma causes the mismatch).
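A minimal filtering sketch with the `datasets` library, assuming the repository id shown on this card:
```python
from datasets import load_dataset

# Load the structurized SQuAD variant and drop rows where the answer
# could not be located in the structurized context.
ds = load_dataset("martineden/structurized_squad")
ds = ds.filter(lambda x: x["structurized_answer_start_index"] != -1)

print({split: len(ds[split]) for split in ds})
```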
## References:
Liu, K., Fu, Z., Chen, C., Zhang, W., Jiang, R., Zhou, F., Chen, Y., Wu, Y., & Ye, J. (2024). Enhancing LLM's Cognition via Structurization. http://arxiv.org/abs/2407.16434
Meta. (2024a, July 23). Llama-3.1-8B-Instruct Model Card. Retrieved from https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md
Meta. (2024b, December 6). Llama-3.3-70B-Instruct Model Card. Retrieved from https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md
Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. https://arxiv.org/abs/1606.05250 |
meteorinc/koch_test | meteorinc | "2024-12-25T22:26:22Z" | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | "2024-12-25T22:26:18Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "koch",
"total_episodes": 2,
"total_frames": 1166,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
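The tabular part of the dataset (the parquet files referenced by the default config above) can be read with the `datasets` library; a minimal sketch, noting that the videos are stored separately as MP4 files under the `video_path` template:
```python
from datasets import load_dataset

# The default config points at data/*/*.parquet, so this loads the
# per-frame records (actions, states, timestamps, indices) but not videos.
ds = load_dataset("meteorinc/koch_test", split="train")

print(ds.column_names)
print(ds[0]["action"])  # 6-dim action vector; joint names are listed in info.json
```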
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
1231czx/llama31_40k_ep3tmp10 | 1231czx | "2024-12-25T22:29:35Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:29:33Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 20641585
num_examples: 5000
download_size: 8230693
dataset_size: 20641585
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen2_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-25T22:29:47Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:29:43Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 801484
num_examples: 1000
download_size: 426509
dataset_size: 801484
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_40k_ep3tmp07 | 1231czx | "2024-12-25T22:30:09Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:30:07Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 57964880
num_examples: 15000
download_size: 19888362
dataset_size: 57964880
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_40k_ep3tmp0 | 1231czx | "2024-12-25T22:30:24Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:30:23Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 19109118
num_examples: 5000
download_size: 5954243
dataset_size: 19109118
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Shwetasingh123/llama_8B_4_bit_math_mean_logprob | Shwetasingh123 | "2024-12-25T22:34:07Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:34:05Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: unique_id
dtype: string
- name: generated_chain
dtype: string
- name: generated_answer
dtype: string
- name: is_correct
dtype: bool
- name: epoch
dtype: int64
- name: mean_logprob
dtype: float64
splits:
- name: train
num_bytes: 3528306
num_examples: 818
download_size: 2301890
dataset_size: 3528306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_test_chat_format_20k_only_firstwrong_and_regular_first_corr10k_ep3tmp10 | 1231czx | "2024-12-25T22:35:19Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:35:18Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 19606359
num_examples: 5000
download_size: 7751560
dataset_size: 19606359
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_chat_format_20k_ep3tmp10 | 1231czx | "2024-12-25T22:36:33Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:36:32Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18773857
num_examples: 5000
download_size: 7282699
dataset_size: 18773857
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/qingy2024__Fusion4-14B-Instruct-details | open-llm-leaderboard | "2024-12-25T22:41:20Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:38:09Z" | ---
pretty_name: Evaluation run of qingy2024/Fusion4-14B-Instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [qingy2024/Fusion4-14B-Instruct](https://huggingface.co/qingy2024/Fusion4-14B-Instruct)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/qingy2024__Fusion4-14B-Instruct-details\"\
,\n\tname=\"qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T22-38-08.784726](https://huggingface.co/datasets/open-llm-leaderboard/qingy2024__Fusion4-14B-Instruct-details/blob/main/qingy2024__Fusion4-14B-Instruct/results_2024-12-25T22-38-08.784726.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc,none\": 0.5193650265957447,\n \"acc_stderr,none\"\
: 0.004555050244694195,\n \"exact_match,none\": 0.3391238670694864,\n\
\ \"exact_match_stderr,none\": 0.011834946738836881,\n \"\
inst_level_strict_acc,none\": 0.8033573141486811,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc_norm,none\": 0.580749772992606,\n \"acc_norm_stderr,none\"\
: 0.005096783365172785,\n \"prompt_level_loose_acc,none\": 0.744916820702403,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.018758491950414184,\n \
\ \"prompt_level_strict_acc,none\": 0.7264325323475046,\n \"\
prompt_level_strict_acc_stderr,none\": 0.019183727107392846,\n \"inst_level_loose_acc,none\"\
: 0.8177458033573142,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6521437250477348,\n \"acc_norm_stderr,none\"\
: 0.005772767567151319,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.9,\n\
\ \"acc_norm_stderr,none\": 0.01901172751573434\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6470588235294118,\n \"acc_norm_stderr,none\"\
: 0.03504019983419238\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.644,\n\
\ \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.664,\n \"acc_norm_stderr,none\":\
\ 0.029933259094191533\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.76,\n \
\ \"acc_norm_stderr,none\": 0.027065293652238982\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.948,\n \"acc_norm_stderr,none\":\
\ 0.014070391025641678\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.728,\n \
\ \"acc_norm_stderr,none\": 0.028200088296309975\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6643835616438356,\n \"acc_norm_stderr,none\": 0.039214533254314086\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.824,\n \
\ \"acc_norm_stderr,none\": 0.024133497525457123\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.8146067415730337,\n \"acc_norm_stderr,none\"\
: 0.029210186884630146\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.856,\n\
\ \"acc_norm_stderr,none\": 0.022249407735450245\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\":\
\ 0.02925692860650181\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.33053691275167785,\n\
\ \"acc_norm_stderr,none\": 0.013641593555038299,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3333333333333333,\n \"acc_norm_stderr,none\": 0.033586181457325226\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.326007326007326,\n\
\ \"acc_norm_stderr,none\": 0.0200790433174674\n },\n \"\
leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33482142857142855,\n \"acc_norm_stderr,none\"\
: 0.02232142857142857\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7264325323475046,\n \"prompt_level_strict_acc_stderr,none\": 0.019183727107392846,\n\
\ \"inst_level_strict_acc,none\": 0.8033573141486811,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.744916820702403,\n \"prompt_level_loose_acc_stderr,none\": 0.018758491950414184,\n\
\ \"inst_level_loose_acc,none\": 0.8177458033573142,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.3391238670694864,\n \"exact_match_stderr,none\"\
: 0.011834946738836881,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.6091205211726385,\n\
\ \"exact_match_stderr,none\": 0.027894098976471507\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.3089430894308943,\n \"exact_match_stderr,none\": 0.04183273258787621\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.19696969696969696,\n\
\ \"exact_match_stderr,none\": 0.03474801718164943\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.11428571428571428,\n \"exact_match_stderr,none\": 0.019047619047619046\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.3181818181818182,\n\
\ \"exact_match_stderr,none\": 0.03765531225361428\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.5233160621761658,\n \"exact_match_stderr,none\"\
: 0.03604513672442202\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.11851851851851852,\n \"exact_match_stderr,none\"\
: 0.027922050250639006\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.5193650265957447,\n\
\ \"acc_stderr,none\": 0.004555050244694195\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4312169312169312,\n \"acc_norm_stderr,none\"\
: 0.017413586664944487,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.576,\n\
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.26171875,\n \"acc_norm_stderr,none\"\
: 0.027526959754524398\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ }\n },\n \"leaderboard\": {\n \"acc,none\": 0.5193650265957447,\n\
\ \"acc_stderr,none\": 0.004555050244694195,\n \"exact_match,none\"\
: 0.3391238670694864,\n \"exact_match_stderr,none\": 0.011834946738836881,\n\
\ \"inst_level_strict_acc,none\": 0.8033573141486811,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc_norm,none\": 0.580749772992606,\n \"acc_norm_stderr,none\"\
: 0.005096783365172785,\n \"prompt_level_loose_acc,none\": 0.744916820702403,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.018758491950414184,\n \
\ \"prompt_level_strict_acc,none\": 0.7264325323475046,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.019183727107392846,\n \"inst_level_loose_acc,none\": 0.8177458033573142,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6521437250477348,\n\
\ \"acc_norm_stderr,none\": 0.005772767567151319,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.9,\n \"acc_norm_stderr,none\": 0.01901172751573434\n },\n \"leaderboard_bbh_causal_judgement\"\
: {\n \"alias\": \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\"\
: 0.6470588235294118,\n \"acc_norm_stderr,none\": 0.03504019983419238\n \
\ },\n \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" -\
\ leaderboard_bbh_date_understanding\",\n \"acc_norm,none\": 0.684,\n \
\ \"acc_norm_stderr,none\": 0.02946265759857865\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"leaderboard_bbh_formal_fallacies\"\
: {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\"\
: 0.664,\n \"acc_norm_stderr,none\": 0.029933259094191533\n },\n \"\
leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.948,\n \"acc_norm_stderr,none\": 0.014070391025641678\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.728,\n \"acc_norm_stderr,none\": 0.028200088296309975\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6643835616438356,\n\
\ \"acc_norm_stderr,none\": 0.039214533254314086\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.624,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.8146067415730337,\n \"acc_norm_stderr,none\"\
: 0.029210186884630146\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.856,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.33053691275167785,\n\
\ \"acc_norm_stderr,none\": 0.013641593555038299,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3333333333333333,\n\
\ \"acc_norm_stderr,none\": 0.033586181457325226\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.326007326007326,\n \"acc_norm_stderr,none\": 0.0200790433174674\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.33482142857142855,\n \"acc_norm_stderr,none\"\
: 0.02232142857142857\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7264325323475046,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.019183727107392846,\n \
\ \"inst_level_strict_acc,none\": 0.8033573141486811,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.744916820702403,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.018758491950414184,\n \"inst_level_loose_acc,none\"\
: 0.8177458033573142,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.3391238670694864,\n\
\ \"exact_match_stderr,none\": 0.011834946738836881,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.6091205211726385,\n \"exact_match_stderr,none\": 0.027894098976471507\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.3089430894308943,\n \"exact_match_stderr,none\": 0.04183273258787621\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.19696969696969696,\n \"exact_match_stderr,none\"\
: 0.03474801718164943\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.11428571428571428,\n \"exact_match_stderr,none\"\
: 0.019047619047619046\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.3181818181818182,\n \"exact_match_stderr,none\": 0.03765531225361428\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.5233160621761658,\n \"exact_match_stderr,none\"\
: 0.03604513672442202\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.11851851851851852,\n \"exact_match_stderr,none\": 0.027922050250639006\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.5193650265957447,\n \"acc_stderr,none\": 0.004555050244694195\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4312169312169312,\n\
\ \"acc_norm_stderr,none\": 0.017413586664944487,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.26171875,\n \"acc_norm_stderr,none\": 0.027526959754524398\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ }\n}\n```"
repo_url: https://huggingface.co/qingy2024/Fusion4-14B-Instruct
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_ifeval
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-38-08.784726.jsonl'
- config_name: qingy2024__Fusion4-14B-Instruct__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T22_38_08.784726
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-38-08.784726.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-38-08.784726.jsonl'
---
# Dataset Card for Evaluation run of qingy2024/Fusion4-14B-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [qingy2024/Fusion4-14B-Instruct](https://huggingface.co/qingy2024/Fusion4-14B-Instruct)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/qingy2024__Fusion4-14B-Instruct-details",
name="qingy2024__Fusion4-14B-Instruct__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
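Once loaded, the returned object is a standard `datasets.Dataset`, so the usual inspection helpers apply. The snippet below is a minimal sketch that assumes the call above succeeded; the exact field names differ per task, so it only prints what the split actually contains:
```python
# Minimal inspection sketch: assumes `data` was loaded with the snippet above.
# Field names vary between tasks, so we only look at what is actually present.
print(len(data))           # number of evaluated samples in this split
print(data.column_names)   # fields available for this task
print(data[0])             # first sample as a plain Python dict
```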
## Latest results
These are the [latest results from run 2024-12-25T22-38-08.784726](https://huggingface.co/datasets/open-llm-leaderboard/qingy2024__Fusion4-14B-Instruct-details/blob/main/qingy2024__Fusion4-14B-Instruct/results_2024-12-25T22-38-08.784726.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"acc,none": 0.5193650265957447,
"acc_stderr,none": 0.004555050244694195,
"exact_match,none": 0.3391238670694864,
"exact_match_stderr,none": 0.011834946738836881,
"inst_level_strict_acc,none": 0.8033573141486811,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.580749772992606,
"acc_norm_stderr,none": 0.005096783365172785,
"prompt_level_loose_acc,none": 0.744916820702403,
"prompt_level_loose_acc_stderr,none": 0.018758491950414184,
"prompt_level_strict_acc,none": 0.7264325323475046,
"prompt_level_strict_acc_stderr,none": 0.019183727107392846,
"inst_level_loose_acc,none": 0.8177458033573142,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6521437250477348,
"acc_norm_stderr,none": 0.005772767567151319,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.9,
"acc_norm_stderr,none": 0.01901172751573434
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6470588235294118,
"acc_norm_stderr,none": 0.03504019983419238
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.948,
"acc_norm_stderr,none": 0.014070391025641678
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6643835616438356,
"acc_norm_stderr,none": 0.039214533254314086
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8146067415730337,
"acc_norm_stderr,none": 0.029210186884630146
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_gpqa": {
"acc_norm,none": 0.33053691275167785,
"acc_norm_stderr,none": 0.013641593555038299,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3333333333333333,
"acc_norm_stderr,none": 0.033586181457325226
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.326007326007326,
"acc_norm_stderr,none": 0.0200790433174674
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33482142857142855,
"acc_norm_stderr,none": 0.02232142857142857
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7264325323475046,
"prompt_level_strict_acc_stderr,none": 0.019183727107392846,
"inst_level_strict_acc,none": 0.8033573141486811,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.744916820702403,
"prompt_level_loose_acc_stderr,none": 0.018758491950414184,
"inst_level_loose_acc,none": 0.8177458033573142,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.3391238670694864,
"exact_match_stderr,none": 0.011834946738836881,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.6091205211726385,
"exact_match_stderr,none": 0.027894098976471507
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.3089430894308943,
"exact_match_stderr,none": 0.04183273258787621
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.19696969696969696,
"exact_match_stderr,none": 0.03474801718164943
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.11428571428571428,
"exact_match_stderr,none": 0.019047619047619046
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3181818181818182,
"exact_match_stderr,none": 0.03765531225361428
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.5233160621761658,
"exact_match_stderr,none": 0.03604513672442202
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.11851851851851852,
"exact_match_stderr,none": 0.027922050250639006
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5193650265957447,
"acc_stderr,none": 0.004555050244694195
},
"leaderboard_musr": {
"acc_norm,none": 0.4312169312169312,
"acc_norm_stderr,none": 0.017413586664944487,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.26171875,
"acc_norm_stderr,none": 0.027526959754524398
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
}
},
"leaderboard": {
"acc,none": 0.5193650265957447,
"acc_stderr,none": 0.004555050244694195,
"exact_match,none": 0.3391238670694864,
"exact_match_stderr,none": 0.011834946738836881,
"inst_level_strict_acc,none": 0.8033573141486811,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.580749772992606,
"acc_norm_stderr,none": 0.005096783365172785,
"prompt_level_loose_acc,none": 0.744916820702403,
"prompt_level_loose_acc_stderr,none": 0.018758491950414184,
"prompt_level_strict_acc,none": 0.7264325323475046,
"prompt_level_strict_acc_stderr,none": 0.019183727107392846,
"inst_level_loose_acc,none": 0.8177458033573142,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6521437250477348,
"acc_norm_stderr,none": 0.005772767567151319,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.9,
"acc_norm_stderr,none": 0.01901172751573434
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6470588235294118,
"acc_norm_stderr,none": 0.03504019983419238
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.664,
"acc_norm_stderr,none": 0.029933259094191533
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.948,
"acc_norm_stderr,none": 0.014070391025641678
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6643835616438356,
"acc_norm_stderr,none": 0.039214533254314086
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.624,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8146067415730337,
"acc_norm_stderr,none": 0.029210186884630146
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_gpqa": {
"acc_norm,none": 0.33053691275167785,
"acc_norm_stderr,none": 0.013641593555038299,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3333333333333333,
"acc_norm_stderr,none": 0.033586181457325226
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.326007326007326,
"acc_norm_stderr,none": 0.0200790433174674
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.33482142857142855,
"acc_norm_stderr,none": 0.02232142857142857
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7264325323475046,
"prompt_level_strict_acc_stderr,none": 0.019183727107392846,
"inst_level_strict_acc,none": 0.8033573141486811,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.744916820702403,
"prompt_level_loose_acc_stderr,none": 0.018758491950414184,
"inst_level_loose_acc,none": 0.8177458033573142,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.3391238670694864,
"exact_match_stderr,none": 0.011834946738836881,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.6091205211726385,
"exact_match_stderr,none": 0.027894098976471507
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.3089430894308943,
"exact_match_stderr,none": 0.04183273258787621
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.19696969696969696,
"exact_match_stderr,none": 0.03474801718164943
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.11428571428571428,
"exact_match_stderr,none": 0.019047619047619046
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3181818181818182,
"exact_match_stderr,none": 0.03765531225361428
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.5233160621761658,
"exact_match_stderr,none": 0.03604513672442202
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.11851851851851852,
"exact_match_stderr,none": 0.027922050250639006
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.5193650265957447,
"acc_stderr,none": 0.004555050244694195
},
"leaderboard_musr": {
"acc_norm,none": 0.4312169312169312,
"acc_norm_stderr,none": 0.017413586664944487,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.26171875,
"acc_norm_stderr,none": 0.027526959754524398
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
}
}
```
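If you prefer to work with the aggregated results file directly rather than the per-task splits, the JSON linked above can be fetched with the `huggingface_hub` client. This is a minimal sketch assuming `huggingface_hub` is installed; the top-level structure is taken from the excerpt shown above:
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results file linked in the "Latest results" section above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/qingy2024__Fusion4-14B-Instruct-details",
    filename="qingy2024__Fusion4-14B-Instruct/results_2024-12-25T22-38-08.784726.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level keys, then the aggregated leaderboard block if present
# (the excerpt above nests per-task metrics under "all" as well as at the top level).
print(sorted(results.keys()))
print(results.get("all", {}).get("leaderboard"))
```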
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
lomov/29labels_v1 | lomov | "2024-12-25T22:39:26Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-25T22:38:42Z" | ---
license: apache-2.0
---
|
open-llm-leaderboard/Daemontatox__PathfinderAI-details | open-llm-leaderboard | "2024-12-25T22:42:20Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:39:31Z" | ---
pretty_name: Evaluation run of Daemontatox/PathfinderAI
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Daemontatox/PathfinderAI](https://huggingface.co/Daemontatox/PathfinderAI)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/Daemontatox__PathfinderAI-details\"\
,\n\tname=\"Daemontatox__PathfinderAI__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-12-25T22-39-30.466377](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__PathfinderAI-details/blob/main/Daemontatox__PathfinderAI/results_2024-12-25T22-39-30.466377.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_strict_acc,none\": 0.4292565947242206,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.37153419593345655,\n \"prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n\
\ \"inst_level_loose_acc,none\": 0.4772182254196642,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\": 0.6038396679206123,\n\
\ \"acc_norm_stderr,none\": 0.005114412741319218,\n \"prompt_level_strict_acc,none\"\
: 0.3197781885397412,\n \"prompt_level_strict_acc_stderr,none\": 0.02007025155658027,\n\
\ \"exact_match,none\": 0.47583081570996977,\n \"exact_match_stderr,none\"\
: 0.011997412930703567,\n \"acc,none\": 0.559341755319149,\n \
\ \"acc_stderr,none\": 0.004526251764884398,\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6627321645547648,\n\
\ \"acc_norm_stderr,none\": 0.0057158112089299344,\n \"alias\"\
: \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.904,\n \"acc_norm_stderr,none\": 0.01866896141947719\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6951871657754011,\n\
\ \"acc_norm_stderr,none\": 0.03375289476367582\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.736,\n \"acc_norm_stderr,none\":\
\ 0.027934518957690866\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.724,\n\
\ \"acc_norm_stderr,none\": 0.02832853727421142\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\":\
\ 0.03109668818482536\n },\n \"leaderboard_bbh_hyperbaton\": {\n \
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.868,\n \"acc_norm_stderr,none\": 0.021450980824038166\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.464,\n\
\ \"acc_norm_stderr,none\": 0.03160397514522374\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.7328767123287672,\n \"acc_norm_stderr,none\"\
: 0.03674407640319397\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\":\
\ 0.02496069198917196\n },\n \"leaderboard_bbh_ruin_names\": {\n \
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.8707865168539326,\n\
\ \"acc_norm_stderr,none\": 0.02521291917508837\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.728,\n \"acc_norm_stderr,none\":\
\ 0.028200088296309975\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.864,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\":\
\ 0.027513851933031318\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\":\
\ 0.02721799546455311\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.608,\n\
\ \"acc_norm_stderr,none\": 0.030938207620401222\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.39429530201342283,\n\
\ \"acc_norm_stderr,none\": 0.014165615446017703,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3939393939393939,\n \"acc_norm_stderr,none\": 0.03481285338232962\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3791208791208791,\n\
\ \"acc_norm_stderr,none\": 0.02078232480021949\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.41294642857142855,\n \"acc_norm_stderr,none\"\
: 0.023287987691016507\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.3197781885397412,\n \"prompt_level_strict_acc_stderr,none\": 0.02007025155658027,\n\
\ \"inst_level_strict_acc,none\": 0.4292565947242206,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.37153419593345655,\n \"prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n\
\ \"inst_level_loose_acc,none\": 0.47721822541966424,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.47583081570996977,\n \"exact_match_stderr,none\"\
: 0.011997412930703567,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.752442996742671,\n\
\ \"exact_match_stderr,none\": 0.024672530661985218\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.4959349593495935,\n \"exact_match_stderr,none\": 0.045266376933577414\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.3181818181818182,\n\
\ \"exact_match_stderr,none\": 0.040694556602840146\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.16428571428571428,\n \"exact_match_stderr,none\": 0.02218332855621051\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.577922077922078,\n\
\ \"exact_match_stderr,none\": 0.039928706872358506\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.7150259067357513,\n \"exact_match_stderr,none\"\
: 0.032577140777096614\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.17037037037037037,\n \"exact_match_stderr,none\"\
: 0.032477811859955956\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.559341755319149,\n\
\ \"acc_stderr,none\": 0.004526251764884398\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.48544973544973546,\n \"acc_norm_stderr,none\"\
: 0.017990685826838902,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.596,\n\
\ \"acc_norm_stderr,none\": 0.03109668818482536\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.4375,\n \"acc_norm_stderr,none\"\
: 0.031065632609231775\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n },\n \"leaderboard\": {\n \"inst_level_strict_acc,none\"\
: 0.4292565947242206,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_loose_acc,none\": 0.37153419593345655,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.020794253888707582,\n \"inst_level_loose_acc,none\": 0.4772182254196642,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.6038396679206123,\n \"acc_norm_stderr,none\": 0.005114412741319218,\n\
\ \"prompt_level_strict_acc,none\": 0.3197781885397412,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.02007025155658027,\n \"exact_match,none\": 0.47583081570996977,\n \
\ \"exact_match_stderr,none\": 0.011997412930703567,\n \"acc,none\":\
\ 0.559341755319149,\n \"acc_stderr,none\": 0.004526251764884398,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.6627321645547648,\n \"acc_norm_stderr,none\": 0.0057158112089299344,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.904,\n \"acc_norm_stderr,none\": 0.01866896141947719\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6951871657754011,\n \"acc_norm_stderr,none\"\
: 0.03375289476367582\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.736,\n \"acc_norm_stderr,none\": 0.027934518957690866\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.724,\n \"acc_norm_stderr,none\": 0.02832853727421142\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.868,\n \"acc_norm_stderr,none\": 0.021450980824038166\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.7328767123287672,\n\
\ \"acc_norm_stderr,none\": 0.03674407640319397\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.8707865168539326,\n \"acc_norm_stderr,none\"\
: 0.02521291917508837\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.728,\n \"acc_norm_stderr,none\": 0.028200088296309975\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.864,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.39429530201342283,\n\
\ \"acc_norm_stderr,none\": 0.014165615446017703,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3939393939393939,\n\
\ \"acc_norm_stderr,none\": 0.03481285338232962\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3791208791208791,\n \"acc_norm_stderr,none\": 0.02078232480021949\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.41294642857142855,\n \"acc_norm_stderr,none\"\
: 0.023287987691016507\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.3197781885397412,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02007025155658027,\n \
\ \"inst_level_strict_acc,none\": 0.4292565947242206,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.37153419593345655,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.020794253888707582,\n \"inst_level_loose_acc,none\"\
: 0.47721822541966424,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.47583081570996977,\n\
\ \"exact_match_stderr,none\": 0.011997412930703567,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.752442996742671,\n \"exact_match_stderr,none\": 0.024672530661985218\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.4959349593495935,\n \"exact_match_stderr,none\": 0.045266376933577414\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.3181818181818182,\n \"exact_match_stderr,none\"\
: 0.040694556602840146\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.16428571428571428,\n \"exact_match_stderr,none\"\
: 0.02218332855621051\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.577922077922078,\n \"exact_match_stderr,none\": 0.039928706872358506\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.7150259067357513,\n \"exact_match_stderr,none\"\
: 0.032577140777096614\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.17037037037037037,\n \"exact_match_stderr,none\": 0.032477811859955956\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.559341755319149,\n \"acc_stderr,none\": 0.004526251764884398\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.48544973544973546,\n\
\ \"acc_norm_stderr,none\": 0.017990685826838902,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.4375,\n \"acc_norm_stderr,none\": 0.031065632609231775\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Daemontatox/PathfinderAI
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_ifeval
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-39-30.466377.jsonl'
- config_name: Daemontatox__PathfinderAI__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T22_39_30.466377
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-39-30.466377.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-39-30.466377.jsonl'
---
# Dataset Card for Evaluation run of Daemontatox/PathfinderAI
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Daemontatox/PathfinderAI](https://huggingface.co/Daemontatox/PathfinderAI).
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/Daemontatox__PathfinderAI-details",
name="Daemontatox__PathfinderAI__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
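If you want to browse the available task configurations before picking one, you can list them programmatically; the snippet below is a minimal sketch using the standard `datasets` API, and the config names simply mirror the `configs` section above.
```python
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/Daemontatox__PathfinderAI-details"

# List every per-task configuration stored in this details repository.
configs = get_dataset_config_names(repo_id)
print(f"{len(configs)} configurations available")

# Load the most recent samples for one of them; the "latest" split always
# points to the newest evaluation run.
data = load_dataset(repo_id, name=configs[0], split="latest")
print(data)
```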
## Latest results
These are the [latest results from run 2024-12-25T22-39-30.466377](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__PathfinderAI-details/blob/main/Daemontatox__PathfinderAI/results_2024-12-25T22-39-30.466377.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"inst_level_strict_acc,none": 0.4292565947242206,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.37153419593345655,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.4772182254196642,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.6038396679206123,
"acc_norm_stderr,none": 0.005114412741319218,
"prompt_level_strict_acc,none": 0.3197781885397412,
"prompt_level_strict_acc_stderr,none": 0.02007025155658027,
"exact_match,none": 0.47583081570996977,
"exact_match_stderr,none": 0.011997412930703567,
"acc,none": 0.559341755319149,
"acc_stderr,none": 0.004526251764884398,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6627321645547648,
"acc_norm_stderr,none": 0.0057158112089299344,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.904,
"acc_norm_stderr,none": 0.01866896141947719
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6951871657754011,
"acc_norm_stderr,none": 0.03375289476367582
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.736,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.724,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.868,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.7328767123287672,
"acc_norm_stderr,none": 0.03674407640319397
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.672,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8707865168539326,
"acc_norm_stderr,none": 0.02521291917508837
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.864,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_gpqa": {
"acc_norm,none": 0.39429530201342283,
"acc_norm_stderr,none": 0.014165615446017703,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3939393939393939,
"acc_norm_stderr,none": 0.03481285338232962
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3791208791208791,
"acc_norm_stderr,none": 0.02078232480021949
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.41294642857142855,
"acc_norm_stderr,none": 0.023287987691016507
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3197781885397412,
"prompt_level_strict_acc_stderr,none": 0.02007025155658027,
"inst_level_strict_acc,none": 0.4292565947242206,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.37153419593345655,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.47721822541966424,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.47583081570996977,
"exact_match_stderr,none": 0.011997412930703567,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.752442996742671,
"exact_match_stderr,none": 0.024672530661985218
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.4959349593495935,
"exact_match_stderr,none": 0.045266376933577414
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.3181818181818182,
"exact_match_stderr,none": 0.040694556602840146
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.16428571428571428,
"exact_match_stderr,none": 0.02218332855621051
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.577922077922078,
"exact_match_stderr,none": 0.039928706872358506
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.7150259067357513,
"exact_match_stderr,none": 0.032577140777096614
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.17037037037037037,
"exact_match_stderr,none": 0.032477811859955956
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.559341755319149,
"acc_stderr,none": 0.004526251764884398
},
"leaderboard_musr": {
"acc_norm,none": 0.48544973544973546,
"acc_norm_stderr,none": 0.017990685826838902,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.4375,
"acc_norm_stderr,none": 0.031065632609231775
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
},
"leaderboard": {
"inst_level_strict_acc,none": 0.4292565947242206,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.37153419593345655,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.4772182254196642,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.6038396679206123,
"acc_norm_stderr,none": 0.005114412741319218,
"prompt_level_strict_acc,none": 0.3197781885397412,
"prompt_level_strict_acc_stderr,none": 0.02007025155658027,
"exact_match,none": 0.47583081570996977,
"exact_match_stderr,none": 0.011997412930703567,
"acc,none": 0.559341755319149,
"acc_stderr,none": 0.004526251764884398,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6627321645547648,
"acc_norm_stderr,none": 0.0057158112089299344,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.904,
"acc_norm_stderr,none": 0.01866896141947719
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6951871657754011,
"acc_norm_stderr,none": 0.03375289476367582
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.736,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.724,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.868,
"acc_norm_stderr,none": 0.021450980824038166
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.7328767123287672,
"acc_norm_stderr,none": 0.03674407640319397
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.672,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.8707865168539326,
"acc_norm_stderr,none": 0.02521291917508837
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.728,
"acc_norm_stderr,none": 0.028200088296309975
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.864,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_gpqa": {
"acc_norm,none": 0.39429530201342283,
"acc_norm_stderr,none": 0.014165615446017703,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3939393939393939,
"acc_norm_stderr,none": 0.03481285338232962
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3791208791208791,
"acc_norm_stderr,none": 0.02078232480021949
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.41294642857142855,
"acc_norm_stderr,none": 0.023287987691016507
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.3197781885397412,
"prompt_level_strict_acc_stderr,none": 0.02007025155658027,
"inst_level_strict_acc,none": 0.4292565947242206,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.37153419593345655,
"prompt_level_loose_acc_stderr,none": 0.020794253888707582,
"inst_level_loose_acc,none": 0.47721822541966424,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.47583081570996977,
"exact_match_stderr,none": 0.011997412930703567,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.752442996742671,
"exact_match_stderr,none": 0.024672530661985218
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.4959349593495935,
"exact_match_stderr,none": 0.045266376933577414
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.3181818181818182,
"exact_match_stderr,none": 0.040694556602840146
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.16428571428571428,
"exact_match_stderr,none": 0.02218332855621051
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.577922077922078,
"exact_match_stderr,none": 0.039928706872358506
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.7150259067357513,
"exact_match_stderr,none": 0.032577140777096614
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.17037037037037037,
"exact_match_stderr,none": 0.032477811859955956
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.559341755319149,
"acc_stderr,none": 0.004526251764884398
},
"leaderboard_musr": {
"acc_norm,none": 0.48544973544973546,
"acc_norm_stderr,none": 0.017990685826838902,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.4375,
"acc_norm_stderr,none": 0.031065632609231775
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
}
```
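If you prefer to work with the aggregated results file directly rather than through `load_dataset`, you can fetch the JSON linked above with `huggingface_hub`. This is a minimal sketch: the filename comes from the link above, and the key lookup assumes the file follows the structure shown in this excerpt.
```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results JSON for run 2024-12-25T22-39-30.466377.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/Daemontatox__PathfinderAI-details",
    filename="Daemontatox__PathfinderAI/results_2024-12-25T22-39-30.466377.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level sections, then read one aggregate score as an example
# (assumes the "all" section shown above is present in the file).
print(sorted(results.keys()))
print(results["all"]["leaderboard_bbh"]["acc_norm,none"])
```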
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
lomov/29labels_v2 | lomov | "2024-12-25T22:40:26Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-25T22:39:45Z" | ---
license: apache-2.0
---
|
1231czx/llama31_test_chat_format_20k_only_firstwrong_and_regular_first_corr10k_ep3tmp07 | 1231czx | "2024-12-25T22:41:57Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:41:56Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18847970
num_examples: 5000
download_size: 6420427
dataset_size: 18847970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/Saxo__Linkbricks-Horizon-AI-Superb-27B-details | open-llm-leaderboard | "2024-12-25T22:45:49Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:42:50Z" | ---
pretty_name: Evaluation run of Saxo/Linkbricks-Horizon-AI-Superb-27B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Saxo/Linkbricks-Horizon-AI-Superb-27B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Superb-27B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/Saxo__Linkbricks-Horizon-AI-Superb-27B-details\"\
,\n\tname=\"Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-12-25T22-42-49.895693](https://huggingface.co/datasets/open-llm-leaderboard/Saxo__Linkbricks-Horizon-AI-Superb-27B-details/blob/main/Saxo__Linkbricks-Horizon-AI-Superb-27B/results_2024-12-25T22-42-49.895693.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.7134935304990758,\n \"\
prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n \"prompt_level_strict_acc,none\"\
: 0.6894639556377079,\n \"prompt_level_strict_acc_stderr,none\": 0.019912001290591178,\n\
\ \"acc,none\": 0.406000664893617,\n \"acc_stderr,none\":\
\ 0.004477189622270791,\n \"inst_level_strict_acc,none\": 0.7709832134292566,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.5619405889220391,\n \"acc_norm_stderr,none\": 0.005256728642101159,\n\
\ \"exact_match,none\": 0.15709969788519637,\n \"exact_match_stderr,none\"\
: 0.009342657681072883,\n \"inst_level_loose_acc,none\": 0.7925659472422062,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.6170803679916681,\n \"acc_norm_stderr,none\": 0.005982137660474842,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.892,\n \"acc_norm_stderr,none\": 0.019669559381568776\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6149732620320856,\n\
\ \"acc_norm_stderr,none\": 0.03567936280544673\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.752,\n\
\ \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\":\
\ 0.03056207062099311\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.788,\n \
\ \"acc_norm_stderr,none\": 0.025901884690541117\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.6095890410958904,\n \"acc_norm_stderr,none\": 0.040513109165891854\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.764,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.752,\n \
\ \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7584269662921348,\n \"acc_norm_stderr,none\"\
: 0.032173216138332565\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.86,\n\
\ \"acc_norm_stderr,none\": 0.021989409645240245\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\":\
\ 0.02915021337415965\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.464,\n\
\ \"acc_norm_stderr,none\": 0.03160397514522374\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3573825503355705,\n\
\ \"acc_norm_stderr,none\": 0.013897218993125009,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3686868686868687,\n \"acc_norm_stderr,none\": 0.03437305501980615\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3553113553113553,\n\
\ \"acc_norm_stderr,none\": 0.02050129537631028\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3549107142857143,\n \"acc_norm_stderr,none\"\
: 0.022631623416326775\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.6894639556377079,\n \"prompt_level_strict_acc_stderr,none\": 0.019912001290591178,\n\
\ \"inst_level_strict_acc,none\": 0.7709832134292566,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7134935304990758,\n \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n\
\ \"inst_level_loose_acc,none\": 0.7925659472422062,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.15709969788519637,\n \"exact_match_stderr,none\"\
: 0.009342657681072883,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.3322475570032573,\n\
\ \"exact_match_stderr,none\": 0.026926377345574907\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.10569105691056911,\n \"exact_match_stderr,none\": 0.0278344722877674\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.022727272727272728,\n\
\ \"exact_match_stderr,none\": 0.0130210469090637\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.03214285714285714,\n \"exact_match_stderr,none\": 0.01055955866175321\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.07142857142857142,\n\
\ \"exact_match_stderr,none\": 0.020820824576076338\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.31088082901554404,\n \"exact_match_stderr,none\"\
: 0.03340361906276587\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.07407407407407407,\n \"exact_match_stderr,none\"\
: 0.02262397117709356\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.406000664893617,\n\
\ \"acc_stderr,none\": 0.004477189622270791\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4642857142857143,\n \"acc_norm_stderr,none\"\
: 0.01774886189462961,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.616,\n\
\ \"acc_norm_stderr,none\": 0.030821679117375447\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\":\
\ 0.03031695312954162\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.404,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.7134935304990758,\n \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n\
\ \"prompt_level_strict_acc,none\": 0.6894639556377079,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.019912001290591178,\n \"acc,none\": 0.406000664893617,\n \"acc_stderr,none\"\
: 0.004477189622270791,\n \"inst_level_strict_acc,none\": 0.7709832134292566,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.5619405889220391,\n \"acc_norm_stderr,none\": 0.005256728642101159,\n\
\ \"exact_match,none\": 0.15709969788519637,\n \"exact_match_stderr,none\"\
: 0.009342657681072883,\n \"inst_level_loose_acc,none\": 0.7925659472422062,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6170803679916681,\n\
\ \"acc_norm_stderr,none\": 0.005982137660474842,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.892,\n \"acc_norm_stderr,none\": 0.019669559381568776\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\"\
: 0.752,\n \"acc_norm_stderr,none\": 0.027367497504863593\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.516,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.756,\n \"acc_norm_stderr,none\": 0.02721799546455311\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.712,\n \"acc_norm_stderr,none\": 0.028697004587398257\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.6095890410958904,\n\
\ \"acc_norm_stderr,none\": 0.040513109165891854\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.764,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.752,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7584269662921348,\n \"acc_norm_stderr,none\"\
: 0.032173216138332565\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.86,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.38,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3573825503355705,\n\
\ \"acc_norm_stderr,none\": 0.013897218993125009,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3686868686868687,\n\
\ \"acc_norm_stderr,none\": 0.03437305501980615\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3553113553113553,\n \"acc_norm_stderr,none\": 0.02050129537631028\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3549107142857143,\n \"acc_norm_stderr,none\"\
: 0.022631623416326775\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.6894639556377079,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.019912001290591178,\n \
\ \"inst_level_strict_acc,none\": 0.7709832134292566,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7134935304990758,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n \"inst_level_loose_acc,none\"\
: 0.7925659472422062,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.15709969788519637,\n\
\ \"exact_match_stderr,none\": 0.009342657681072883,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.3322475570032573,\n \"exact_match_stderr,none\": 0.026926377345574907\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.10569105691056911,\n \"exact_match_stderr,none\": 0.0278344722877674\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.022727272727272728,\n \"exact_match_stderr,none\"\
: 0.0130210469090637\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.03214285714285714,\n \"exact_match_stderr,none\"\
: 0.01055955866175321\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.07142857142857142,\n \"exact_match_stderr,none\": 0.020820824576076338\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.31088082901554404,\n \"exact_match_stderr,none\"\
: 0.03340361906276587\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.07407407407407407,\n \"exact_match_stderr,none\": 0.02262397117709356\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.406000664893617,\n \"acc_stderr,none\": 0.004477189622270791\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4642857142857143,\n\
\ \"acc_norm_stderr,none\": 0.01774886189462961,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\": 0.03031695312954162\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.404,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Superb-27B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_navigate
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_object_counting
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_snarks
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_gpqa_diamond
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_gpqa_extended
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_gpqa_main
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_ifeval
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_algebra_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_geometry_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_mmlu_pro
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_musr_object_placements
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-12-25T22-42-49.895693.jsonl'
- config_name: Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_musr_team_allocation
data_files:
- split: 2024_12_25T22_42_49.895693
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-42-49.895693.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-12-25T22-42-49.895693.jsonl'
---
# Dataset Card for Evaluation run of Saxo/Linkbricks-Horizon-AI-Superb-27B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Saxo/Linkbricks-Horizon-AI-Superb-27B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Superb-27B)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/Saxo__Linkbricks-Horizon-AI-Superb-27B-details",
name="Saxo__Linkbricks-Horizon-AI-Superb-27B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-25T22-42-49.895693](https://huggingface.co/datasets/open-llm-leaderboard/Saxo__Linkbricks-Horizon-AI-Superb-27B-details/blob/main/Saxo__Linkbricks-Horizon-AI-Superb-27B/results_2024-12-25T22-42-49.895693.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"prompt_level_strict_acc,none": 0.6894639556377079,
"prompt_level_strict_acc_stderr,none": 0.019912001290591178,
"acc,none": 0.406000664893617,
"acc_stderr,none": 0.004477189622270791,
"inst_level_strict_acc,none": 0.7709832134292566,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.5619405889220391,
"acc_norm_stderr,none": 0.005256728642101159,
"exact_match,none": 0.15709969788519637,
"exact_match_stderr,none": 0.009342657681072883,
"inst_level_loose_acc,none": 0.7925659472422062,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6170803679916681,
"acc_norm_stderr,none": 0.005982137660474842,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.892,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6095890410958904,
"acc_norm_stderr,none": 0.040513109165891854
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.764,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7584269662921348,
"acc_norm_stderr,none": 0.032173216138332565
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3573825503355705,
"acc_norm_stderr,none": 0.013897218993125009,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3686868686868687,
"acc_norm_stderr,none": 0.03437305501980615
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3553113553113553,
"acc_norm_stderr,none": 0.02050129537631028
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3549107142857143,
"acc_norm_stderr,none": 0.022631623416326775
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.6894639556377079,
"prompt_level_strict_acc_stderr,none": 0.019912001290591178,
"inst_level_strict_acc,none": 0.7709832134292566,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.7925659472422062,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.15709969788519637,
"exact_match_stderr,none": 0.009342657681072883,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3322475570032573,
"exact_match_stderr,none": 0.026926377345574907
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.10569105691056911,
"exact_match_stderr,none": 0.0278344722877674
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.020820824576076338
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.31088082901554404,
"exact_match_stderr,none": 0.03340361906276587
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.07407407407407407,
"exact_match_stderr,none": 0.02262397117709356
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.406000664893617,
"acc_stderr,none": 0.004477189622270791
},
"leaderboard_musr": {
"acc_norm,none": 0.4642857142857143,
"acc_norm_stderr,none": 0.01774886189462961,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.03031695312954162
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.404,
"acc_norm_stderr,none": 0.03109668818482536
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"prompt_level_strict_acc,none": 0.6894639556377079,
"prompt_level_strict_acc_stderr,none": 0.019912001290591178,
"acc,none": 0.406000664893617,
"acc_stderr,none": 0.004477189622270791,
"inst_level_strict_acc,none": 0.7709832134292566,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.5619405889220391,
"acc_norm_stderr,none": 0.005256728642101159,
"exact_match,none": 0.15709969788519637,
"exact_match_stderr,none": 0.009342657681072883,
"inst_level_loose_acc,none": 0.7925659472422062,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6170803679916681,
"acc_norm_stderr,none": 0.005982137660474842,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.892,
"acc_norm_stderr,none": 0.019669559381568776
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.516,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.756,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.712,
"acc_norm_stderr,none": 0.028697004587398257
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.6095890410958904,
"acc_norm_stderr,none": 0.040513109165891854
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.764,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.752,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7584269662921348,
"acc_norm_stderr,none": 0.032173216138332565
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.38,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3573825503355705,
"acc_norm_stderr,none": 0.013897218993125009,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3686868686868687,
"acc_norm_stderr,none": 0.03437305501980615
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3553113553113553,
"acc_norm_stderr,none": 0.02050129537631028
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3549107142857143,
"acc_norm_stderr,none": 0.022631623416326775
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.6894639556377079,
"prompt_level_strict_acc_stderr,none": 0.019912001290591178,
"inst_level_strict_acc,none": 0.7709832134292566,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.7925659472422062,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.15709969788519637,
"exact_match_stderr,none": 0.009342657681072883,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3322475570032573,
"exact_match_stderr,none": 0.026926377345574907
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.10569105691056911,
"exact_match_stderr,none": 0.0278344722877674
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.020820824576076338
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.31088082901554404,
"exact_match_stderr,none": 0.03340361906276587
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.07407407407407407,
"exact_match_stderr,none": 0.02262397117709356
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.406000664893617,
"acc_stderr,none": 0.004477189622270791
},
"leaderboard_musr": {
"acc_norm,none": 0.4642857142857143,
"acc_norm_stderr,none": 0.01774886189462961,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.03031695312954162
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.404,
"acc_norm_stderr,none": 0.03109668818482536
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
1231czx/llama31_chat_format_20k_ep3tmp07 | 1231czx | "2024-12-25T22:42:57Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:42:56Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18375713
num_examples: 5000
download_size: 6315088
dataset_size: 18375713
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_test_chat_format_20k_only_firstwrong_and_regular_first_corr10k_ep3tmp0 | 1231czx | "2024-12-25T22:48:52Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:48:51Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18887181
num_examples: 5000
download_size: 5862719
dataset_size: 18887181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/llama31_chat_format_20k_ep3tmp0 | 1231czx | "2024-12-25T22:49:44Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:49:42Z" | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 18125956
num_examples: 5000
download_size: 5821588
dataset_size: 18125956
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
metalure/SEER-sft-02 | metalure | "2024-12-26T01:11:13Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:55:07Z" | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
dtype: string
- name: constitutions
list:
- name: answers
sequence: string
- name: content
dtype: string
- name: id
dtype: string
- name: preferred_answer
dtype: int64
splits:
- name: train
num_bytes: 6016095
num_examples: 1000
download_size: 2387527
dataset_size: 6016095
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maniro-ai/2024-12-25-rod-barn-all-grasp-berry-mjpeg | maniro-ai | "2024-12-25T23:03:00Z" | 0 | 0 | [
"task_categories:robotics",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2024-12-25T22:56:15Z" | ---
task_categories:
- robotics
tags:
- LeRobot
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
|
spiralworks/lg_domain_gen_2015_2020_test_6 | spiralworks | "2024-12-25T23:17:37Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T22:58:51Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: abstract
dtype: string
- name: year
dtype: string
- name: venue
dtype: string
- name: keywords
sequence: string
- name: pdf_url
dtype: string
- name: forum_url
dtype: string
- name: forum_raw_text
sequence: string
- name: reviews_raw_text
sequence: string
- name: average_rating
dtype: float64
- name: average_confidence
dtype: float64
- name: reviews
sequence: string
splits:
- name: train
num_bytes: 96432221
num_examples: 2506
download_size: 47960225
dataset_size: 96432221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UniDataPro/water-meters | UniDataPro | "2024-12-25T23:10:22Z" | 0 | 1 | [
"task_categories:image-segmentation",
"license:cc-by-nc-nd-4.0",
"size_categories:1K<n<10K",
"modality:image",
"region:us",
"water meter",
"smart city",
"computer vision",
"image",
"communal service"
] | [
"image-segmentation"
] | "2024-12-25T23:06:11Z" | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
tags:
- water meter
- smart city
- computer vision
- image
- communal service
size_categories:
- 1K<n<10K
---
# Water Meters Dataset - OCR, Masks
The dataset comprises **5,000+** images of water meters, including their corresponding **segmentation masks** and **OCR labels** for meter readings. It is designed to facilitate research in **water consumption analysis** and **smart meter technology**, providing valuable insights into residential and commercial water usage.
By utilizing this dataset, researchers and developers can enhance their understanding of water meter readings and improve methods for **automatic recognition** and data collection.
- **[Get the data](https://unidata.pro/datasets/water-meters/?utm_source=huggingface&utm_medium=cpc&utm_campaign=water-meters)**
# Example of the data
![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F22059654%2F7580bad9a4e171194a942954cc469a3e%2FFrame%20185.png?generation=1735167809138042&alt=media)
Each image in the dataset contains critical information such as the value of the water meter and the location of the bounding box, enabling accurate meter reading extraction. The dataset is organized into three distinct folders: bounding box images, original images, and segmentation masks, allowing for efficient data management and accessibility.
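As a rough sketch of how such a three-folder layout could be traversed once the archive is downloaded, the snippet below pairs each original image with its segmentation mask by file name. The folder names (`images/`, `masks/`) and the matching-by-filename convention are assumptions made purely for illustration; the delivered archive may organize the files differently.
```python
from pathlib import Path
from PIL import Image

# Assumed layout for illustration only; adjust to the folder names in the delivered archive.
root = Path("water_meters")

for img_path in sorted((root / "images").glob("*.jpg")):
    mask_path = root / "masks" / img_path.name
    if not mask_path.exists():
        continue
    image = Image.open(img_path)   # original photo of the meter
    mask = Image.open(mask_path)   # segmentation mask for the reading area
    print(img_path.name, image.size, mask.size)
```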
# 💵 Buy the Dataset: This is a limited preview of the data. To access the full dataset, please contact us at [https://unidata.pro](https://unidata.pro/datasets/water-meters/?utm_source=huggingface&utm_medium=cpc&utm_campaign=water-meters) to discuss your requirements and pricing options.
This dataset provides a solid foundation for developing advanced recognition algorithms and improving the accuracy of water meter reading systems, which are essential for effective water resource management and conservation efforts.
# 🌐 [UniData](https://unidata.pro/datasets/water-meters/?utm_source=huggingface&utm_medium=cpc&utm_campaign=water-meters) provides high-quality datasets, content moderation, data collection and annotation for your AI/ML projects |
AhmedBadawy11/UAE_instructions_responses | AhmedBadawy11 | "2024-12-25T23:09:17Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:09:16Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 30843
num_examples: 100
download_size: 17868
dataset_size: 30843
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen3_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-25T23:09:47Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:09:43Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 794952
num_examples: 1000
download_size: 421633
dataset_size: 794952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
meteorinc/yesbro1 | meteorinc | "2024-12-25T23:24:57Z" | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | "2024-12-25T23:24:49Z" | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "koch",
"total_episodes": 5,
"total_frames": 2405,
"total_tasks": 1,
"total_videos": 10,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
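The feature schema above maps directly to columns in the parquet files referenced by the `data_files` glob in the YAML header. The following is a minimal sketch for inspecting that tabular data with the Hugging Face `datasets` library; it assumes public read access to the repository, and note that the two video features are stored as separate mp4 files under `videos/` rather than as decoded frames in the parquet data.
```python
from datasets import load_dataset

# Sketch: load the per-frame tabular data (actions, joint states, indices).
# The mp4 video streams described in info.json above are not decoded here.
ds = load_dataset("meteorinc/yesbro1", split="train")

print(ds.column_names)         # e.g. action, observation.state, timestamp, ...
frame = ds[0]
print(frame["action"])         # 6-dim action vector for the first frame
print(frame["episode_index"])  # which of the 5 episodes this frame belongs to
```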
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
winglian/evolkit-logprobs-prepared-kd-temp-2_0-context-8k | winglian | "2024-12-25T23:57:04Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:28:34Z" | ---
configs:
- config_name: subset_0
data_files:
- split: train
path: "data/subset_0/*.arrow"
- config_name: subset_1
data_files:
- split: train
path: "data/subset_1/*.arrow"
- config_name: subset_2
data_files:
- split: train
path: "data/subset_2/*.arrow"
- config_name: subset_3
data_files:
- split: train
path: "data/subset_3/*.arrow"
- config_name: subset_4
data_files:
- split: train
path: "data/subset_4/*.arrow"
---
|
paralaif/prl_crimson_maiden_style | paralaif | "2024-12-25T23:34:22Z" | 0 | 0 | [
"language:en",
"license:cc0-1.0",
"modality:image",
"region:us",
"image"
] | null | "2024-12-25T23:31:52Z" | ---
language:
- en
pretty_name: "Crimson Maiden Lora Training Data"
tags:
- image
license: "cc0-1.0"
---
# prl_crimson_maiden_style Dataset
This dataset is used to train my Crimson Maiden LoRA from Civitai. |
qfq/train_all_features | qfq | "2024-12-25T23:37:56Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:35:10Z" | ---
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
dtype: 'null'
- name: isqwen32bcorrect
dtype: bool
- name: isgenminicorrect
dtype: bool
- name: thinking_trajectories
sequence: string
- name: attempt
dtype: string
- name: thinking_tokens
dtype: int64
- name: thinking_tokens_rounded
dtype: int64
- name: answer_chars
dtype: int64
- name: answer_tokens
dtype: int64
- name: answer_tokens_rounded
dtype: int64
- name: ratio_thinking_answer_tokens
dtype: float64
splits:
- name: train
num_bytes: 9026942676
num_examples: 58137
download_size: 5820956367
dataset_size: 9026942676
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qfq/train_all_features_longest_correct | qfq | "2024-12-25T23:44:21Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:42:09Z" | ---
dataset_info:
features:
- name: solution
dtype: string
- name: question
dtype: string
- name: cot_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
- name: cot
dtype: 'null'
- name: isqwen32bcorrect
dtype: bool
- name: isgenminicorrect
dtype: bool
- name: thinking_trajectories
sequence: string
- name: attempt
dtype: string
- name: thinking_tokens
dtype: int64
- name: thinking_tokens_rounded
dtype: int64
- name: answer_chars
dtype: int64
- name: answer_tokens
dtype: int64
- name: answer_tokens_rounded
dtype: int64
- name: ratio_thinking_answer_tokens
dtype: float64
splits:
- name: train
num_bytes: 155270183.807214
num_examples: 1000
download_size: 7574894
dataset_size: 155270183.807214
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
```
{'AI-MO/NuminaMath-CoT/aops_forum': 620, 'KbsdJames/Omni-MATH': 149, 'qfq/openaimath/Precalculus': 76, 'qfq/openaimath/Intermediate Algebra': 49, 'qfq/openaimath/Geometry': 27, 'qfq/openaimath/Number Theory': 17, 'qfq/openaimath/Counting & Probability': 15, 'daman1209arora/jeebench/math': 8, 'GAIR/OlympicArena/Math': 8, 'qfq/openaimath/Prealgebra': 6, 'AI-MO/NuminaMath-CoT/olympiads': 6, 'Hothan/OlympiadBench/Theorem proof/Math': 4, 'Hothan/OlympiadBench/Open-ended/Physics': 3, 'qfq/stats_qual': 3, 'TIGER-Lab/TheoremQA/integer': 2, 'baber/agieval/aqua_rat': 1, 'AI-MO/NuminaMath-CoT/cn_k12': 1, 'TIGER-Lab/TheoremQA/list of integer': 1, 'Hothan/OlympiadBench/Open-ended/Math': 1, 'Hothan/OlympiadBench/Theorem proof/Physics': 1, 'TIGER-Lab/TheoremQA/float': 1, 'qfq/openaimath/Algebra': 1}
``` |
gghozzit/20-quran-test | gghozzit | "2024-12-25T23:48:31Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:45:45Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4872553
num_examples: 5148
download_size: 2401032
dataset_size: 4872553
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen4_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-25T23:49:54Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:49:50Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 787746
num_examples: 1000
download_size: 414254
dataset_size: 787746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
eja/wikilite | eja | "2024-12-26T00:22:11Z" | 0 | 0 | [
"language:it",
"language:es",
"language:sc",
"license:gfdl",
"region:us",
"sqlite",
"wikipedia",
"wikilite",
"eja"
] | null | "2024-12-25T23:54:29Z" | ---
license: gfdl
language:
- it
- es
- sc
tags:
- sqlite
- wikipedia
- wikilite
- eja
pretty_name: wikilite
---
# Processed Wikipedia SQLite Databases for Wikilite
This dataset provides pre-processed SQLite databases of Wikipedia articles for use with the [Wikilite](https://github.com/eja/wikilite) tool. These databases allow you to quickly and efficiently search and access Wikipedia content offline using Wikilite's lexical and semantic search capabilities.
## Supported Languages
Currently, the dataset includes databases for the following languages:
* **Sardinian (sc)**
* **Italian (it)**
* **Spanish (es)**
More languages may be added in the future.
## Dataset Structure
Each language is stored as a separate compressed file (`.db.gz`) within the dataset. For example:
* `it.db.gz` - Italian Wikipedia database
* `sc.db.gz` - Sardinian Wikipedia database
* `es.db.gz` - Spanish Wikipedia database
## How to Use this Dataset
1. **Download the Desired Database:** Choose the database for the language you want to use and download the corresponding `.db.gz` file.
2. **Decompress the Database:** Use a tool like `gunzip` to decompress the downloaded file. For example, on Linux or macOS, you can run the following command in your terminal:
```bash
gunzip it.db.gz
```
This will create the decompressed database file (`it.db` in the example above).
3. **Install Wikilite:** Follow the instructions in the [Wikilite GitHub repository](https://github.com/eja/wikilite) to clone the repository and build the binary.
4. **Run Wikilite:** Navigate to the directory where you extracted the database and where you have the compiled `wikilite` binary. Use the `wikilite` command with the appropriate options. For example, to start the web interface for the Italian database, use:
```bash
./wikilite --db it.db --web
```
This will start a local web server allowing you to browse and search the Wikipedia content.
**Command-line Usage:** Alternatively, you can search the database directly from the command line:
```bash
./wikilite --db it.db --cli
```
5. **Access the Web Interface:** If you started the web server, open a web browser and navigate to `http://localhost:35248` to access the web interface.
## About Wikilite
[Wikilite](https://github.com/eja/wikilite) is a tool that provides offline access to Wikipedia content, featuring:
* **Fast and Flexible Lexical Searching:** Uses FTS5 (Full-Text Search 5) for efficient keyword-based searching.
* **Enhanced Semantic Search:** Integrates with Qdrant (optional) for semantic search capabilities, allowing you to find information based on meaning rather than just keywords.
* **Offline Access:** Enables access to Wikipedia articles without an internet connection.
* **Command-Line Interface (CLI):** Allows direct searching from the terminal.
* **Web Interface (Optional):** Provides a user-friendly way to browse and search content.
### Semantic Search Details
Wikilite leverages text embeddings for its optional semantic search. This allows you to find results even if your query does not match keywords directly, handling cases like:
* Typos in your search query.
* Different wordings to express the same concept.
* Articles that use synonyms or related terms.
**Note:** To enable semantic search, you'll need to have a running Qdrant instance and configure Wikilite accordingly. See the Wikilite GitHub repository for more details.
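For readers who want a feel for what the Qdrant side of such a setup looks like, here is a minimal, illustrative sketch of embedding a query and searching a vector collection with `qdrant-client` and `sentence-transformers`. The collection name, embedding model, and payload fields are hypothetical placeholders chosen for this example, not Wikilite's actual configuration; consult the Wikilite repository for how it builds and queries its collections.
```python
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

# Hypothetical names, for illustration only; Wikilite's real collection and
# embedding model may differ.
COLLECTION = "wikilite_articles"

model = SentenceTransformer("all-MiniLM-L6-v2")
client = QdrantClient(url="http://localhost:6333")

# Embed the query text and run a nearest-neighbour search over the collection.
query_vector = model.encode("volcanic islands in the Mediterranean").tolist()
hits = client.search(collection_name=COLLECTION, query_vector=query_vector, limit=5)

for hit in hits:
    # The payload fields depend on how the collection was built.
    print(hit.score, hit.payload)
```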
## Contributing
If you would like to contribute databases for additional languages, please feel free to submit a pull request.
## Acknowledgments
* [Wikipedia](https://www.wikipedia.org/): For providing the valuable data.
* [SQLite](https://www.sqlite.org/): For the robust database engine.
* [Qdrant](https://qdrant.tech/): For the high-performance vector database used in semantic search.
* [Wikilite](https://github.com/eja/wikilite): For making this project possible. |
BuiMinh/final | BuiMinh | "2024-12-25T23:58:41Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-25T23:54:34Z" | ---
dataset_info:
features:
- name: timestamp
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
- name: MA_20
dtype: float64
- name: MA_50
dtype: float64
- name: MA_200
dtype: float64
- name: RSI
dtype: float64
- name: '%K'
dtype: float64
- name: '%D'
dtype: float64
- name: ADX
dtype: float64
- name: ATR
dtype: float64
- name: BB_Upper
dtype: float64
- name: BB_Lower
dtype: float64
- name: MACD
dtype: float64
- name: Signal
dtype: float64
- name: Histogram
dtype: float64
- name: Trendline
dtype: float64
- name: NW_Upper
dtype: float64
- name: NW_Lower
dtype: float64
splits:
- name: train
num_bytes: 3969944
num_examples: 20152
download_size: 3857095
dataset_size: 3969944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
endomorphosis/LunarLake_Crash_MemDump | endomorphosis | "2024-12-25T23:57:33Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-25T23:56:45Z" | ---
license: apache-2.0
---
|
BuiMinh/finalfinal | BuiMinh | "2024-12-26T00:07:55Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:02:22Z" | ---
dataset_info:
features:
- name: timestamp
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
- name: MA_20
dtype: float64
- name: MA_50
dtype: float64
- name: MA_200
dtype: float64
- name: RSI
dtype: float64
- name: '%K'
dtype: float64
- name: '%D'
dtype: float64
- name: ADX
dtype: float64
- name: ATR
dtype: float64
- name: BB_Upper
dtype: float64
- name: BB_Lower
dtype: float64
- name: MACD
dtype: float64
- name: Signal
dtype: float64
- name: Histogram
dtype: float64
- name: Trendline
dtype: float64
- name: NW_Upper
dtype: float64
- name: NW_Lower
dtype: float64
splits:
- name: train
num_bytes: 2581685
num_examples: 13105
download_size: 2512825
dataset_size: 2581685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jean-Marie-METRO/Francais | Jean-Marie-METRO | "2024-12-26T00:17:31Z" | 0 | 0 | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2024-12-26T00:17:31Z" | ---
license: bigscience-bloom-rail-1.0
---
|
BuiMinh/finalfinal2 | BuiMinh | "2024-12-26T00:19:53Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:19:09Z" | ---
dataset_info:
features:
- name: timestamp
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
- name: MA_20
dtype: float64
- name: MA_50
dtype: float64
- name: MA_200
dtype: float64
- name: RSI
dtype: float64
- name: '%K'
dtype: float64
- name: '%D'
dtype: float64
- name: ADX
dtype: float64
- name: ATR
dtype: float64
- name: BB_Upper
dtype: float64
- name: BB_Lower
dtype: float64
- name: MACD
dtype: float64
- name: Signal
dtype: float64
- name: Histogram
dtype: float64
- name: Trendline
dtype: float64
- name: NW_Upper
dtype: float64
- name: NW_Lower
dtype: float64
splits:
- name: train
num_bytes: 2509358
num_examples: 13138
download_size: 2521200
dataset_size: 2509358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Math-steptok-mc-relabeled | Asap7772 | "2024-12-26T00:38:26Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:19:47Z" | ---
dataset_info:
features:
- name: rollout_responses
sequence: string
- name: rollout_answers
sequence: string
- name: rollout_grades
sequence: bool
- name: rollout_mc_value
dtype: float64
- name: responses
dtype: string
- name: answers
dtype: string
- name: grades
dtype: bool
- name: problem
dtype: string
- name: gt_answer
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: steps
dtype: string
- name: step_idx
dtype: int64
splits:
- name: train
num_bytes: 49046876782
num_examples: 2607411
- name: test
num_bytes: 2119863097
num_examples: 110431
download_size: 11351575514
dataset_size: 51166739879
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ysenarath/wikipedia-20240901-dpr | ysenarath | "2024-12-26T00:57:27Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:20:00Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: passage
dtype: string
splits:
- name: train
num_bytes: 44530490513
num_examples: 6874942
download_size: 25594678753
dataset_size: 44530490513
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZhangShenao/MATH_filtered | ZhangShenao | "2024-12-26T00:20:15Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:20:11Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 5983176.0608
num_examples: 7498
download_size: 2992430
dataset_size: 5983176.0608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BuiMinh/finalfinal3 | BuiMinh | "2024-12-26T00:23:52Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:23:31Z" | ---
dataset_info:
features:
- name: timestamp
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
- name: MA_20
dtype: float64
- name: MA_50
dtype: float64
- name: MA_200
dtype: float64
- name: RSI
dtype: float64
- name: '%K'
dtype: float64
- name: '%D'
dtype: float64
- name: ADX
dtype: float64
- name: ATR
dtype: float64
- name: BB_Upper
dtype: float64
- name: BB_Lower
dtype: float64
- name: MACD
dtype: float64
- name: Signal
dtype: float64
- name: Histogram
dtype: float64
- name: Trendline
dtype: float64
- name: NW_Upper
dtype: float64
- name: NW_Lower
dtype: float64
splits:
- name: train
num_bytes: 2510122
num_examples: 13142
download_size: 2521833
dataset_size: 2510122
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettavuw/D_gen5_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-26T00:30:14Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:30:10Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 775561
num_examples: 1000
download_size: 408931
dataset_size: 775561
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yilu1015/hub-test | yilu1015 | "2024-12-26T00:51:23Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:50:25Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: "metadata.csv"
---
 |
weaviate/kaggle-stroke-patients-with-description-embeddings | weaviate | "2024-12-26T00:51:02Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-26T00:50:26Z" | ---
license: apache-2.0
---
|
lyl472324464/point_50000 | lyl472324464 | "2024-12-26T01:02:10Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T00:54:30Z" | ---
dataset_info:
features:
- name: image_url
dtype: string
- name: image_sha256
dtype: string
- name: points
list:
- name: x
dtype: float64
- name: y
dtype: float64
- name: count
dtype: int64
- name: label
dtype: string
- name: collection_method
dtype: string
- name: image_path
dtype: string
- name: image_data
dtype: binary
splits:
- name: train
num_bytes: 6783690831
num_examples: 25340
download_size: 2146664500
dataset_size: 6783690831
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
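
Since `points` is a list of `{x, y}` structs and `image_data` is raw bytes, a minimal decoding sketch (assuming the default config):

```python
from datasets import load_dataset

# Stream one record rather than downloading the full ~2.1 GB archive.
ds = load_dataset("lyl472324464/point_50000", split="train", streaming=True)
row = next(iter(ds))

# Unpack the point annotations and report the image payload size.
coords = [(p["x"], p["y"]) for p in row["points"]]
print(row["label"], row["count"], len(coords), len(row["image_data"]))
```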
|
dgambettavuw/D_gen6_run2_llama2-7b_sciabs_doc1000_real96_synt32_vuw | dgambettavuw | "2024-12-26T01:09:35Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T01:09:32Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 765546
num_examples: 1000
download_size: 405638
dataset_size: 765546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
spiralworks/test_wildcard_ds | spiralworks | "2024-12-26T01:28:16Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T01:09:48Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: abstract
dtype: string
- name: year
dtype: string
- name: venue
dtype: string
- name: keywords
sequence: string
- name: pdf_url
dtype: string
- name: forum_url
dtype: string
- name: forum_raw_text
sequence: string
- name: reviews_raw_text
sequence: string
- name: average_rating
dtype: float64
- name: average_confidence
dtype: float64
- name: reviews
sequence: string
splits:
- name: train
num_bytes: 274765139
num_examples: 7613
download_size: 136725026
dataset_size: 274765139
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gupta-tanish/Ultrafeedback-Mistral-Instruct | gupta-tanish | "2024-12-26T01:17:12Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-26T01:16:04Z" | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: id
dtype: int64
- name: A0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A2
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A3
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A4
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A5
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A6
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A7
list:
- name: content
dtype: string
- name: role
dtype: string
- name: A8
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_A0
dtype: float64
- name: score_A1
dtype: float64
- name: score_A2
dtype: float64
- name: score_A3
dtype: float64
- name: score_A4
dtype: float64
- name: score_A5
dtype: float64
- name: score_A6
dtype: float64
- name: score_A7
dtype: float64
- name: score_A8
dtype: float64
splits:
- name: train_prefs
num_bytes: 1479450683
num_examples: 60931
- name: train_sft
num_bytes: 1479450683
num_examples: 60931
- name: test_prefs
num_bytes: 23828053
num_examples: 1000
- name: test_sft
num_bytes: 11745582
num_examples: 500
- name: train_gen
num_bytes: 1381249004
num_examples: 60931
- name: test_gen
num_bytes: 10942625
num_examples: 500
download_size: 2372194996
dataset_size: 4386666630
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
- split: train_sft
path: data/train_sft-*
- split: test_prefs
path: data/test_prefs-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
---
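
With six named splits in the default config, a minimal sketch of reading the preference data, assuming the standard `datasets` API:

```python
from datasets import load_dataset

# Stream the preference split; the SFT and generation splits follow the same pattern.
prefs = load_dataset(
    "gupta-tanish/Ultrafeedback-Mistral-Instruct",
    split="train_prefs",
    streaming=True,
)

# Each row carries nine candidate responses (A0..A8) with matching scalar scores.
row = next(iter(prefs))
best = max(range(9), key=lambda i: row[f"score_A{i}"])
print(row["prompt_id"], "best candidate:", f"A{best}", row[f"score_A{best}"])
```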
|
NOERROR/PERFECT_SFT_SARDINIAN_DATASET | NOERROR | "2024-12-26T01:18:08Z" | 0 | 0 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-12-26T01:18:08Z" | Invalid username or password. |
NOERROR/PERFECT_SARDINIAN_SFT_DATASET | NOERROR | "2024-12-26T01:22:03Z" | 0 | 0 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-12-26T01:19:10Z" | ---
license: cc-by-nc-sa-4.0
---
|
neonblurr/test | neonblurr | "2024-12-26T01:20:18Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-26T01:20:17Z" | ---
license: mit
---
|