datasetId | card
---|---|
DavidMOBrien/8000-java | ---
dataset_info:
features:
- name: before
dtype: string
- name: after
dtype: string
- name: repo
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 722488653.5318879
num_examples: 441596
- name: test
num_bytes: 90311899.73405604
num_examples: 55200
- name: valid
num_bytes: 90311899.73405604
num_examples: 55200
download_size: 323537982
dataset_size: 903112452.9999999
---
# Dataset Card for "8000-java"
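The split sizes declared in the frontmatter above imply a roughly 80/10/10 train/test/valid partition; a quick sanity check (example counts copied from the metadata):

```python
# Example counts taken from the dataset_info splits above.
splits = {"train": 441596, "test": 55200, "valid": 55200}
total = sum(splits.values())

fractions = {name: n / total for name, n in splits.items()}
print(fractions)  # roughly {'train': 0.80, 'test': 0.10, 'valid': 0.10}
```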
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reach-vb/mls-eng-10k-repunct-test-v7 | ---
dataset_info:
features:
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: transcript
dtype: string
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: book_id
dtype: string
- name: repunct_text
dtype: string
splits:
- name: dev
num_bytes: 2202552
num_examples: 3807
download_size: 1220861
dataset_size: 2202552
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
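The schema above pairs `begin_time`/`end_time` offsets with a separate `audio_duration` field. Assuming the duration is simply the difference of the two offsets (an assumption, not stated in the card), a row could be validated like this; all values in the mock row are hypothetical:

```python
# Mock row mirroring the declared features; every value is made up.
row = {
    "original_path": "dev/audio/0000/0000/sample.flac",  # hypothetical path
    "begin_time": 12.5,
    "end_time": 18.9,
    "transcript": "hello world",
    "audio_duration": 6.4,
    "speaker_id": "0000",
    "book_id": "0000",
    "repunct_text": "Hello, world.",
}

# Assumed invariant: duration == end - begin (within float tolerance).
assert abs((row["end_time"] - row["begin_time"]) - row["audio_duration"]) < 1e-6
```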
|
sled-umich/SDN | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
# DOROTHIE
## Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents
**[Research Paper](https://arxiv.org/abs/2210.12511) | [Github](https://github.com/sled-group/DOROTHIE) | [Huggingface](https://huggingface.co/datasets/sled-umich/DOROTHIE)**
Authored by [Ziqiao Ma](https://mars-tin.github.io/), Ben VanDerPloeg, Cristian-Paul Bara, [Yidong Huang](https://sled.eecs.umich.edu/author/yidong-huang/), Eui-In Kim, Felix Gervits, Matthew Marge, [Joyce Chai](https://web.eecs.umich.edu/~chaijy/)
DOROTHIE (Dialogue On the ROad To Handle Irregular Events) is an innovative interactive simulation platform designed to create unexpected scenarios on the fly. This tool facilitates empirical studies on situated communication with autonomous driving agents.
![DOROTHIE](media/DOROTHIE.jpg)
This dataset contains only the dialogue data. To see the whole simulation process and download the full dataset, please visit our [Github homepage](https://github.com/sled-group/DOROTHIE) |
baptistecolle/sam-controlnet-final-test | ---
dataset_info:
features:
- name: conditioning_image
dtype: image
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: cocoid
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 62596425.0
num_examples: 200
download_size: 62532095
dataset_size: 62596425.0
---
# Dataset Card for "sam-controlnet-final-test"
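The size fields in the frontmatter above imply each of the 200 records averages about 313 KB, and that Parquet adds almost no compression here (the payload is dominated by already-compressed images); a quick check with the numbers copied from the metadata:

```python
# Sizes copied from the dataset_info block above.
dataset_size = 62_596_425
download_size = 62_532_095
num_examples = 200

avg_bytes = dataset_size / num_examples      # ~313 KB per record
compression = download_size / dataset_size   # ~1.0: almost no further compression
print(round(avg_bytes), round(compression, 3))
```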
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quocanh34/result_with_w2v2_baseline_aug | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
- name: w2v2_baseline_transcription
dtype: string
- name: w2v2_baseline_norm
dtype: string
splits:
- name: train
num_bytes: 174371756.027
num_examples: 1299
download_size: 164200794
dataset_size: 174371756.027
---
# Dataset Card for "result_with_w2v2_baseline_aug"
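The `audio` feature above declares a 16 kHz sampling rate, so a decoded clip's sample count divided by 16 000 gives its duration in seconds. A sketch with a mock decoded row (the `datasets` Audio feature decodes to a dict carrying the sample array and sampling rate; the array here is synthetic silence, not real data):

```python
# Mock of a decoded audio cell: a sample array plus the sampling_rate
# declared in the card (16000). The samples are synthetic silence.
sampling_rate = 16_000
mock_audio = {
    "array": [0.0] * (sampling_rate * 3),  # 3 seconds of silence
    "sampling_rate": sampling_rate,
}

duration_s = len(mock_audio["array"]) / mock_audio["sampling_rate"]
print(duration_s)  # 3.0
```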
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Christa27/docvqa_mini_subset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: query
struct:
- name: de
dtype: string
- name: en
dtype: string
- name: es
dtype: string
- name: fr
dtype: string
- name: it
dtype: string
- name: answers
sequence: string
- name: words
sequence: string
- name: bounding_boxes
sequence:
sequence: float32
length: 4
- name: answer
struct:
- name: match_score
dtype: float64
- name: matched_text
dtype: string
- name: start
dtype: int64
- name: text
dtype: string
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 33133182.0
num_examples: 100
- name: test
num_bytes: 6103054.0
num_examples: 20
download_size: 0
dataset_size: 39236236.0
---
# Dataset Card for "docvqa_mini_subset"
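Per the schema above, `query` is a struct keyed by language code (`de`/`en`/`es`/`fr`/`it`), so per-language questions come out as plain dict access; a mock record mirroring the declared layout (all values hypothetical):

```python
# Mock record matching the declared features; every value is made up.
record = {
    "id": "doc-0001",
    "query": {
        "de": "Wie hoch ist der Gesamtbetrag?",
        "en": "What is the total amount?",
        "es": "Cual es el importe total?",
        "fr": "Quel est le montant total?",
        "it": "Qual e l'importo totale?",
    },
    "answers": ["$12.00"],
}

question = record["query"]["en"]  # pick one language's question
```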
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_hyunseoki__ko-en-llama2-13b | ---
pretty_name: Evaluation run of hyunseoki/ko-en-llama2-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hyunseoki__ko-en-llama2-13b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-27T07:23:26.353656](https://huggingface.co/datasets/open-llm-leaderboard/details_hyunseoki__ko-en-llama2-13b/blob/main/results_2023-10-27T07-23-26.353656.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.28114513422818793,\n\
\ \"em_stderr\": 0.004603896433799628,\n \"f1\": 0.3260591442953026,\n\
\ \"f1_stderr\": 0.004539391567050269,\n \"acc\": 0.3779028263381469,\n\
\ \"acc_stderr\": 0.007293885306168497\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.28114513422818793,\n \"em_stderr\": 0.004603896433799628,\n\
\ \"f1\": 0.3260591442953026,\n \"f1_stderr\": 0.004539391567050269\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0075815011372251705,\n \
\ \"acc_stderr\": 0.002389281512077218\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7482241515390686,\n \"acc_stderr\": 0.012198489100259776\n\
\ }\n}\n```"
repo_url: https://huggingface.co/hyunseoki/ko-en-llama2-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|arc:challenge|25_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_27T07_23_26.353656
path:
- '**/details_harness|drop|3_2023-10-27T07-23-26.353656.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-27T07-23-26.353656.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_27T07_23_26.353656
path:
- '**/details_harness|gsm8k|5_2023-10-27T07-23-26.353656.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-27T07-23-26.353656.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hellaswag|10_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-33-17.210034.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T07-33-17.210034.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T07-33-17.210034.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_27T07_23_26.353656
path:
- '**/details_harness|winogrande|5_2023-10-27T07-23-26.353656.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-27T07-23-26.353656.parquet'
- config_name: results
data_files:
- split: 2023_10_04T07_33_17.210034
path:
- results_2023-10-04T07-33-17.210034.parquet
- split: 2023_10_27T07_23_26.353656
path:
- results_2023-10-27T07-23-26.353656.parquet
- split: latest
path:
- results_2023-10-27T07-23-26.353656.parquet
---
# Dataset Card for Evaluation run of hyunseoki/ko-en-llama2-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/hyunseoki/ko-en-llama2-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_hyunseoki__ko-en-llama2-13b",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-27T07:23:26.353656](https://huggingface.co/datasets/open-llm-leaderboard/details_hyunseoki__ko-en-llama2-13b/blob/main/results_2023-10-27T07-23-26.353656.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.28114513422818793,
"em_stderr": 0.004603896433799628,
"f1": 0.3260591442953026,
"f1_stderr": 0.004539391567050269,
"acc": 0.3779028263381469,
"acc_stderr": 0.007293885306168497
},
"harness|drop|3": {
"em": 0.28114513422818793,
"em_stderr": 0.004603896433799628,
"f1": 0.3260591442953026,
"f1_stderr": 0.004539391567050269
},
"harness|gsm8k|5": {
"acc": 0.0075815011372251705,
"acc_stderr": 0.002389281512077218
},
"harness|winogrande|5": {
"acc": 0.7482241515390686,
"acc_stderr": 0.012198489100259776
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Nexdata/Multi-race_Driver_Behavior_Collection_Data | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Multi-race_Driver_Behavior_Collection_Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1075?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
304 People Multi-race - Driver Behavior Collection Data. The data covers multiple ages, multiple time periods and multiple races (Caucasian, Black, Indian). The driver behaviors include dangerous behavior, fatigue behavior and visual movement behavior. As for devices, binocular cameras with RGB and infrared channels were used. This data can be used for tasks such as driver behavior analysis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1075?source=Huggingface
### Supported Tasks and Leaderboards
face-detection, computer-vision, object-detection: The dataset can be used to train a model for face detection.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
dzjxzyd/rhea_uniprot_reaction_large | ---
license: apache-2.0
---
Each reaction is designated with three different enzymes.
PhaniManda/autotrain-data-test-token-classification | ---
language:
- en
task_categories:
- token-classification
---
# AutoTrain Dataset for project: test-token-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test-token-classification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"I",
"will",
"be",
"traveling",
"to",
"Tokyo",
"next",
"month."
],
"tags": [
13,
13,
13,
13,
13,
1,
0,
5
]
},
{
"tokens": [
"The",
"company",
"Apple",
"Inc.",
"is",
"based",
"in",
"California."
],
"tags": [
13,
13,
3,
9,
13,
13,
13,
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['B-DATE', 'B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-DATE', 'I-DATE,', 'I-LOC', 'I-MISC', 'I-ORG', 'I-ORG,', 'I-PER', 'I-PER,', 'O'], id=None), length=-1, id=None)"
}
```
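Since `tags` is a sequence of `ClassLabel` ids, the integers in a sample map back to label names in the order listed above. A minimal sketch of decoding them (the helper name is illustrative, not part of the dataset):

```python
# Label names copied verbatim from the ClassLabel feature above.
LABEL_NAMES = [
    "B-DATE", "B-LOC", "B-MISC", "B-ORG", "B-PER", "I-DATE", "I-DATE,",
    "I-LOC", "I-MISC", "I-ORG", "I-ORG,", "I-PER", "I-PER,", "O",
]

def decode_tags(tag_ids):
    """Map integer tag ids back to their string labels."""
    return [LABEL_NAMES[i] for i in tag_ids]

# Tags of the first sample instance above ("... Tokyo next month."):
print(decode_tags([13, 13, 13, 13, 13, 1, 0, 5]))
# ['O', 'O', 'O', 'O', 'O', 'B-LOC', 'B-DATE', 'I-DATE']
```

When the dataset is loaded with `datasets`, the same mapping is available through the `tags` feature's `ClassLabel.int2str` method.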
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 21 |
| valid | 9 |
|
Falah/2M_arabic_female_SDXL_refiner_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1050712157
num_examples: 2000000
download_size: 104487990
dataset_size: 1050712157
---
# Dataset Card for "2M_arabic_female_SDXL_refiner_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MikeGreen2710/aux_v1444_test_split | ---
dataset_info:
features:
- name: Word
dtype: string
- name: Tag
dtype: string
- name: 'Sentence #'
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 11741524
num_examples: 354320
download_size: 3837772
dataset_size: 11741524
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sebascorreia/audio-dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: audio_file
dtype: string
- name: slice
dtype: int16
splits:
- name: train
num_bytes: 65293798.5
num_examples: 1492
download_size: 0
dataset_size: 65293798.5
---
# Dataset Card for "audio-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jlbaker361/flickr_humans_mini | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: src
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 4043768.0
num_examples: 10
download_size: 4046080
dataset_size: 4043768.0
---
# Dataset Card for "flickr_humans_mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SimonSun/train_0.5M_CN_llama2 | ---
language:
- zh
license: openrail
size_categories:
- 100K<n<1M
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1853131088
num_examples: 519255
download_size: 489561814
dataset_size: 1853131088
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
saibo/wiki-nre | ---
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: int64
- name: triplets
list:
- name: object
struct:
- name: surfaceform
dtype: string
- name: uri
dtype: string
- name: predicate
struct:
- name: surfaceform
dtype: string
- name: uri
dtype: string
- name: subject
struct:
- name: surfaceform
dtype: string
- name: uri
dtype: string
- name: entities
list:
- name: surfaceform
dtype: string
- name: uri
dtype: string
- name: relations
list:
- name: surfaceform
dtype: string
- name: uri
dtype: string
- name: linearized_fully_expanded
dtype: string
- name: linearized_subject_collapsed
dtype: string
splits:
- name: train
num_bytes: 117206023
num_examples: 223538
- name: test
num_bytes: 15597162
num_examples: 29620
- name: stratified_test_1K
num_bytes: 608393
num_examples: 1000
- name: val
num_bytes: 522524
num_examples: 980
download_size: 61105204
dataset_size: 133934102
tags:
- wikipedia
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: stratified_test_1K
path: data/stratified_test_1K-*
- split: val
path: data/val-*
---
# Dataset Card for "wiki-nre"
## Features
The Wiki-NRE dataset displays a significant skew in its relation distribution: the top 10 relations constitute 92\% of the triplets, with the top 3 alone accounting for 69\%.
We have created `stratified_test_1K`, which was downsampled from the test set to 1,000 samples with a balanced distribution of relations.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fce0cfeb3dbf216ad31836a/G5niCayvz28i_-O3TKYHf.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fce0cfeb3dbf216ad31836a/tm7VEUM1DHwsll4JwixWn.png)
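The balancing behind `stratified_test_1K` can be approximated by capping each relation's share of the subset. The sketch below illustrates that idea on toy data; it is an assumption about the approach, not the exact procedure used to build the split:

```python
import random
from collections import Counter

def stratified_downsample(examples, get_relation, target_size, seed=0):
    """Cap per-relation counts so no single relation dominates the subset."""
    rng = random.Random(seed)
    by_relation = {}
    for ex in examples:
        by_relation.setdefault(get_relation(ex), []).append(ex)
    cap = max(1, target_size // len(by_relation))
    subset = []
    for rel in sorted(by_relation):
        group = by_relation[rel]
        rng.shuffle(group)          # pick a random sample of each relation
        subset.extend(group[:cap])  # but never more than the per-relation cap
    return subset[:target_size]

# Toy corpus with the kind of skew described above: one relation dominates.
toy = [{"rel": "P31"}] * 90 + [{"rel": "P17"}] * 9 + [{"rel": "P50"}] * 1
subset = stratified_downsample(toy, lambda ex: ex["rel"], target_size=12)
print(Counter(ex["rel"] for ex in subset))
```

The dominant relation is cut down to the same cap as the others, which flattens the distribution shown in the plots above.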
## Catalog [Optional]
A corresponding catalog (a list of a subset of entities and relations) can be found here: https://huggingface.co/datasets/saibo/wikinre_catalog
## Source
```bibtex
@inproceedings{trisedya-etal-2019-neural,
title = "Neural Relation Extraction for Knowledge Base Enrichment",
author = "Trisedya, Bayu Distiawan and
Weikum, Gerhard and
Qi, Jianzhong and
Zhang, Rui",
editor = "Korhonen, Anna and
Traum, David and
M{\`a}rquez, Llu{\'\i}s",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1023",
doi = "10.18653/v1/P19-1023",
pages = "229--240",
abstract = "We study relation extraction for knowledge base (KB) enrichment. Specifically, we aim to extract entities and their relationships from sentences in the form of triples and map the elements of the extracted triples to an existing KB in an end-to-end manner. Previous studies focus on the extraction itself and rely on Named Entity Disambiguation (NED) to map triples into the KB space. This way, NED errors may cause extraction errors that affect the overall precision and recall. To address this problem, we propose an end-to-end relation extraction model for KB enrichment based on a neural encoder-decoder model. We collect high-quality training data by distant supervision with co-reference resolution and paraphrase detection. We propose an n-gram based attention model that captures multi-word entity names in a sentence. Our model employs jointly learned word and entity embeddings to support named entity disambiguation. Finally, our model uses a modified beam search and a triple classifier to help generate high-quality triples. Our model outperforms state-of-the-art baselines by 15.51{\%} and 8.38{\%} in terms of F1 score on two real-world datasets.",
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anderson6161/panelinha | ---
license: openrail
---
|
naorm/all-captions-blip2-quant | ---
dataset_info:
features:
- name: image
dtype: image
- name: hf-blip2-16bit
dtype: string
- name: hf-blip2-8bit
dtype: string
- name: hf-blip2-coco-16bit
dtype: string
- name: hf-blip2-coco-8bit
dtype: string
splits:
- name: train
num_bytes: 812674518.0
num_examples: 5000
download_size: 813755578
dataset_size: 812674518.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
carolmou/random-sentences | ---
dataset_info:
features:
- name: wrong_text
dtype: string
- name: correct_text
dtype: string
splits:
- name: train
num_bytes: 16484766
num_examples: 231224
- name: test
num_bytes: 3014605
num_examples: 39634
download_size: 16373149
dataset_size: 19499371
---
# Dataset Card for "random-sentences"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pharaouk/biology_dataset_standardized_cluster_16 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology_dataset_standardized_cluster_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dendory/tarot | ---
language:
- en
pretty_name: "Tarot cards readings"
tags:
- ChatGPT
- Tarot
license: mit
task_categories:
- question-answering
- text-generation
---
This is a dataset of 5,770 high-quality tarot card readings produced by ChatGPT from 3 randomly drawn cards. It can be used to train smaller models for use in a tarot application.
The prompt used to produce these readings was:
> Give me a one paragraph tarot reading if I pull the cards CARD1, CARD2 and CARD3.\n\nReading:\n
The CSV dataset contains the following columns: *Card 1*, *Card 2*, *Card 3*, *Reading*
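A minimal sketch of rebuilding that prompt for one row of the CSV (the card names below are illustrative):

```python
def make_prompt(card1: str, card2: str, card3: str) -> str:
    """Fill the reading prompt template used to generate the dataset."""
    return (
        f"Give me a one paragraph tarot reading if I pull the cards "
        f"{card1}, {card2} and {card3}.\n\nReading:\n"
    )

prompt = make_prompt("The Fool", "The Tower", "Ten of Cups")
print(prompt)
```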
There are also 2 Python scripts included:
* make_dataset.py: This file was used to create the dataset using the ChatGPT API.
* train_dataset.py: This file can be used as an example on how to train a base model using the dataset. |
result-kand2-sdxl-wuerst-karlo/00dbfb2c | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 240
num_examples: 10
download_size: 1450
dataset_size: 240
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "00dbfb2c"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edbeeching/prj_gia_dataset_atari_2B_atari_breakout_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning environment for the atari_breakout environment, with samples from the policy atari_2B_atari_breakout_1111.
This environment was created as part of the Generally Intelligent Agents project gia: https://github.com/huggingface/gia
|
diwank/goat-deduped | ---
dataset_info:
features:
- name: output
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: signature
dtype: string
splits:
- name: train
num_bytes: 740545
num_examples: 6652
download_size: 0
dataset_size: 740545
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "goat-deduped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
somosnlp/spa_climate_detection | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- es
pretty_name: Pa
tags:
- climate
---
## Summary:
This dataset is a fusion of several open-source sources, including:
- A Spanish translation of the dataset: https://huggingface.co/datasets/climatebert/climate_detection
- Spanish-language news on topics unrelated to climate change: https://www.kaggle.com/datasets/kevinmorgado/spanish-news-classification
From this dataset we kept the news column and the topics Macroeconomía, Innovación, Regulaciones, Alianzas and Reputación, which were labeled (0).
The dataset also included the topic Sustentabilidad, but it was removed (we only needed unrelated texts).
- A translation of opinions related to climate change: https://data.world/crowdflower/sentiment-of-climate-change
In this dataset every opinion is related to climate change, so they were all labeled (1). The data was cleaned by removing hashtags, usernames and emojis so that only the textual content of the tweets is used.
- A translation of news tweets unrelated to climate change: https://www.kaggle.com/datasets/muhammadmemoon/los-angeles-twitter-news-dataset
In this dataset the news items are categorized and short (like the opinions); every text is unrelated to climate change, so they were labeled (0). The data was cleaned by removing hashtags, usernames and emojis so that only the textual content of the tweets is used. This dataset was chosen to balance the amount of related text and to include short unrelated texts in the training data.
### Tasks it can be used for:
Binary classification of paragraphs related to climate change or sustainability.
## Dataset structure:
- **question:** Text.
- **answer:** Binary label: (1) if the text is related to climate change or sustainability, (0) if it is not.
- **dominio:** Identifies which topic the text relates to. There are 3 types: "cambio_climatico_reportes", "prensa_miscelaneo" and "cambio_climatico". "cambio_climatico_reportes" refers to paragraphs about climate change extracted from corporate annual reports; "prensa_miscelaneo" covers paragraphs on diverse topics extracted from the press; "cambio_climatico" covers all paragraphs on the topic that have no special source.
- **Pais de origen:** Where the data comes from geographically. We include 3 categories: "global", "España" and "USA". "global" covers data taken from sources that do not state a specific origin, but which we know were taken from data repositories with sources from any country.
- **Idioma:** Geographic variety of the Spanish used. We use 2 values, "es_pe" and "es_esp", because much of the data had to be translated from English into Spanish; annotations use the regional variety of the team that collaborated on the translation.
- **Registro:** Functional variety of the language. This dataset distinguishes 3 types, "culto", "medio" and "coloquial", depending on the origin of the data.
- **Tarea:** Identifies the purpose the input example is intended for.
- **Periodo:** The period in which the language used is situated. This dataset uses contemporary language.
### Example instance:
```
{
'question': 'En enero de 2020, se introdujo en Australia un nuevo método de estimación para notificar las emisiones de gas no contabilizadas (UAFG) resultantes de las actividades de distribución de gas natural. Este método permite utilizar valores de UAFG específicos de cada emplazamiento/red, lo que nos permite traducir las actividades de mantenimiento y sustitución de la red en reducciones notificables de las emisiones de UAFG.',
'answer': 1
'dominio':'cambio_climatico_reportes'
'país_de_origen':'global'
'idioma':'es_pe'
'registro':'culto'
'tarea':'clasificacion'
'periodo':'actual'
}
```
### It is split into:
- train:
| Count | Label | % |
|----------|----------|----------|
| 1600 | 1 | 55% |
| 1300 | 0 | 45% |
- test:
| Count | Label | % |
|----------|----------|----------|
| 480 | 1 | 62% |
| 300 | 0 | 38% |
|
wellecks/naturalproofs-gen | ---
license: mit
tags:
- math
- theorem-proving
---
## Dataset Description
- **Repository:** [wellecks/naturalprover](https://github.com/wellecks/naturalprover)
- **Paper:** [NaturalProver: Grounded Mathematical Proof Generation with Language Models](https://openreview.net/pdf?id=rhdfTOiXBng)
- **Point of Contact:** [Sean Welleck](https://wellecks.com/)
# Naturalproofs-gen
This dataset contains the `Naturalproofs-gen` corpus from:
[NaturalProver: Grounded Mathematical Proof Generation with Language Models](https://arxiv.org/pdf/2205.12910.pdf)\
Sean Welleck\*, Jiacheng Liu\*, Ximing Lu, Hannaneh Hajishirzi, Yejin Choi\
NeurIPS 2022
### Licensing Information
MIT
### Citation Information
Please cite:
```
@inproceedings{welleck2022naturalprover,
title={NaturalProver: Grounded Mathematical Proof Generation with Language Models},
author={Sean Welleck and Jiacheng Liu and Ximing Lu and Hannaneh Hajishirzi and Yejin Choi},
booktitle={Advances in Neural Information Processing Systems},
editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
year={2022},
url={https://openreview.net/forum?id=rhdfTOiXBng}
}
```
Naturalproofs-gen was built from the Naturalproofs corpus:
```
@inproceedings{welleck2021naturalproofs,
title={NaturalProofs: Mathematical Theorem Proving in Natural Language},
author={Sean Welleck and Jiacheng Liu and Ronan Le Bras and Hannaneh Hajishirzi and Yejin Choi and Kyunghyun Cho},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
year={2021},
url={https://openreview.net/forum?id=Jvxa8adr3iY}
}
``` |
annyorange/colorized_people-dataset | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: colorized_image
dtype: image
splits:
- name: train
num_bytes: 35880418.0
num_examples: 766
download_size: 35928923
dataset_size: 35880418.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "colorized_people-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
deepghs/anime_portrait | ---
license: openrail
task_categories:
- image-classification
tags:
- art
- not-for-all-audiences
size_categories:
- 10K<n<100K
--- |
bigscience-data/roots_vi_wiktionary | ---
language: vi
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
|
MinnaCatpp15/Kai | ---
language:
- ja
- en
- th
tags:
- music
pretty_name: kaikun
size_categories:
- 1K<n<10K
--- |
CyberHarem/mai_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mai (Pokémon)
This is the dataset of mai (Pokémon), containing 114 images and their tags.
The core tags of this character are `black_hair, short_hair, breasts, blue_eyes, hair_ornament, mole, mole_under_mouth, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 114 | 111.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 114 | 66.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 247 | 129.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 114 | 101.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 247 | 182.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mai_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, hair_bow, smile, closed_mouth, pantyhose, pokemon_(creature), white_bow, blush, gothic_lolita, solo, black_dress, detached_sleeves, eyelashes, looking_at_viewer, simple_background |
| 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | looking_at_viewer, 1girl, smile, solo, blush, closed_mouth, hood, jacket |
| 2 | 19 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, hetero, pussy, sex, open_mouth, penis, 1boy, vaginal, nipples, spread_legs, tongue, cum, mosaic_censoring, nude, pantyhose, torn_clothes, uncensored |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1boy, 1girl, hetero, penis, blush, fellatio, solo_focus, cum_in_mouth, jacket, censored, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hair_bow | smile | closed_mouth | pantyhose | pokemon_(creature) | white_bow | blush | gothic_lolita | solo | black_dress | detached_sleeves | eyelashes | looking_at_viewer | simple_background | hood | jacket | hetero | pussy | sex | open_mouth | penis | 1boy | vaginal | nipples | spread_legs | tongue | cum | mosaic_censoring | nude | torn_clothes | uncensored | fellatio | solo_focus | cum_in_mouth | censored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:---------------|:------------|:---------------------|:------------|:--------|:----------------|:-------|:--------------|:-------------------|:------------|:--------------------|:--------------------|:-------|:---------|:---------|:--------|:------|:-------------|:--------|:-------|:----------|:----------|:--------------|:---------|:------|:-------------------|:-------|:---------------|:-------------|:-----------|:-------------|:---------------|:-----------|
| 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | | | | X | | X | | | | X | | X | X | | | | | | | | | | | | | | | | | | | |
| 2 | 19 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | X | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | | | | X | | | | | | X | | | X | X | | | | X | X | | | | | | | | | | X | X | X | X |
|
ctu-aic/csfever_v2 | ---
license: cc-by-sa-3.0
task_categories:
- text-classification
- text-retrieval
task_ids:
- natural-language-inference
- document-retrieval
language:
- cs
tags:
- Fact-checking
pretty_name: CsFEVERv2
multilinguality: monolingual
source_datasets: fever
size_categories:
- 100K<n<1M
---
# Dataset Card for "CsFEVERv2"
## Dataset Description
CsFEVERv2 is a dataset for Czech fact-checking developed as part of a bachelor's thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering of
the Czech Technical University in Prague. The dataset consists of an **original** subset, which is an iteration of CsFEVER with new data and better processing, and of
**f1**, **precision**, and **07** subsets filtered using an NLI model and optimized threshold values. The **wiki_pages** subset is a processed Wikipedia dump from
August 2022 with correct revids; it should be used to map evidence from the other subsets to Wikipedia texts. Additionally, preprocessed subsets **original_nli**, **f1_nli**, **precision_nli**, and **07_nli**
for training NLI models are included.
The original subset can be used to generate other filtered datasets by filtering with other thresholds using the predicted_label and predicted_score fields.
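As a sketch of what such custom filtering could look like (plain Python over rows for illustration; the threshold value and the exact criterion behind the released **f1**/**precision**/**07** subsets are assumptions here):

```python
def filter_by_confidence(rows, threshold):
    """Keep rows whose NLI-predicted label agrees with the gold label
    and whose prediction confidence reaches `threshold`.
    (Illustrative criterion -- the released subsets were produced with
    thresholds optimized separately.)"""
    return [
        row for row in rows
        if row["predicted_label"] == row["label"]
        and row["predicted_score"] >= threshold
    ]

sample = [
    {"id": 1, "label": "SUPPORTS", "predicted_label": "SUPPORTS", "predicted_score": 0.92},
    {"id": 2, "label": "REFUTES",  "predicted_label": "SUPPORTS", "predicted_score": 0.88},
    {"id": 3, "label": "SUPPORTS", "predicted_label": "SUPPORTS", "predicted_score": 0.55},
]

kept = filter_by_confidence(sample, threshold=0.7)
print([row["id"] for row in kept])  # [1]
```

The same predicate can be passed to `datasets.Dataset.filter` when working with the loaded subset.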
### Languages
Czech
## Dataset Usage Example
```python
from datasets import load_dataset
#load default (original) subset
dataset = load_dataset("ctu-aic/csfever_v2")
dataset = load_dataset("ctu-aic/csfever_v2", "original")
#load f1, f1_nli, precision, precision_nli, 07, and 07_nli subsets
dataset = load_dataset("ctu-aic/csfever_v2", "f1")
#load wiki_pages subset
dataset = load_dataset("ctu-aic/csfever_v2", "wiki_pages")
```
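Since `evidence` stores Wikipedia page titles, evidence can be resolved to article texts through a title-to-text index built from the **wiki_pages** subset. A minimal sketch over plain dictionaries; treating the last element of each evidence entry as the page title is an assumption based on the examples below:

```python
def resolve_evidence(evidence, title_to_text):
    """Map one claim's evidence entries to Wikipedia article texts.
    Uses the last element of each entry as the page title (an assumption
    based on the card's examples); missing pages resolve to None."""
    titles = [entry[-1] for entry in evidence]
    return {title: title_to_text.get(title) for title in titles}

# tiny stand-in for a title -> text index built from the wiki_pages subset
title_to_text = {"Altruismus": "Altruismus (z lat. alter, druhy) je moderni ..."}

resolved = resolve_evidence(
    [["Altruismus", "Altruismus"], ["Chybejici stranka", "Chybejici stranka"]],
    title_to_text,
)
```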
## Dataset Structure
### Data Instances
#### original
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'predicted_label': 'SUPPORTS',
 'predicted_score': 0.921731,
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### f1, precision, 07
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### original_nli, f1_nli, precision_nli, 07_nli
An example of 'train' looks as follows.
```json
{'id': 155439,
'label': 2,
'claim': 'Newcastle United FC vyhrál pět ligových titulů.',
'evidence': "Ronnie Simpson. Ronnie Simpson (21. října 1930, Glasgow – 19. dubna 2004, Edinburgh) byl skotský fotbalový brankář..."}
```
#### wiki_pages
An example of 'wiki_pages' looks as follows.
```json
{'id': 80916,
'revid': 20561555,
'url': "https://cs.wikipedia.org/wiki?curid=80916",
'title': "Altruismus",
 'text': "Altruismus (z lat. \"alter\", druhý, 3. pád \"altrui\", druhému) je moderní ..."}
```
### Data Fields
#### original
- `id`: an `int32` feature.
- `label`: a `string` feature.
- `predicted_label`: a `string` feature (label predicted by the NLI model).
- `predicted_score`: a `float32` feature (confidence of `predicted_label` predicted by the NLI model).
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### f1, precision, 07
- `id`: an `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### original_nli, f1_nli, precision_nli, 07_nli
- `id`: an `int32` feature.
- `label`: an `int32` feature.
- `claim`: a `string` feature.
- `evidence`: a `string` feature.
#### wiki_pages
- `id`: an `int32` feature.
- `revid`: an `int32` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
#### original
| | train | dev | test |
|----------|-------:|-----:|------:|
| original | 118950 | 7458 | 7520 |
#### f1
| | train | dev | test |
|----|------:|-----:|-----:|
| f1 | 83438 | 5445 | 5328 |
#### precision
| | train | dev | test |
|-----------|-------:|-----:|------:|
| precision | 60828 | 4288 | 4236 |
#### 07
| | train | dev | test |
|----|-------:|-----:|------:|
| 07 | 108607 | 6685 | 6623 |
#### wiki_pages
| | wiki_pages |
|------------|-----------:|
| wiki_pages | 825078 |
# Citation
```bibtex
@article{Ullrich_2023,
doi = {10.1007/s10579-023-09654-3},
url = {https://doi.org/10.1007%2Fs10579-023-09654-3},
year = 2023,
month = {may},
publisher = {Springer Science and Business Media {LLC}},
author = {Herbert Ullrich and Jan Drchal and Martin Rýpar and Hana Vincourová and Václav Moravec},
title = {{CsFEVER} and {CTKFacts}: acquiring Czech data for fact verification},
journal = {Language Resources and Evaluation},
archivePrefix={arXiv},
eprint={2201.11115},
}
```
```bibtex
@thesis{Mlynar_2023,
author = {Mlynář, Tomáš},
  type = {Bachelor's Thesis},
title = {Automated Fact Checking Based on Czech Wikipedia},
institution = {Czech Technical University in Prague, Faculty of Electrical Engineering},
date = {2023},
url = {http://hdl.handle.net/10467/109219}
}
```
|
mete12e3/fert | ---
license: bigscience-openrail-m
---
|
CyberHarem/manticore_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of manticore/マンティコア/狮蝎 (Arknights)
This is the dataset of manticore/マンティコア/狮蝎 (Arknights), containing 294 images and their tags.
The core tags of this character are `head_wings, long_hair, wings, purple_hair, pointy_ears, purple_eyes, hair_ornament, tail, twintails, breasts, hairclip, pink_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 294 | 550.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manticore_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 294 | 450.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manticore_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 777 | 911.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/manticore_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/manticore_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 40 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, official_alternate_costume, solo, veil, long_sleeves, very_long_hair, looking_at_viewer, twin_braids, black_leotard, bodystocking, simple_background, white_background, black_footwear, thigh_boots, black_rose, hair_flower, closed_mouth, blue_nails, nail_polish |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_nails, long_sleeves, looking_at_viewer, nail_polish, official_alternate_costume, solo, upper_body, aqua_nails, black_jacket, scarf, blunt_bangs, hands_up, jewelry, parted_lips, pink_hair, simple_background, white_background |
| 2 | 23 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, long_sleeves, official_alternate_costume, solo, looking_at_viewer, necklace, black_pantyhose, black_jacket, black_dress, nail_polish, sitting, blue_nails, scarf, open_jacket, black_footwear |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_jacket, simple_background, white_background, white_shirt, fur-trimmed_hood, fur-trimmed_jacket, hooded_jacket, long_sleeves, looking_at_viewer, solo, upper_body, blush, hood_down, nail_polish, open_jacket, dog_tags, blue_nails, cleavage, necklace |
| 4 | 26 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, bandaged_leg, solo, black_shorts, fur_trim, long_sleeves, looking_at_viewer, black_footwear, black_jacket, white_shirt, short_shorts, simple_background, boots, white_background, dog_tags, midriff, navel, full_body, white_socks, crop_top, hooded_jacket, sitting, hood_down, open_jacket |
| 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, belt, black_jacket, black_shorts, cowboy_shot, fur_trim, long_sleeves, midriff, navel, short_shorts, solo, white_shirt, crop_top, hood_down, looking_at_viewer, open_clothes, white_background, parted_lips, simple_background |
| 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, belt, black_jacket, black_shorts, long_sleeves, looking_at_viewer, midriff, navel, short_shorts, solo, stomach, thighs, white_shirt, crop_top_overhang, dog_tags, bandaged_leg, cowboy_shot, fur_trim, medium_breasts, open_jacket, standing, hood_down, hooded_jacket, necklace, simple_background, groin, white_background |
| 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1boy, 1girl, blush, hetero, nipples, solo_focus, fur-trimmed_jacket, licking_penis, paizuri, tongue_out, alternate_breast_size, erection, huge_breasts, mosaic_censoring, open_clothes, open_mouth, saliva, shirt_lift, sweat, white_shirt, bar_censor, bare_shoulders, black_background, black_jacket, breasts_out, cum_on_breasts, dog_tags, ejaculation, grey_background, hooded_coat, large_penis, long_sleeves, looking_at_viewer, nail_polish, pov, simple_background, tank_top |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | official_alternate_costume | solo | veil | long_sleeves | very_long_hair | looking_at_viewer | twin_braids | black_leotard | bodystocking | simple_background | white_background | black_footwear | thigh_boots | black_rose | hair_flower | closed_mouth | blue_nails | nail_polish | upper_body | aqua_nails | black_jacket | scarf | blunt_bangs | hands_up | jewelry | parted_lips | pink_hair | necklace | black_pantyhose | black_dress | sitting | open_jacket | white_shirt | fur-trimmed_hood | fur-trimmed_jacket | hooded_jacket | blush | hood_down | dog_tags | cleavage | bandaged_leg | black_shorts | fur_trim | short_shorts | boots | midriff | navel | full_body | white_socks | crop_top | belt | cowboy_shot | open_clothes | stomach | thighs | crop_top_overhang | medium_breasts | standing | groin | 1boy | hetero | nipples | solo_focus | licking_penis | paizuri | tongue_out | alternate_breast_size | erection | huge_breasts | mosaic_censoring | open_mouth | saliva | shirt_lift | sweat | bar_censor | bare_shoulders | black_background | breasts_out | cum_on_breasts | ejaculation | grey_background | hooded_coat | large_penis | pov | tank_top |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------------------|:-------|:-------|:---------------|:-----------------|:--------------------|:--------------|:----------------|:---------------|:--------------------|:-------------------|:-----------------|:--------------|:-------------|:--------------|:---------------|:-------------|:--------------|:-------------|:-------------|:---------------|:--------|:--------------|:-----------|:----------|:--------------|:------------|:-----------|:------------------|:--------------|:----------|:--------------|:--------------|:-------------------|:---------------------|:----------------|:--------|:------------|:-----------|:-----------|:---------------|:---------------|:-----------|:---------------|:--------|:----------|:--------|:------------|:--------------|:-----------|:-------|:--------------|:---------------|:----------|:---------|:--------------------|:-----------------|:-----------|:--------|:-------|:---------|:----------|:-------------|:----------------|:----------|:-------------|:------------------------|:-----------|:---------------|:-------------------|:-------------|:---------|:-------------|:--------|:-------------|:-----------------|:-------------------|:--------------|:-----------------|:--------------|:------------------|:--------------|:--------------|:------|:-----------|
| 0 | 40 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | | X | | | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 23 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | X | | | | | | X | | | | | X | X | | | X | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | | X | | X | | | | X | X | | | | | | X | X | X | | X | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 26 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | | X | | X | | | | X | X | X | | | | | | | | | X | | | | | | | | | | X | X | X | | | X | | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | | X | | X | | | | X | X | | | | | | | | | | X | | | | | X | | | | | | | X | | | | | X | | | | X | X | X | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | X | | X | | X | | | | X | X | | | | | | | | | | X | | | | | | | X | | | | X | X | | | X | | X | X | | X | X | X | X | | X | X | | | | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | X | | X | | | | X | | | | | | | | X | | | X | | | | | | | | | | | | X | | X | | X | | X | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Crystalcareai/CodeFeedback-Alpaca | ---
license: apache-2.0
---
|
shishir-dwi/News-Article-Categorization_IAB | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- en
tags:
- news articles
- IAB categories
- dataset
- articles
- IAB
pretty_name: IAB categorization Dataset
size_categories:
- 100K<n<1M
---
# Article and Category Dataset
![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)
## Overview
This dataset contains a collection of articles, primarily news articles, along with their respective IAB (Interactive Advertising Bureau) categories. It can be a valuable resource for various natural language processing (NLP) tasks, including text classification, text generation, and more.
## Dataset Information
- **Number of Samples:** 871,909
- **Number of Categories:** 26
### Column Information
- **text:** The text of the article.
- **target:** The IAB category label corresponding to the article.
## IAB Categories
The Interactive Advertising Bureau (IAB) categories are a standardized taxonomy used in the advertising industry to categorize digital advertising content. These categories help advertisers and marketers target their audience more effectively. Each category is represented by a label or code that indicates the content's topic or theme.
## Potential Use Cases
- **Text Classification:** Use this dataset to train and evaluate text classification models to predict IAB categories for articles.
- **Text Generation:** Utilize the articles in this dataset as a source for text generation tasks, such as generating news headlines or summaries.
- **Topic Modeling:** Explore the dataset to discover underlying topics and themes in the articles.
- **Information Retrieval:** Build search engines or recommendation systems that use article content and categories to retrieve relevant articles for users.
## Data Format
The dataset is provided in a standard tabular format with two columns: "text" and "target". You can easily load and manipulate the data using popular data manipulation libraries such as pandas in Python.
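As a sketch, the two-column layout can be read with the standard library alone (the sample rows here are illustrative; only the column names are taken from the card):

```python
import csv
import io

# stand-in for a slice of the dataset file with the card's two columns
raw = io.StringIO(
    "text,target\n"
    "Stocks rallied after the earnings report.,Business\n"
    "The team clinched the title in overtime.,Sports\n"
)

rows = list(csv.DictReader(raw))
labels = sorted({row["target"] for row in rows})
print(labels)  # ['Business', 'Sports']
```

With pandas, the equivalent is a single `pd.read_csv` call followed by `df["target"].unique()`.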
## License
This dataset is available under the [Apache 2.0 License](LICENSE.md). Please review the license before using the dataset for any purpose.
|
MJFMBR/MJ | ---
license: openrail
---
|
derenrich/wikidata-enwiki-categories-and-statements | ---
language:
- en
license: cc-by-3.0
size_categories:
- 1M<n<10M
task_categories:
- text-classification
pretty_name: Predict Wikidata Type From Enwiki Categories
tags:
- wikidata
- wikipedia
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: qid
dtype: string
- name: relation
dtype: string
- name: target_qid
dtype: string
- name: relation_id
dtype: int64
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1725928619.6360972
num_examples: 6534445
- name: test
num_bytes: 191769993.36390287
num_examples: 726050
download_size: 1003767773
dataset_size: 1917698613.0
---
|
ovior/twitter_dataset_1713037973 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2231039
num_examples: 6946
download_size: 1254975
dataset_size: 2231039
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
oliverjthomas2000/finetune | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8756
num_examples: 199
download_size: 1363
dataset_size: 8756
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
san457/my_dataset | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 79302267.0
num_examples: 3
download_size: 77773397
dataset_size: 79302267.0
---
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bdsaglam/web_nlg-erx-sft-sharegpt | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 17245615
num_examples: 35426
- name: dev
num_bytes: 2177164
num_examples: 4464
- name: test
num_bytes: 3803957
num_examples: 7305
download_size: 2699280
dataset_size: 23226736
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
|
TheGreatRambler/mm2_user_posted | ---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- other
- object-detection
- text-retrieval
- token-classification
- text-generation
task_ids: []
pretty_name: Mario Maker 2 user uploaded
tags:
- text-mining
---
# Mario Maker 2 user uploaded
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user uploaded dataset consists of 26.5 million uploaded user levels from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.
### How to use it
The Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:
```python
from datasets import load_dataset
ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'pid': '10491033288855085861',
'data_id': 27359486
}
```
Each row is a unique uploaded level, identified by its `data_id`, together with the `pid` of the player who uploaded it.
You can also download the full dataset. Note that this will download ~215MB:
```python
ds = load_dataset("TheGreatRambler/mm2_user_posted", split="train")
```
## Data Structure
### Data Instances
```python
{
'pid': '10491033288855085861',
'data_id': 27359486
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user uploaded|
### Data Splits
The dataset only contains a train split.
<!-- TODO create detailed statistics -->
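Because each row is just a (`pid`, `data_id`) pair, simple aggregate statistics such as uploads per player fall out of a single pass. A sketch over a few rows in the dataset's format (streaming the full dataset works the same way):

```python
from collections import Counter

# illustrative rows in the dataset's schema
rows = [
    {"pid": "10491033288855085861", "data_id": 27359486},
    {"pid": "10491033288855085861", "data_id": 27359501},
    {"pid": "556677",               "data_id": 27359502},
]

uploads_per_player = Counter(row["pid"] for row in rows)
top_pid, top_count = uploads_per_player.most_common(1)[0]
print(top_pid, top_count)  # 10491033288855085861 2
```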
## Dataset Creation
The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.
## Considerations for Using the Data
The dataset contains no harmful language or depictions.
|
BEE-spoke-data/coedit-reworded-deduped | ---
license: apache-2.0
dataset_info:
- config_name: dedup-by-target
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23629242
num_examples: 79943
download_size: 11836738
dataset_size: 23629242
- config_name: dedup-input
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 23457166
num_examples: 79293
download_size: 11795306
dataset_size: 23457166
- config_name: default
features:
- name: task
dtype: string
- name: id
dtype: string
- name: original_instruction
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: update_type
dtype: string
splits:
- name: train
num_bytes: 25021311
num_examples: 79943
download_size: 11862526
dataset_size: 25021311
configs:
- config_name: dedup-by-target
data_files:
- split: train
path: dedup-by-target/train-*
- config_name: dedup-input
data_files:
- split: train
path: dedup-input/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
source_datasets: chargoddard/coedit-reworded
---
# BEE-spoke-data/coedit-reworded-deduped
Minhash deduplication on the `target` column. Source data from [coedit-reworded](https://hf.co/chargoddard/coedit-reworded)
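The exact deduplication pipeline isn't published here; below is a hedged, self-contained sketch of the general MinHash near-duplicate technique applied to a list of target strings. The shingle size, permutation count, and similarity threshold are illustrative assumptions, not the values used for this dataset, and real pipelines typically add LSH banding instead of pairwise comparison:

```python
import hashlib


def shingles(text: str, k: int = 5) -> set[str]:
    """Character k-shingles of the text (k=5 is an illustrative choice)."""
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}


def minhash_signature(text: str, num_perm: int = 64) -> list[int]:
    """MinHash signature: per seeded hash function, the minimum hash value
    over the text's shingles."""
    shingle_set = shingles(text)
    return [
        min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set
        )
        for seed in range(num_perm)
    ]


def est_jaccard(a: list[int], b: list[int]) -> float:
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / len(a)


def dedup(targets: list[str], threshold: float = 0.8) -> list[str]:
    """Greedily keep a text only if it is not a near-duplicate of an
    earlier kept one (quadratic; fine for a sketch, not for 80k rows)."""
    kept, sigs = [], []
    for t in targets:
        sig = minhash_signature(t)
        if all(est_jaccard(sig, s) < threshold for s in sigs):
            kept.append(t)
            sigs.append(sig)
    return kept
```

At scale one would bucket signatures with locality-sensitive hashing so only candidate pairs are compared; the estimator above is the same either way.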
## load
```python
from datasets import load_dataset
dataset = load_dataset("BEE-spoke-data/coedit-reworded-deduped", revision="refs/convert/parquet")
dataset
```
output:
```python
DatasetDict({
train: Dataset({
features: ['task', 'id', 'original_instruction', 'instruction', 'input', 'output'],
num_rows: 79943
})
})
```
## Citation
Original dataset courtesy of Grammarly:
```
@article{raheja2023coedit,
title={CoEdIT: Text Editing by Task-Specific Instruction Tuning},
author={Vipul Raheja and Dhruv Kumar and Ryan Koo and Dongyeop Kang},
year={2023},
eprint={2305.09857},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
RyokoAI/BLiterature-260M | ---
license: apache-2.0
language:
- ja
tags:
- blogs
- training
- text
- not-for-all-audiences
task_categories:
- text-classification
- text-generation
pretty_name: BLiterature
size_categories:
- 100M<n<1B
---
# Dataset Card for BLiterature
*BLiterature is part of a bigger project that is not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset Summary
BLiterature is a raw dataset dump consisting of text from at most 260,261,224 blog posts (excluding categories and date-grouped posts) from blog.fc2.com.
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* Japanese
## Dataset Structure
All the data is stored in JSONL files that have been compressed into 7z archives.
### Data Instances
```json
["http://1kimono.blog49.fc2.com/blog-entry-50.html",
"<!DOCTYPE HTML\n\tPUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\"\n\t\t\"http://www.w3.org/TR/html4/loose.dtd\">\n<!--\n<!DOCTYPE HTML\n\tPUBLIC \"-//W3C//DTD HTML 4.01//EN\"\n\t\t\"http://www.w3.org/T... (TRUNCATED)"]
```
### Data Fields
There are only two fields in each list: the URL and the retrieved content. The retrieved content may contain values indicating that the scraper ran into issues; such errors are marked in XML, as follows:
```<?xml version="1.0" encoding="utf-8"?><error>Specific Error</error>```
URLs may not match the final URL from which the page was retrieved, as redirects may have been present while scraping.
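Given the two-element `[url, content]` records and the XML error marker above, a minimal sketch for skipping failed fetches while iterating over a JSONL file (the filename and record values are illustrative, not real entries):

```python
import json

# Prefix the scraper writes in place of page HTML when a fetch failed.
ERROR_PREFIX = '<?xml version="1.0" encoding="utf-8"?><error>'


def iter_good_records(lines):
    """Yield (url, content) pairs, skipping entries marked as scrape errors."""
    for line in lines:
        url, content = json.loads(line)
        if content.startswith(ERROR_PREFIX):
            continue  # error marker recorded instead of page content
        yield url, content
```

Usage would look like `with open("bliterature_0000.jsonl") as f: ...` over `iter_good_records(f)` (the filename here is hypothetical).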
#### Q-Score Distribution
Not Applicable
### Data Splits
The JSONL files were split roughly every 2,500,000 posts. Allow for a slight deviation of up to 5,000 additional posts due to how the files were saved.
## Dataset Creation
### Curation Rationale
fc2 is a Japanese blog hosting service where anyone can host a blog. As a result, the language used is more informal and relaxed than in more official sources, since anyone can post whatever they personally want.
### Source Data
#### Initial Data Collection and Normalization
None. No normalization is performed as this is a raw dump of the dataset.
#### Who are the source language producers?
The authors of each blog, which may include others to post on their blog domain as well.
### Annotations
#### Annotation process
No annotations are present.
#### Who are the annotators?
No human annotators.
### Personal and Sensitive Information
As this dataset contains information from individuals, there is a higher chance of finding personally identifiable information. However, we believe that the authors have pre-vetted their posts in good faith to avoid such occurrences.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset contains real-life references and revolves around Japanese culture. As such, there will be a bias toward it.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
KaraKaraWitch
### Licensing Information
Apache 2.0 for all parts of which KaraKaraWitch may be considered the author. All other material is distributed under fair-use principles.
Additionally, Ronsor Labs is allowed to relicense the dataset as long as it has gone through processing.
### Citation Information
```
@misc{bliterature,
title = {BLiterature: fc2 blogs for the masses.},
author = {KaraKaraWitch},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/KaraKaraWitch/BLiterature}},
}
```
### Name Etymology
[Literature (リテラチュア) - Reina Ueda (上田麗奈)](https://www.youtube.com/watch?v=Xo1g5HWgaRA)
`Blogs` > `B` + `Literature` > `BLiterature`
### Contributions
- [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset.
- [neggles (Github)](https://github.com/neggles) for providing compute for the gathering of dataset. |
eddie-jin/mini-platypus | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2 | ---
pretty_name: Evaluation run of PulsarAI/SlimOpenOrca-Mistral-7B-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [PulsarAI/SlimOpenOrca-Mistral-7B-v2](https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B-v2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-12T18:15:51.369317](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2_public/blob/main/results_2023-11-12T18-15-51.369317.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6159393027066592,\n\
\ \"acc_stderr\": 0.032593338844127864,\n \"acc_norm\": 0.6242559279403389,\n\
\ \"acc_norm_stderr\": 0.03329458303258477,\n \"mc1\": 0.3929008567931457,\n\
\ \"mc1_stderr\": 0.017097248285233065,\n \"mc2\": 0.5664808334981362,\n\
\ \"mc2_stderr\": 0.015491636686254535,\n \"em\": 0.004718959731543624,\n\
\ \"em_stderr\": 0.0007018360183131115,\n \"f1\": 0.09190750838926176,\n\
\ \"f1_stderr\": 0.0018302287340192876\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5938566552901023,\n \"acc_stderr\": 0.014351656690097858,\n\
\ \"acc_norm\": 0.628839590443686,\n \"acc_norm_stderr\": 0.014117971901142824\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6448914558852819,\n\
\ \"acc_stderr\": 0.004775681871529862,\n \"acc_norm\": 0.8340967934674368,\n\
\ \"acc_norm_stderr\": 0.003712334763856884\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n\
\ \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n\
\ \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.028815615713432108,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.028815615713432108\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5549132947976878,\n\
\ \"acc_stderr\": 0.03789401760283648,\n \"acc_norm\": 0.5549132947976878,\n\
\ \"acc_norm_stderr\": 0.03789401760283648\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467381,\n\
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467381\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n\
\ \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n\
\ \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41534391534391535,\n \"acc_stderr\": 0.02537952491077839,\n \"\
acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.02537952491077839\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7483870967741936,\n\
\ \"acc_stderr\": 0.024685979286239963,\n \"acc_norm\": 0.7483870967741936,\n\
\ \"acc_norm_stderr\": 0.024685979286239963\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4630541871921182,\n \"acc_stderr\": 0.035083705204426656,\n\
\ \"acc_norm\": 0.4630541871921182,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932022,\n \"\
acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932022\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.02541634309630645,\n\
\ \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.02541634309630645\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5974358974358974,\n \"acc_stderr\": 0.02486499515976775,\n \
\ \"acc_norm\": 0.5974358974358974,\n \"acc_norm_stderr\": 0.02486499515976775\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857413,\n \
\ \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857413\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.031204691225150016,\n\
\ \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.031204691225150016\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.271523178807947,\n \"acc_stderr\": 0.03631329803969653,\n \"acc_norm\"\
: 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969653\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8311926605504587,\n\
\ \"acc_stderr\": 0.01606005626853035,\n \"acc_norm\": 0.8311926605504587,\n\
\ \"acc_norm_stderr\": 0.01606005626853035\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n\
\ \"acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8137254901960784,\n \"acc_stderr\": 0.027325470966716312,\n \"\
acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.027325470966716312\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \
\ \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6547085201793722,\n\
\ \"acc_stderr\": 0.03191100192835794,\n \"acc_norm\": 0.6547085201793722,\n\
\ \"acc_norm_stderr\": 0.03191100192835794\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596915,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596915\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.036959801280988226,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.036959801280988226\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n\
\ \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n\
\ \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n\
\ \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.80970625798212,\n\
\ \"acc_stderr\": 0.014036945850381401,\n \"acc_norm\": 0.80970625798212,\n\
\ \"acc_norm_stderr\": 0.014036945850381401\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.024946792225272314,\n\
\ \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.024946792225272314\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3474860335195531,\n\
\ \"acc_stderr\": 0.01592556406020815,\n \"acc_norm\": 0.3474860335195531,\n\
\ \"acc_norm_stderr\": 0.01592556406020815\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.026256053835718964,\n\
\ \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.026256053835718964\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n\
\ \"acc_stderr\": 0.026795422327893937,\n \"acc_norm\": 0.6655948553054662,\n\
\ \"acc_norm_stderr\": 0.026795422327893937\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.025089478523765134,\n\
\ \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.025089478523765134\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.43617021276595747,\n \"acc_stderr\": 0.02958345203628407,\n \
\ \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.02958345203628407\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4530638852672751,\n\
\ \"acc_stderr\": 0.012713845972358978,\n \"acc_norm\": 0.4530638852672751,\n\
\ \"acc_norm_stderr\": 0.012713845972358978\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n\
\ \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6421568627450981,\n \"acc_stderr\": 0.019393058402355442,\n \
\ \"acc_norm\": 0.6421568627450981,\n \"acc_norm_stderr\": 0.019393058402355442\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675606,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675606\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3929008567931457,\n\
\ \"mc1_stderr\": 0.017097248285233065,\n \"mc2\": 0.5664808334981362,\n\
\ \"mc2_stderr\": 0.015491636686254535\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774099\n\
\ },\n \"harness|drop|3\": {\n \"em\": 0.004718959731543624,\n \
\ \"em_stderr\": 0.0007018360183131115,\n \"f1\": 0.09190750838926176,\n\
\ \"f1_stderr\": 0.0018302287340192876\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.18953752843062927,\n \"acc_stderr\": 0.010795837931896387\n\
\ }\n}\n```"
repo_url: https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|arc:challenge|25_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|drop|3_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|gsm8k|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hellaswag|10_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-12T18-15-51.369317.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-12T18-15-51.369317.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- '**/details_harness|winogrande|5_2023-11-12T18-15-51.369317.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-12T18-15-51.369317.parquet'
- config_name: results
data_files:
- split: 2023_11_12T18_15_51.369317
path:
- results_2023-11-12T18-15-51.369317.parquet
- split: latest
path:
- results_2023-11-12T18-15-51.369317.parquet
---
# Dataset Card for Evaluation run of PulsarAI/SlimOpenOrca-Mistral-7B-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [PulsarAI/SlimOpenOrca-Mistral-7B-v2](https://huggingface.co/PulsarAI/SlimOpenOrca-Mistral-7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2_public",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-11-12T18:15:51.369317](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__SlimOpenOrca-Mistral-7B-v2_public/blob/main/results_2023-11-12T18-15-51.369317.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" and "latest" splits for each eval):
```python
{
"all": {
"acc": 0.6159393027066592,
"acc_stderr": 0.032593338844127864,
"acc_norm": 0.6242559279403389,
"acc_norm_stderr": 0.03329458303258477,
"mc1": 0.3929008567931457,
"mc1_stderr": 0.017097248285233065,
"mc2": 0.5664808334981362,
"mc2_stderr": 0.015491636686254535,
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131115,
"f1": 0.09190750838926176,
"f1_stderr": 0.0018302287340192876
},
"harness|arc:challenge|25": {
"acc": 0.5938566552901023,
"acc_stderr": 0.014351656690097858,
"acc_norm": 0.628839590443686,
"acc_norm_stderr": 0.014117971901142824
},
"harness|hellaswag|10": {
"acc": 0.6448914558852819,
"acc_stderr": 0.004775681871529862,
"acc_norm": 0.8340967934674368,
"acc_norm_stderr": 0.003712334763856884
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.028815615713432108,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.028815615713432108
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5549132947976878,
"acc_stderr": 0.03789401760283648,
"acc_norm": 0.5549132947976878,
"acc_norm_stderr": 0.03789401760283648
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467381,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467381
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.02537952491077839,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.02537952491077839
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7483870967741936,
"acc_stderr": 0.024685979286239963,
"acc_norm": 0.7483870967741936,
"acc_norm_stderr": 0.024685979286239963
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4630541871921182,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.4630541871921182,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.030532892233932022,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.030532892233932022
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.02541634309630645,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.02541634309630645
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5974358974358974,
"acc_stderr": 0.02486499515976775,
"acc_norm": 0.5974358974358974,
"acc_norm_stderr": 0.02486499515976775
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.029185714949857413,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.029185714949857413
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.031204691225150016,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.031204691225150016
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.271523178807947,
"acc_stderr": 0.03631329803969653,
"acc_norm": 0.271523178807947,
"acc_norm_stderr": 0.03631329803969653
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8311926605504587,
"acc_stderr": 0.01606005626853035,
"acc_norm": 0.8311926605504587,
"acc_norm_stderr": 0.01606005626853035
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.027325470966716312,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.027325470966716312
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6547085201793722,
"acc_stderr": 0.03191100192835794,
"acc_norm": 0.6547085201793722,
"acc_norm_stderr": 0.03191100192835794
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596915,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596915
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.036959801280988226,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.036959801280988226
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.80970625798212,
"acc_stderr": 0.014036945850381401,
"acc_norm": 0.80970625798212,
"acc_norm_stderr": 0.014036945850381401
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.024946792225272314,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.024946792225272314
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3474860335195531,
"acc_stderr": 0.01592556406020815,
"acc_norm": 0.3474860335195531,
"acc_norm_stderr": 0.01592556406020815
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.026256053835718964,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.026256053835718964
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6655948553054662,
"acc_stderr": 0.026795422327893937,
"acc_norm": 0.6655948553054662,
"acc_norm_stderr": 0.026795422327893937
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7160493827160493,
"acc_stderr": 0.025089478523765134,
"acc_norm": 0.7160493827160493,
"acc_norm_stderr": 0.025089478523765134
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.43617021276595747,
"acc_stderr": 0.02958345203628407,
"acc_norm": 0.43617021276595747,
"acc_norm_stderr": 0.02958345203628407
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4530638852672751,
"acc_stderr": 0.012713845972358978,
"acc_norm": 0.4530638852672751,
"acc_norm_stderr": 0.012713845972358978
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6066176470588235,
"acc_stderr": 0.029674288281311155,
"acc_norm": 0.6066176470588235,
"acc_norm_stderr": 0.029674288281311155
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6421568627450981,
"acc_stderr": 0.019393058402355442,
"acc_norm": 0.6421568627450981,
"acc_norm_stderr": 0.019393058402355442
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675606,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675606
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3929008567931457,
"mc1_stderr": 0.017097248285233065,
"mc2": 0.5664808334981362,
"mc2_stderr": 0.015491636686254535
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774099
},
"harness|drop|3": {
"em": 0.004718959731543624,
"em_stderr": 0.0007018360183131115,
"f1": 0.09190750838926176,
"f1_stderr": 0.0018302287340192876
},
"harness|gsm8k|5": {
"acc": 0.18953752843062927,
"acc_stderr": 0.010795837931896387
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ariji1/acn-finetuning | ---
license: apache-2.0
---
|
liuyanchen1015/MULTI_VALUE_sst2_double_comparative | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 5055
num_examples: 33
- name: test
num_bytes: 7939
num_examples: 53
- name: train
num_bytes: 145931
num_examples: 1282
download_size: 77671
dataset_size: 158925
---
# Dataset Card for "MULTI_VALUE_sst2_double_comparative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manu/theses_fr_2013_2023 | ---
dataset_info:
features:
- name: title_fr
dtype: string
- name: abstract_fr
dtype: string
- name: title_en
dtype: string
- name: abstract_en
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 392127399
num_examples: 97320
download_size: 224948329
dataset_size: 392127399
---
# Dataset Card for "theses_fr_2013_2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jullarson/sdd | ---
license: apache-2.0
---
|
THUDM/LongAlign-10k | ---
task_categories:
- question-answering
language:
- en
- zh
tags:
- Long Context
- sft
size_categories:
- 10K<n<100K
---
# LongAlign-10k
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a>
</p>
**LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction examples of 8k-64k tokens in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, all of which are implemented in our code. For real-world long context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k in length.
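The sorted batching strategy mentioned above can be sketched in a few lines (a toy illustration under our own assumptions — not the implementation from the LongAlign repo): sort examples by token length and cut the sorted order into batches, so each batch holds length-adjacent sequences and padding is minimized.

```python
def sorted_batching(examples, batch_size):
    """Toy sketch of sorted batching: return batches of example indices,
    grouped so that sequences of similar length land in the same batch."""
    order = sorted(range(len(examples)), key=lambda i: len(examples[i]))
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

# Lengths stand in for token counts; each batch holds length-adjacent examples.
batches = sorted_batching(["a" * 60000, "a" * 8000, "a" * 9000, "a" * 55000], 2)
```

Note that sorted batching biases each batch toward one length regime, which is why the paper pairs it with loss-weighting considerations.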
## All Models
We have open-sourced the following models:
|Model|Huggingface Repo|Description|
|---|---|---|
|**LongAlign-6B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k-base) | **ChatGLM3-6B** with an extended 64k context window |
|**LongAlign-6B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k) | Chat model by LongAlign training on LongAlign-6B-64k-base|
|**LongAlign-7B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k-base) | **Llama-2-7B** with an extended 64k context window |
|**LongAlign-7B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k) | Chat model by LongAlign training on LongAlign-7B-64k-base|
|**LongAlign-13B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k-base) | **Llama-2-13B** with an extended 64k context window |
|**LongAlign-13B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k) | Chat model by LongAlign training on LongAlign-13B-64k-base|
|**ChatGLM3-6B-128k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/chatglm3-6b-128k) | **ChatGLM3-6B** with a 128k context window| |
autoevaluate/autoeval-staging-eval-project-a3656eb0-b7ed-410f-ab65-0222b8e06770-4139 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
joseluhf11/oct-object-detection-v4-merge | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: objects
struct:
- name: bbox
sequence:
sequence: int64
- name: categories
sequence: string
splits:
- name: train
num_bytes: 70990022.0
num_examples: 566
download_size: 70811624
dataset_size: 70990022.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "oct-object-detection-v4-merge"
The dataset is composed of images with multiple object-detection boxes in COCO format (xmin, ymin, xmax, ymax). Images are OCTs (a type of eye scan) with boxes indicating features associated with AMD disease.
The difference from v3 is that images are grouped (no image is duplicated across multiple rows) and each image can have multiple label-box pairs in the objects field. Since there are 566 unique images, there are 566 rows, one per image.
Overlapping boxes are also joined by a merge function.
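As an illustration of what joining overlapped boxes means here, a minimal sketch (our own toy merge under the (xmin, ymin, xmax, ymax) convention — not the exact function used to build this dataset):

```python
def boxes_overlap(a, b):
    """True if two (xmin, ymin, xmax, ymax) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_overlapping_boxes(boxes):
    """Repeatedly union overlapping boxes until no pair intersects."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        out = []
        while boxes:
            cur = boxes.pop()
            for other in list(boxes):
                if boxes_overlap(cur, other):
                    boxes.remove(other)
                    # Replace the pair with their bounding union.
                    cur = [min(cur[0], other[0]), min(cur[1], other[1]),
                           max(cur[2], other[2]), max(cur[3], other[3])]
                    merged = True
            out.append(cur)
        boxes = out
    return boxes

# The first two boxes intersect and are unioned into one.
merged = merge_overlapping_boxes([(0, 0, 10, 10), (5, 5, 20, 20), (30, 30, 40, 40)])
```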
[Source dataset](https://doi.org/10.1101/2023.03.29.534704) |
paullatham1/reddit-val-balanced | ---
dataset_info:
features:
- name: 'Unnamed: 0.1'
dtype: int64
- name: 'Unnamed: 0'
dtype: int64
- name: is_sarcastic
dtype: int64
- name: data
dtype: string
- name: is_sarcastic.1
dtype: int64
splits:
- name: train
num_bytes: 288034
num_examples: 3966
download_size: 181270
dataset_size: 288034
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/sagiri_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of sagiri (Kantai Collection)
This is the dataset of sagiri (Kantai Collection), containing 388 images and their tags.
The core tags of this character are `grey_hair, long_hair, bangs, purple_eyes, hairband, swept_bangs, asymmetrical_bangs, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 388 | 361.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagiri_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 388 | 234.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagiri_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 864 | 480.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagiri_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 388 | 332.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagiri_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 864 | 631.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sagiri_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sagiri_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, casual_one-piece_swimsuit, hair_flower, looking_at_viewer, official_alternate_costume, solo, white_one-piece_swimsuit, earrings, frilled_swimsuit, cowboy_shot, highleg_swimsuit, shawl, covered_navel, small_breasts, white_choker |
| 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, long_sleeves, simple_background, smile, open_mouth, white_background, blush, looking_at_viewer, official_alternate_costume, holding, twitter_username, white_dress |
| 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, white_shirt, looking_at_viewer, simple_background, black_choker, black_hairband, suspender_skirt, plaid_skirt, white_background, bag, blush, boots, official_alternate_costume, open_mouth, smile, twitter_username, umbrella, upper_body |
| 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, alternate_costume, solo, looking_at_viewer, purple_shirt, white_skirt, blouse, smile, long_sleeves, simple_background, bag, long_skirt, white_background |
| 4 | 12 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, grey_sailor_collar, grey_skirt, pleated_skirt, serafuku, short_sleeves, solo, looking_at_viewer, simple_background, smile, bow, grey_ribbon, blue_hairband, purple_hairband, white_background, cowboy_shot, twitter_username |
| 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | yukata, 1girl, obi, solo, open_mouth, smile, looking_at_viewer, simple_background, alternate_costume, alternate_hairstyle, white_background, floral_print, hair_ornament, wide_sleeves |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, detached_collar, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, simple_background, solo, strapless_leotard, white_background, wrist_cuffs, alternate_costume, blush, open_mouth, small_breasts, bowtie, cowboy_shot, white_leotard, ass_visible_through_thighs, bare_shoulders, blue_bow, covered_navel, pantyhose, rabbit_tail |
| 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, bar_censor, blush, hetero, penis, 1boy, solo_focus, open_mouth, sex, vaginal, blue_hairband, bra, cum, nipples, nude, small_breasts, tears |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | casual_one-piece_swimsuit | hair_flower | looking_at_viewer | official_alternate_costume | solo | white_one-piece_swimsuit | earrings | frilled_swimsuit | cowboy_shot | highleg_swimsuit | shawl | covered_navel | small_breasts | white_choker | long_sleeves | simple_background | smile | open_mouth | white_background | blush | holding | twitter_username | white_dress | white_shirt | black_choker | black_hairband | suspender_skirt | plaid_skirt | bag | boots | umbrella | upper_body | alternate_costume | purple_shirt | white_skirt | blouse | long_skirt | grey_sailor_collar | grey_skirt | pleated_skirt | serafuku | short_sleeves | bow | grey_ribbon | blue_hairband | purple_hairband | yukata | obi | alternate_hairstyle | floral_print | hair_ornament | wide_sleeves | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | strapless_leotard | wrist_cuffs | bowtie | white_leotard | ass_visible_through_thighs | bare_shoulders | blue_bow | pantyhose | rabbit_tail | bar_censor | hetero | penis | 1boy | solo_focus | sex | vaginal | bra | cum | nipples | nude | tears |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------------|:--------------|:--------------------|:-----------------------------|:-------|:---------------------------|:-----------|:-------------------|:--------------|:-------------------|:--------|:----------------|:----------------|:---------------|:---------------|:--------------------|:--------|:-------------|:-------------------|:--------|:----------|:-------------------|:--------------|:--------------|:---------------|:-----------------|:------------------|:--------------|:------|:--------|:-----------|:-------------|:--------------------|:---------------|:--------------|:---------|:-------------|:---------------------|:-------------|:----------------|:-----------|:----------------|:------|:--------------|:----------------|:------------------|:---------|:------|:----------------------|:---------------|:----------------|:---------------|:------------------|:-------------------|:----------------|:--------------|:--------------------|:--------------|:---------|:----------------|:-----------------------------|:-----------------|:-----------|:------------|:--------------|:-------------|:---------|:--------|:-------|:-------------|:------|:----------|:------|:------|:----------|:-------|:--------|
| 0 | 21 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | X | X | | | | | | | | | | | X | X | X | X | X | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | | X | | | | | | | | | | X | X | X | | X | | | | | | | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 12 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | | X | | | | X | | | | | | | X | X | | X | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | | X | | X | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | X | | X | | | | X | | | X | X | | | X | | X | X | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | | | | | | | | | | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_Zangs3011__mistral_7b_2EPOCH_DolphinCoder | ---
pretty_name: Evaluation run of Zangs3011/mistral_7b_2EPOCH_DolphinCoder
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Zangs3011/mistral_7b_2EPOCH_DolphinCoder](https://huggingface.co/Zangs3011/mistral_7b_2EPOCH_DolphinCoder)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Zangs3011__mistral_7b_2EPOCH_DolphinCoder\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-19T04:55:31.577709](https://huggingface.co/datasets/open-llm-leaderboard/details_Zangs3011__mistral_7b_2EPOCH_DolphinCoder/blob/main/results_2024-01-19T04-55-31.577709.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.590189563445543,\n\
\ \"acc_stderr\": 0.033213747146494416,\n \"acc_norm\": 0.5975943163476723,\n\
\ \"acc_norm_stderr\": 0.03391041523451993,\n \"mc1\": 0.2974296205630355,\n\
\ \"mc1_stderr\": 0.016002651487361005,\n \"mc2\": 0.44646084605621383,\n\
\ \"mc2_stderr\": 0.014640949505732814\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.568259385665529,\n \"acc_stderr\": 0.014474591427196202,\n\
\ \"acc_norm\": 0.6075085324232082,\n \"acc_norm_stderr\": 0.014269634635670722\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6229834694284008,\n\
\ \"acc_stderr\": 0.004836486437527263,\n \"acc_norm\": 0.8114917347142003,\n\
\ \"acc_norm_stderr\": 0.003903181667466359\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.04605661864718381,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.04605661864718381\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5986842105263158,\n \"acc_stderr\": 0.039889037033362836,\n\
\ \"acc_norm\": 0.5986842105263158,\n \"acc_norm_stderr\": 0.039889037033362836\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.630188679245283,\n \"acc_stderr\": 0.029711421880107936,\n\
\ \"acc_norm\": 0.630188679245283,\n \"acc_norm_stderr\": 0.029711421880107936\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n\
\ \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n\
\ \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\"\
: 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5664739884393064,\n\
\ \"acc_stderr\": 0.03778621079092056,\n \"acc_norm\": 0.5664739884393064,\n\
\ \"acc_norm_stderr\": 0.03778621079092056\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929777,\n\
\ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929777\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224469,\n\
\ \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224469\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3968253968253968,\n \"acc_stderr\": 0.02519710107424649,\n \"\
acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.02519710107424649\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n\
\ \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n\
\ \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6806451612903226,\n\
\ \"acc_stderr\": 0.026522709674667765,\n \"acc_norm\": 0.6806451612903226,\n\
\ \"acc_norm_stderr\": 0.026522709674667765\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4187192118226601,\n \"acc_stderr\": 0.03471192860518468,\n\
\ \"acc_norm\": 0.4187192118226601,\n \"acc_norm_stderr\": 0.03471192860518468\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.696969696969697,\n \"acc_stderr\": 0.03588624800091706,\n\
\ \"acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.03588624800091706\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533086,\n \"\
acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533086\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.02649905770139746,\n\
\ \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.02649905770139746\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5512820512820513,\n \"acc_stderr\": 0.025217315184846482,\n\
\ \"acc_norm\": 0.5512820512820513,\n \"acc_norm_stderr\": 0.025217315184846482\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3074074074074074,\n \"acc_stderr\": 0.02813325257881564,\n \
\ \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.02813325257881564\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6302521008403361,\n \"acc_stderr\": 0.03135709599613591,\n \
\ \"acc_norm\": 0.6302521008403361,\n \"acc_norm_stderr\": 0.03135709599613591\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7853211009174312,\n \"acc_stderr\": 0.01760430414925648,\n \"\
acc_norm\": 0.7853211009174312,\n \"acc_norm_stderr\": 0.01760430414925648\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7598039215686274,\n\
\ \"acc_stderr\": 0.02998373305591362,\n \"acc_norm\": 0.7598039215686274,\n\
\ \"acc_norm_stderr\": 0.02998373305591362\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7172995780590717,\n \"acc_stderr\": 0.029312814153955934,\n\
\ \"acc_norm\": 0.7172995780590717,\n \"acc_norm_stderr\": 0.029312814153955934\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n\
\ \"acc_stderr\": 0.032100621541349864,\n \"acc_norm\": 0.6457399103139013,\n\
\ \"acc_norm_stderr\": 0.032100621541349864\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596914,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596914\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7355371900826446,\n \"acc_stderr\": 0.04026187527591205,\n \"\
acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591205\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7037037037037037,\n\
\ \"acc_stderr\": 0.044143436668549335,\n \"acc_norm\": 0.7037037037037037,\n\
\ \"acc_norm_stderr\": 0.044143436668549335\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6871165644171779,\n \"acc_stderr\": 0.036429145782924055,\n\
\ \"acc_norm\": 0.6871165644171779,\n \"acc_norm_stderr\": 0.036429145782924055\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n\
\ \"acc_stderr\": 0.023636873317489288,\n \"acc_norm\": 0.8461538461538461,\n\
\ \"acc_norm_stderr\": 0.023636873317489288\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7701149425287356,\n\
\ \"acc_stderr\": 0.01504630184669182,\n \"acc_norm\": 0.7701149425287356,\n\
\ \"acc_norm_stderr\": 0.01504630184669182\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.025070713719153183,\n\
\ \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.025070713719153183\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2927374301675978,\n\
\ \"acc_stderr\": 0.015218109544410179,\n \"acc_norm\": 0.2927374301675978,\n\
\ \"acc_norm_stderr\": 0.015218109544410179\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.0267874531119065,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.0267874531119065\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n\
\ \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n\
\ \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.026406145973625686,\n\
\ \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.026406145973625686\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4219858156028369,\n \"acc_stderr\": 0.029462189233370593,\n \
\ \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.029462189233370593\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4322033898305085,\n\
\ \"acc_stderr\": 0.012652297777114968,\n \"acc_norm\": 0.4322033898305085,\n\
\ \"acc_norm_stderr\": 0.012652297777114968\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406752,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406752\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6225490196078431,\n \"acc_stderr\": 0.01961085147488029,\n \
\ \"acc_norm\": 0.6225490196078431,\n \"acc_norm_stderr\": 0.01961085147488029\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.04582004841505417,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.04582004841505417\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6571428571428571,\n \"acc_stderr\": 0.030387262919547728,\n\
\ \"acc_norm\": 0.6571428571428571,\n \"acc_norm_stderr\": 0.030387262919547728\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n\
\ \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n\
\ \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.031885780176863984,\n\
\ \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.031885780176863984\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2974296205630355,\n\
\ \"mc1_stderr\": 0.016002651487361005,\n \"mc2\": 0.44646084605621383,\n\
\ \"mc2_stderr\": 0.014640949505732814\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7324388318863457,\n \"acc_stderr\": 0.01244171845689301\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.23881728582259287,\n \
\ \"acc_stderr\": 0.011744097081003805\n }\n}\n```"
repo_url: https://huggingface.co/Zangs3011/mistral_7b_2EPOCH_DolphinCoder
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|arc:challenge|25_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|gsm8k|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hellaswag|10_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T04-55-31.577709.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T04-55-31.577709.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- '**/details_harness|winogrande|5_2024-01-19T04-55-31.577709.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-19T04-55-31.577709.parquet'
- config_name: results
data_files:
- split: 2024_01_19T04_55_31.577709
path:
- results_2024-01-19T04-55-31.577709.parquet
- split: latest
path:
- results_2024-01-19T04-55-31.577709.parquet
---
# Dataset Card for Evaluation run of Zangs3011/mistral_7b_2EPOCH_DolphinCoder
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Zangs3011/mistral_7b_2EPOCH_DolphinCoder](https://huggingface.co/Zangs3011/mistral_7b_2EPOCH_DolphinCoder) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Zangs3011__mistral_7b_2EPOCH_DolphinCoder",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-19T04:55:31.577709](https://huggingface.co/datasets/open-llm-leaderboard/details_Zangs3011__mistral_7b_2EPOCH_DolphinCoder/blob/main/results_2024-01-19T04-55-31.577709.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.590189563445543,
"acc_stderr": 0.033213747146494416,
"acc_norm": 0.5975943163476723,
"acc_norm_stderr": 0.03391041523451993,
"mc1": 0.2974296205630355,
"mc1_stderr": 0.016002651487361005,
"mc2": 0.44646084605621383,
"mc2_stderr": 0.014640949505732814
},
"harness|arc:challenge|25": {
"acc": 0.568259385665529,
"acc_stderr": 0.014474591427196202,
"acc_norm": 0.6075085324232082,
"acc_norm_stderr": 0.014269634635670722
},
"harness|hellaswag|10": {
"acc": 0.6229834694284008,
"acc_stderr": 0.004836486437527263,
"acc_norm": 0.8114917347142003,
"acc_norm_stderr": 0.003903181667466359
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.04605661864718381,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04605661864718381
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5986842105263158,
"acc_stderr": 0.039889037033362836,
"acc_norm": 0.5986842105263158,
"acc_norm_stderr": 0.039889037033362836
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.630188679245283,
"acc_stderr": 0.029711421880107936,
"acc_norm": 0.630188679245283,
"acc_norm_stderr": 0.029711421880107936
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.038990736873573344,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.038990736873573344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5664739884393064,
"acc_stderr": 0.03778621079092056,
"acc_norm": 0.5664739884393064,
"acc_norm_stderr": 0.03778621079092056
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929777,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929777
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224469,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224469
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.02519710107424649,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.02519710107424649
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6806451612903226,
"acc_stderr": 0.026522709674667765,
"acc_norm": 0.6806451612903226,
"acc_norm_stderr": 0.026522709674667765
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4187192118226601,
"acc_stderr": 0.03471192860518468,
"acc_norm": 0.4187192118226601,
"acc_norm_stderr": 0.03471192860518468
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.03588624800091706,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.03588624800091706
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533086,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533086
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.02649905770139746,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.02649905770139746
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5512820512820513,
"acc_stderr": 0.025217315184846482,
"acc_norm": 0.5512820512820513,
"acc_norm_stderr": 0.025217315184846482
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.02813325257881564,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.02813325257881564
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6302521008403361,
"acc_stderr": 0.03135709599613591,
"acc_norm": 0.6302521008403361,
"acc_norm_stderr": 0.03135709599613591
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7853211009174312,
"acc_stderr": 0.01760430414925648,
"acc_norm": 0.7853211009174312,
"acc_norm_stderr": 0.01760430414925648
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7598039215686274,
"acc_stderr": 0.02998373305591362,
"acc_norm": 0.7598039215686274,
"acc_norm_stderr": 0.02998373305591362
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7172995780590717,
"acc_stderr": 0.029312814153955934,
"acc_norm": 0.7172995780590717,
"acc_norm_stderr": 0.029312814153955934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6457399103139013,
"acc_stderr": 0.032100621541349864,
"acc_norm": 0.6457399103139013,
"acc_norm_stderr": 0.032100621541349864
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596914,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596914
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7355371900826446,
"acc_stderr": 0.04026187527591205,
"acc_norm": 0.7355371900826446,
"acc_norm_stderr": 0.04026187527591205
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.044143436668549335,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.044143436668549335
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6871165644171779,
"acc_stderr": 0.036429145782924055,
"acc_norm": 0.6871165644171779,
"acc_norm_stderr": 0.036429145782924055
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.023636873317489288,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.023636873317489288
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7701149425287356,
"acc_stderr": 0.01504630184669182,
"acc_norm": 0.7701149425287356,
"acc_norm_stderr": 0.01504630184669182
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.025070713719153183,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.025070713719153183
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2927374301675978,
"acc_stderr": 0.015218109544410179,
"acc_norm": 0.2927374301675978,
"acc_norm_stderr": 0.015218109544410179
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.0267874531119065,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.0267874531119065
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.026406145973625686,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.026406145973625686
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.029462189233370593,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.029462189233370593
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4322033898305085,
"acc_stderr": 0.012652297777114968,
"acc_norm": 0.4322033898305085,
"acc_norm_stderr": 0.012652297777114968
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.028418208619406752,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.028418208619406752
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6225490196078431,
"acc_stderr": 0.01961085147488029,
"acc_norm": 0.6225490196078431,
"acc_norm_stderr": 0.01961085147488029
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.04582004841505417,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.04582004841505417
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6571428571428571,
"acc_stderr": 0.030387262919547728,
"acc_norm": 0.6571428571428571,
"acc_norm_stderr": 0.030387262919547728
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7810945273631841,
"acc_stderr": 0.029239174636647,
"acc_norm": 0.7810945273631841,
"acc_norm_stderr": 0.029239174636647
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.031885780176863984,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.031885780176863984
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2974296205630355,
"mc1_stderr": 0.016002651487361005,
"mc2": 0.44646084605621383,
"mc2_stderr": 0.014640949505732814
},
"harness|winogrande|5": {
"acc": 0.7324388318863457,
"acc_stderr": 0.01244171845689301
},
"harness|gsm8k|5": {
"acc": 0.23881728582259287,
"acc_stderr": 0.011744097081003805
}
}
```
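The per-task scores in the JSON above can be aggregated directly once the results file is in hand. A minimal sketch, using an abridged dictionary copied from the results above (the helper name is illustrative, not part of the leaderboard tooling):

```python
def mmlu_average(results: dict, metric: str = "acc_norm") -> float:
    """Average one metric over all hendrycksTest (MMLU) subtasks in a results dict."""
    scores = [
        task[metric]
        for name, task in results.items()
        if name.startswith("harness|hendrycksTest-") and metric in task
    ]
    return sum(scores) / len(scores)

# Abridged from the JSON above; the full dict holds 57 hendrycksTest subtasks.
sample = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.3, "acc_norm": 0.3},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.562962962962963,
                                        "acc_norm": 0.562962962962963},
    "harness|winogrande|5": {"acc": 0.7324388318863457},  # skipped: not an MMLU task
}
print(round(mmlu_average(sample), 4))  # average of the two MMLU entries above
```

The same pattern works on the full results file linked above after parsing it with `json.load`.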
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Brecon/Claim_Validation | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 167303
num_examples: 153
download_size: 88825
dataset_size: 167303
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Claim_Validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nateraw/fuego-20230203-121124-88b549 | ---
tags:
- fuego
fuego:
id: 20230203-121124-88b549
status: running
script: main.py
requirements_file: requirements.txt
space_id: nateraw/fuego-20230203-121124-88b549
space_hardware: cpu-basic
github_repo_id: pytorch/examples
github_repo_branch: main
github_repo_sha: d8456a36d1bbb22f72b003f59406a19a0a0547c3
---
|
calm-and-collected/wish-you-were-here | ---
license: cc-by-4.0
language:
- en
tags:
- photography
- art
pretty_name: Wish You were Here
size_categories:
- n<1K
---
# Wish You Were Here - Dataset
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6537927953b7eb25ce03c962/QzRgHMnueca5SAzqUG8hD.png)
A dataset consisting of postcards from 1900-1960, annotated with a combination of CLIP and manual annotation.
## Data structure
The dataset is structured as follows:
- Images of postcards.
- A text file describing each image.
- Images are linked to their text files via the file name.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6537927953b7eb25ce03c962/2TupHi3B_UP614McMHFpN.png)
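Given that pairing convention, the image/caption pairs can be collected with nothing but the standard library. A minimal sketch, assuming images and same-named `.txt` files sit side by side in one directory (the function name, extensions and file names are illustrative assumptions):

```python
import tempfile
from pathlib import Path

def pair_captions(dataset_dir, image_exts=(".png", ".jpg", ".jpeg")):
    """Map each postcard image file name to the caption in its same-named .txt file."""
    root = Path(dataset_dir)
    pairs = {}
    for image in sorted(root.iterdir()):
        if image.suffix.lower() not in image_exts:
            continue
        caption_file = image.with_suffix(".txt")
        if caption_file.exists():  # skip images that lack a description file
            pairs[image.name] = caption_file.read_text(encoding="utf-8").strip()
    return pairs

# Tiny demo against a throwaway directory:
demo = Path(tempfile.mkdtemp())
(demo / "card_001.png").write_bytes(b"")  # stand-in for a postcard scan
(demo / "card_001.txt").write_text("photograph, horizontal, white border",
                                   encoding="utf-8")
print(pair_captions(demo))  # {'card_001.png': 'photograph, horizontal, white border'}
```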
## Metadata
| Number of images | Storage size | Repeating images | Source annotation |
|---|---|---|---|
| 646 | 1.6 Gb | Yes | No |
### Collection method:
Manual search of Wikimedia pages and selection of images whose attributes allow usage without permission from, or attribution of, the creator of the media. Licenses include:
- CC-O
- No license
- Public domain
### Annotation method:
The data was annotated using Kohya_SS in 2 phases:
1. Automated annotation using Clip.
2. Manual annotation.
During manual annotation the following features were consistently annotated:
- Type of Postcard (drawing, photograph, colored in photograph)
- Aspect ratio (horizontal, vertical or square)
- Border color (if there is a border)
- Damage of the postcard (ranging from no annotation, through slight damage and damage, to significant damage)
- Stamps
- Folding damage
- Lineart
- Monochrome (color images are not specified)
### Image dataset composition:
The dataset comprises postcards originating from Germany, Poland, Russia and the United States of America. No additional annotation is provided to identify where the postcards are from.
Most of the postcards skew towards nature scenes, e.g. snowy mountain valleys at sunset. Training a model on this dataset could create a bias towards such images.
## license
This dataset is licensed under CC BY 4.0 Deed. This gives you the rights to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
- Attribution - You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions - You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
The license only applies to the descriptions of the images, not to the images themselves (see Collection method for more details). |
atmallen/popqa-parents-lying-non-err | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: true_label
dtype: int64
splits:
- name: train
num_bytes: 2417517.0
num_examples: 23952
- name: validation
num_bytes: 521514.0
num_examples: 5136
- name: test
num_bytes: 525331.5
num_examples: 5160
download_size: 544025
dataset_size: 3464362.5
---
# Dataset Card for "popqa-parents-lying-non-err"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sam-mosaic/iv4-msg | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2497963572.0
num_examples: 433525
- name: test
num_bytes: 345259991.0
num_examples: 53935
download_size: 1399738698
dataset_size: 2843223563.0
---
# Dataset Card for "iv4-msg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yunij/tokenized_datasets | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: source
dtype: string
- name: label
dtype: int64
- name: perplexity
dtype: float64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 777879009
num_examples: 330345
- name: test
num_bytes: 40979430
num_examples: 17387
download_size: 432466136
dataset_size: 818858439
---
# Dataset Card for "tokenized_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/nia_granbluefantasy | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nia/니아 (Granblue Fantasy)
This is the dataset of nia/니아 (Granblue Fantasy), containing 333 images and their tags.
The core tags of this character are `long_hair, animal_ears, black_hair, red_eyes, bangs, breasts, hair_between_eyes, earrings, ear_piercing`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 333 | 503.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nia_granbluefantasy/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 333 | 282.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nia_granbluefantasy/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 817 | 607.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nia_granbluefantasy/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 333 | 446.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nia_granbluefantasy/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 817 | 878.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nia_granbluefantasy/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nia_granbluefantasy',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, erune, looking_at_viewer, piercing, solo, jewelry, long_sleeves, black_skirt, simple_background, white_background, bags_under_eyes, parted_lips |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bags_under_eyes, erune, jewelry, solo, upper_body, looking_at_viewer, simple_background, white_background, piercing |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, bare_shoulders, braid, erune, large_breasts, looking_at_viewer, solo, black_one-piece_swimsuit, blush, cleavage, official_alternate_costume, collarbone, covered_navel, closed_mouth, simple_background, sitting, thighs |
| 3 | 14 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, ass, bare_shoulders, blush, erune, looking_at_viewer, official_alternate_costume, solo, looking_back, butt_crack, medium_breasts, sideboob, water, bikini, from_behind, twin_braids, thighs, smile, white_background, black_one-piece_swimsuit, simple_background, wet |
| 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, erune, looking_at_viewer, solo, black_gloves, black_dress, hair_flower, blue_rose, long_sleeves, petals, puffy_sleeves, simple_background, smile, white_background |
| 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, bare_shoulders, blue_dress, blue_flower, detached_sleeves, erune, hair_flower, solo, blush, looking_at_viewer, sleeveless_dress, very_long_hair, belt, collarbone, medium_breasts, crying_with_eyes_open, puffy_short_sleeves, white_background, bridal_gauntlets, choker, closed_mouth, hand_up, heart, smile, upper_body |
| 6 | 11 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blush, erune, large_breasts, jewelry, nipples, 1boy, hetero, solo_focus, pussy, smile, bar_censor, looking_at_viewer, open_mouth, penis, breasts_out, sweat, completely_nude, female_pubic_hair, on_back |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | erune | looking_at_viewer | piercing | solo | jewelry | long_sleeves | black_skirt | simple_background | white_background | bags_under_eyes | parted_lips | upper_body | bare_shoulders | braid | large_breasts | black_one-piece_swimsuit | blush | cleavage | official_alternate_costume | collarbone | covered_navel | closed_mouth | sitting | thighs | ass | looking_back | butt_crack | medium_breasts | sideboob | water | bikini | from_behind | twin_braids | smile | wet | black_gloves | black_dress | hair_flower | blue_rose | petals | puffy_sleeves | blue_dress | blue_flower | detached_sleeves | sleeveless_dress | very_long_hair | belt | crying_with_eyes_open | puffy_short_sleeves | bridal_gauntlets | choker | hand_up | heart | nipples | 1boy | hetero | solo_focus | pussy | bar_censor | open_mouth | penis | breasts_out | sweat | completely_nude | female_pubic_hair | on_back |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:-----------|:-------|:----------|:---------------|:--------------|:--------------------|:-------------------|:------------------|:--------------|:-------------|:-----------------|:--------|:----------------|:---------------------------|:--------|:-----------|:-----------------------------|:-------------|:----------------|:---------------|:----------|:---------|:------|:---------------|:-------------|:-----------------|:-----------|:--------|:---------|:--------------|:--------------|:--------|:------|:---------------|:--------------|:--------------|:------------|:---------|:----------------|:-------------|:--------------|:-------------------|:-------------------|:-----------------|:-------|:------------------------|:----------------------|:-------------------|:---------|:----------|:--------|:----------|:-------|:---------|:-------------|:--------|:-------------|:-------------|:--------|:--------------|:--------|:------------------|:--------------------|:----------|
| 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 14 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | | X | | | | X | X | | | | X | | | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 8 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | | X | | | | | X | | | X | X | | | | X | | | X | | X | | | | | | X | | | | | | X | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 6 | 11 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | | | X | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
edarchimbaud/perimeter-sp500 | ---
language:
- en
license: mit
task_categories:
- tabular-classification
dataset_info:
features:
- name: symbol
dtype: string
- name: security
dtype: string
- name: gics_sector
dtype: string
- name: gics_sub_industry
dtype: string
splits:
- name: train
num_bytes: 35469
num_examples: 503
download_size: 0
dataset_size: 35469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "index-constituents-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The index-constituents-sp500 dataset provides information about the constituents of the S&P 500 index. It contains several features that describe each constituent company.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- security (string): A string specifying the name or title of the security.
- gics_sector (string): A string indicating the Global Industry Classification Standard (GICS) sector to which the company belongs. GICS is a widely used classification system for categorizing companies based on their primary business activities.
- gics_sub_industry (string): A string specifying the GICS sub-industry of the company, which provides further granularity within the sector classification.
- headquarters_location (string): A string representing the location of the company's headquarters.
- date_added (string): A string indicating the date when the company was added to the S&P 500 index.
- cik (string): A string representing the Central Index Key (CIK) assigned to the company by the United States Securities and Exchange Commission (SEC). The CIK is a unique identifier used for regulatory filings.
- founded (string): A string indicating the year or date of the company's founding.
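As an illustration of how these fields might be used, the sketch below groups rows by `gics_sector`. The rows shown are hypothetical stand-ins for the example; real rows come from the dataset itself:

```python
from collections import Counter

# Hypothetical rows in the shape described above; real rows come from
# load_dataset("edarchimbaud/perimeter-sp500").
rows = [
    {"symbol": "AAPL", "security": "Apple Inc.", "gics_sector": "Information Technology",
     "gics_sub_industry": "Technology Hardware, Storage & Peripherals"},
    {"symbol": "MSFT", "security": "Microsoft", "gics_sector": "Information Technology",
     "gics_sub_industry": "Systems Software"},
    {"symbol": "XOM", "security": "Exxon Mobil", "gics_sector": "Energy",
     "gics_sub_industry": "Integrated Oil & Gas"},
]

# Count constituents per GICS sector.
sector_counts = Counter(row["gics_sector"] for row in rows)
print(sector_counts["Information Technology"])
```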
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The index-constituents-sp500 dataset was created to support the development of low-frequency trading algorithms.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from the web and aggregated.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The index-constituents-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The index-constituents-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, index-constituents-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
jan-hq/ultrafeedback_quality_binarized | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen-rating
dtype: float64
- name: chosen-model
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected-rating
dtype: float64
- name: rejected-model
dtype: string
splits:
- name: train
num_bytes: 654240429.3032981
num_examples: 139196
- name: test
num_bytes: 72697036.69670185
num_examples: 15467
download_size: 396128426
dataset_size: 726937466.0
---
# Dataset Card for "ultrafeedback_quality_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yp-edu/stockfish-debug | ---
license: mit
source_datasets:
- BlueSunflower/chess_games_base
configs:
- config_name: default
data_files:
- split: train
path: "train.jsonl"
- split: test
path: "test.jsonl"
dataset_info:
features:
- name: fen
dtype: string
- name: move
dtype: string
- name: result
dtype: string
---
# Dataset Card for stockfish-debug
See my [blog post](https://yp-edu.github.io/projects/training-gpt2-on-stockfish-games) for additional details.
## Columns
The dataset contains the following columns:
- **fen:** The FEN string of the board.
- **move:** The move that was played.
- **result:** The result of the game (with `"-"` for unfinished games).
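For example, one might map `result` to a numeric training target. The mapping below is an assumption for illustration (scores are from White's perspective), not something the dataset prescribes:

```python
# Map the PGN result string to White's score; "-" marks unfinished games.
# This mapping is illustrative, not part of the dataset.
RESULT_TO_SCORE = {"1-0": 1.0, "0-1": -1.0, "1/2-1/2": 0.0}

def result_score(result: str):
    """Return White's score for a finished game, or None if unfinished."""
    return RESULT_TO_SCORE.get(result)

print(result_score("1-0"))  # 1.0
print(result_score("-"))    # None
```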
## Data details
Pre-processing of the Stockfish games provided by [BlueSunflower/chess_games_base](https://huggingface.co/datasets/BlueSunflower/chess_games_base).
Code used:
```python
import jsonlines
import chess
import tqdm
def preprocess_games(in_path, out_path):
    """Expand each game into one (fen, move, result) record per half-move."""
    with jsonlines.open(in_path) as reader:
        with jsonlines.open(out_path, "w") as writer:
            for obj in tqdm.tqdm(reader):
                state_action = []
                # Drop move numbers such as "1." and keep only the SAN moves.
                parsed_moves = [m for m in obj["moves"].split() if not m.endswith(".")]
                board = chess.Board()
                for m in parsed_moves:
                    fen = board.fen()
                    move = board.push_san(m)
                    state_action.append({"fen": fen, "move": move.uci()})
                outcome = board.outcome()
                # "-" marks games that did not reach a terminal position.
                result = "-" if outcome is None else outcome.result()
                writer.write_all([
                    {**sa, "result": result} for sa in state_action
                ])
```
## Use the Dataset
Using basic `dataset` code:
```python
from datasets import load_dataset
dataset = load_dataset("yp-edu/stockfish-debug")
```
|
zhewenshen/uinauil | ---
dataset_info:
- config_name: eventi
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 2536818
num_examples: 5889
- name: test
num_bytes: 414313
num_examples: 917
download_size: 748319
dataset_size: 2951131
- config_name: facta
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 1048929
num_examples: 2723
- name: test
num_bytes: 748867
num_examples: 1816
download_size: 436679
dataset_size: 1797796
- config_name: haspeede
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': not hate speech
'1': hate speech
splits:
- name: train
num_bytes: 1107858
num_examples: 6839
- name: test
num_bytes: 292096
num_examples: 1263
download_size: 922250
dataset_size: 1399954
- config_name: ironita
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': not ironic
'1': ironic
splits:
- name: train
num_bytes: 481712
num_examples: 3977
- name: test
num_bytes: 102230
num_examples: 872
download_size: 366142
dataset_size: 583942
- config_name: sentipolc
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': negative
'2': positive
'3': mixed
splits:
- name: train
num_bytes: 795582
num_examples: 7410
- name: test
num_bytes: 230399
num_examples: 2000
download_size: 624436
dataset_size: 1025981
- config_name: textualentailment
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text1
dtype: string
- name: text2
dtype: string
splits:
- name: train
num_bytes: 184571
num_examples: 400
- name: test
num_bytes: 106380
num_examples: 400
download_size: 175008
dataset_size: 290951
configs:
- config_name: eventi
data_files:
- split: train
path: eventi/train-*
- split: test
path: eventi/test-*
- config_name: facta
data_files:
- split: train
path: facta/train-*
- split: test
path: facta/test-*
- config_name: haspeede
data_files:
- split: train
path: haspeede/train-*
- split: test
path: haspeede/test-*
- config_name: ironita
data_files:
- split: train
path: ironita/train-*
- split: test
path: ironita/test-*
- config_name: sentipolc
data_files:
- split: train
path: sentipolc/train-*
- split: test
path: sentipolc/test-*
- config_name: textualentailment
data_files:
- split: train
path: textualentailment/train-*
- split: test
path: textualentailment/test-*
---
|
mHossain/final_train_v4_test_400000 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 6678342.9
num_examples: 18000
- name: test
num_bytes: 742038.1
num_examples: 2000
download_size: 3194440
dataset_size: 7420381.0
---
# Dataset Card for "final_train_v4_test_400000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yonatanbitton/SeeTRUE | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: seetrue
pretty_name: SeeTRUE
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- image-captioning
- text-image-matching
task_ids: []
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes, and that SeeTRUE should be used as a *TEST SET*, not as a training set, and especially not to train commercial chatbots. Do not hesitate to contact yonatanbitton@google.com if you have questions about this license."
---
# Dataset Card for SeeTRUE
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The SeeTRUE dataset is a diverse benchmark for meta-evaluation of image-text alignment methods, covering the 4-way combinations of real and synthetic text-and-image pairs. It addresses limitations in current benchmarks, which mainly focus on natural images and often lack challenging negative captions. SeeTRUE enables a better assessment of the generalization abilities of text-image alignment models across various tasks.
We will add more datasets from SeeTRUE (e.g., COCO-Con and PickaPic-Con) upon data release.
Paper: https://arxiv.org/abs/2305.10400
Website: https://wysiwyr-itm.github.io/
### Languages
The dataset supports English language.
## Dataset Structure
### Data Fields
- image: The name of the image file.
- text: The text description that matches with the image.
- label: The binary label. 1 if the text matches with the image, 0 otherwise.
- original_dataset_id: The ID of the dataset where the row originates from.
- dataset_source: The source of the dataset.
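Since `label` is binary and `dataset_source` identifies the originating dataset, a natural evaluation is per-source accuracy. A minimal sketch follows; the rows and predictions are hypothetical:

```python
from collections import defaultdict

# Hypothetical rows with model predictions attached; real rows come from
# the SeeTRUE test split.
rows = [
    {"dataset_source": "coco_t2i", "label": 1, "pred": 1},
    {"dataset_source": "coco_t2i", "label": 0, "pred": 1},
    {"dataset_source": "drawbench", "label": 0, "pred": 0},
]

# Accuracy per source, useful because SeeTRUE mixes real and synthetic pairs.
correct, total = defaultdict(int), defaultdict(int)
for row in rows:
    total[row["dataset_source"]] += 1
    correct[row["dataset_source"]] += int(row["label"] == row["pred"])

accuracy = {src: correct[src] / total[src] for src in total}
print(accuracy)
```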
### Data Splits
SeeTRUE contains a single split: TEST, and should not be used for training.
## Dataset Creation
The dataset has been created by sourcing and matching images and text from multiple datasets. More information can be found in the paper: https://arxiv.org/abs/2305.10400.
### Licensing Information
The dataset is under the CC-By 4.0 license.
### Citation Information
```bibtex
@article{yarom2023you,
  title={What You See is What You Read? Improving Text-Image Alignment Evaluation},
  author={Yarom, Michal and Bitton, Yonatan and Changpinyo, Soravit and Aharoni, Roee and Herzig, Jonathan and Lang, Oran and Ofek, Eran and Szpektor, Idan},
  journal={arXiv preprint arXiv:2305.10400},
  year={2023}
}
```
 |
NMashalov/ruArxivmmd | ---
dataset_info:
features:
- name: en
dtype: string
- name: ru
dtype: string
splits:
- name: train
num_bytes: 1506208
num_examples: 8
download_size: 676013
dataset_size: 1506208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
grosenthal/lat_en_loeb | ---
dataset_info:
features:
- name: id
dtype: int64
- name: la
dtype: string
- name: en
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 31372661.713349972
num_examples: 81096
- name: test
num_bytes: 3921582.7141687465
num_examples: 10137
- name: valid
num_bytes: 3921969.5724812816
num_examples: 10138
download_size: 25067983
dataset_size: 39216214.0
---
# Dataset Card for "lat_en_loeb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ottopilot/shirome-sd15-data | ---
license: cc-by-nc-nd-4.0
---
|
aaadw/NIPS2023_LLM_Competition | ---
license: apache-2.0
---
|
yuchenlin/NaturalChat_en_zh | ---
configs:
- config_name: sharegpt_zh
data_files:
- split: train
path: "sharegpt_zh.jsonl"
- config_name: sharegpt_en
data_files:
- split: train
path: "sharegpt_en.jsonl"
- config_name: wildchat_zh
data_files:
- split: train
path: "wildeval_zh.jsonl"
- config_name: wildchat_en
data_files:
- split: train
path: "wildeval_en.jsonl"
- config_name: olcc_zh
data_files:
- split: train
path: "olcc_zh.jsonl"
- config_name: man13k_zh
data_files:
- split: train
path: "man13k_zh.jsonl"
--- |
lorinma/Slim-Moss003sft-zh | ---
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- 10K<n<100K
---
Because the raw Moss003 data is too large, simple deduplication was performed.
The deduplication method, roughly: only Chinese-language dialogues were selected; the first question of each dialogue was converted to an embedding using bert-base-chinese; and about 10,000 examples were sampled with a kNN-like method. The data was then converted to the ShareGPT format. |
ghbacct/gold-headlines-price-talk-classification | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 573485.5078864354
num_examples: 9129
- name: test
num_bytes: 143418.49211356466
num_examples: 2283
download_size: 380904
dataset_size: 716904.0
---
# Dataset Card for "gold-headlines-price-talk-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lucchesi/VozMarcus1 | ---
license: openrail
---
|
argmaxinc/whisperkit-evals |
---
pretty_name: "WhisperKit ASR Evaluation Results"
viewer: false
library_name: whisperkit
tags:
- whisper
- whisperkit
- coreml
- asr
- quantized
---
# WhisperKit Transcription Quality
## Dataset: `librispeech`
Short-form Audio (<30s/clip) - 5 hours of English audiobook clips
| | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------------:|:---------------------------------------------------------------|
| large-v2 (WhisperOpenAIAPI) | [2.35](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperOpenAIAPI/openai_whisper-large-v2/librispeech) | 100 | 3100 | N/A |
| [large-v2](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2) | [2.77](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2/librispeech) | 96.6 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/2846fd9) |
| [large-v2_949MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2_949MB) | [2.4](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2_949MB/librispeech) | 94.6 | 949 | [Link](https://github.com/argmaxinc/WhisperKit/commit/eca4a2e) |
| [large-v2_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2_turbo) | [2.76](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2_turbo/librispeech) | 96.6 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/2846fd9) |
| [large-v2_turbo_955MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2_turbo_955MB) | [2.41](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v2_turbo_955MB/librispeech) | 94.6 | 955 | [Link](https://github.com/argmaxinc/WhisperKit/commit/cf75348) |
| [large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3) | [2.04](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3/librispeech) | 95.2 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/2846fd9) |
| [large-v3_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3_turbo) | [2.03](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3_turbo/librispeech) | 95.4 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/2846fd9) |
| [large-v3_turbo_954MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3_turbo_954MB) | [2.47](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3_turbo_954MB/librispeech) | 93.9 | 954 | [Link](https://github.com/argmaxinc/WhisperKit/commit/cf75348) |
| [distil-large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3) | [2.47](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3/librispeech) | 89.7 | 1510 | [Link](https://github.com/argmaxinc/WhisperKit/commit/cf75348) |
| [distil-large-v3_594MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3_594MB) | [2.96](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3_594MB/librispeech) | 85.4 | 594 | [Link](https://github.com/argmaxinc/WhisperKit/commit/508240f) |
| [distil-large-v3_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3_turbo) | [2.47](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3_turbo/librispeech) | 89.7 | 1510 | [Link](https://github.com/argmaxinc/WhisperKit/commit/508240f) |
| [distil-large-v3_turbo_600MB](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3_turbo_600MB) | [2.78](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3_turbo_600MB/librispeech) | 86.2 | 600 | [Link](https://github.com/argmaxinc/WhisperKit/commit/ae1cf96) |
| [small.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-small.en) | [3.12](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-small.en/librispeech) | 85.8 | 483 | [Link](https://github.com/argmaxinc/WhisperKit/commit/228630c) |
| [small](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-small) | [3.45](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-small/librispeech) | 83 | 483 | [Link](https://github.com/argmaxinc/WhisperKit/commit/228630c) |
| [base.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base.en) | [3.98](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-base.en/librispeech) | 75.3 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/228630c) |
| [base](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base) | [4.97](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-base/librispeech) | 67.2 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/228630c) |
| [tiny.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny.en) | [5.61](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-tiny.en/librispeech) | 63.9 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/228630c) |
| [tiny](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny) | [7.47](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-tiny/librispeech) | 52.5 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/228630c) |
## Dataset: `earnings22`
Long-Form Audio (>1hr/clip) - 120 hours of earnings call recordings in English with various accents
| | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------|----------:|-----------------:|:---------------------------------------------------------------|
| large-v2 (WhisperOpenAIAPI) | [16.27](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperOpenAIAPI/openai_whisper-large-v2/earnings22) | 100 | 3100 | N/A |
| [large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3) | [15.17](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-large-v3/earnings22) | 58.5 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/2846fd9) |
| [distil-large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3) | [15.28](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/distil-whisper_distil-large-v3/earnings22) | 46.3 | 1510 | [Link](https://github.com/argmaxinc/WhisperKit/commit/508240f) |
| [base.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base.en) | [23.49](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-base.en/earnings22) | 6.5 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/dda6571) |
| [tiny.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny.en) | [28.64](https://hf.co/datasets/argmaxinc/whisperkit-evals/tree/main/WhisperKit/openai_whisper-tiny.en/earnings22) | 5.7 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/dda6571) |
### Explanation
We believe that rigorously measuring the quality of inference is necessary for developers and
enterprises to make informed decisions when opting to use optimized or compressed variants of
any machine learning model in production. To contextualize `WhisperKit`, we take the following Whisper
implementations and benchmark them using a consistent evaluation harness:
Server-side:
- `WhisperOpenAIAPI`: [OpenAI's Whisper API](https://platform.openai.com/docs/guides/speech-to-text)
($0.36 per hour of audio as of 02/29/24, 25MB file size limit per request)
On-device:
- `WhisperKit`: Argmax's implementation [[Eval Harness]](https://github.com/argmaxinc/whisperkittools/blob/main/whisperkit/pipelines.py#L100) [[Repo]](https://github.com/argmaxinc/WhisperKit)
- `whisper.cpp`: A C++ implementation from ggerganov [[Eval Harness]](https://github.com/argmaxinc/whisperkittools/blob/main/whisperkit/pipelines.py#L212) [[Repo]](https://github.com/ggerganov/whisper.cpp)
- `WhisperMLX`: A Python implementation from Apple MLX [[Eval Harness]](https://github.com/argmaxinc/whisperkittools/blob/main/whisperkit/pipelines.py#L338) [[Repo]](https://github.com/ml-explore/mlx-examples/blob/main/whisper/whisper/transcribe.py)
(All on-device implementations are available for free under MIT license as of 03/19/2024)
`WhisperOpenAIAPI` sets the reference and we assume that it is using the equivalent of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2)
in float16 precision along with additional undisclosed optimizations from OpenAI. In all measurements, we care primarily about per-example no-regressions (quantified as `qoi` below)
which is a stricter metric compared to dataset average [Word Error Rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). A 100% `qoi` preserves perfect backwards-compatibility on the test distribution and avoids "perceived regressions", the phenomenon
where per-example known behavior changes after a code/model update and causes divergence in downstream code or breaks the user experience itself (even if dataset averages might stay flat
across updates). Pseudocode for `qoi`:
```python
qoi = []
for example in dataset:
no_regression = wer(optimized_model(example)) <= wer(reference_model(example))
qoi.append(no_regression)
qoi = (sum(qoi) / len(qoi)) * 100.
```
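As a runnable toy version of the same computation (the per-example WER values are hypothetical):

```python
# Hypothetical per-example WERs for the reference and an optimized model.
reference_wers = [0.02, 0.05, 0.00, 0.10]
optimized_wers = [0.02, 0.06, 0.00, 0.08]

# An example counts as a no-regression when the optimized model's WER
# does not exceed the reference model's WER on that example.
no_regressions = [opt <= ref for ref, opt in zip(reference_wers, optimized_wers)]
qoi = 100.0 * sum(no_regressions) / len(no_regressions)
print(qoi)  # 75.0
```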
Note that the ordering of models with respect to `WER` does not necessarily match the ordering with respect to `QoI`. This is because the reference model gets assigned
a QoI of 100% by definition. Any per-example regression by other implementations get penalized while per-example improvements are not rewarded. `QoI` (higher is better) matters
where the production behavior is established by the reference results and the goal is to not regress when switching to an optimized or compressed model. On the other hand,
`WER` (lower is better) matters when there is no established production behavior and one is picking the best quality versus model size trade off point.
We anticipate developers that use Whisper (or similar models) in production to have their own Quality Assurance test sets and [whisperkittools](https://github.com/argmaxinc/whisperkittools) offers
the tooling necessary to run the same measurements on such custom test sets; please see [Model Evaluation on Custom Dataset](https://github.com/argmaxinc/whisperkittools) for details.
### Why are there so many Whisper versions?
WhisperKit is an SDK for building speech-to-text features in apps across a wide range of Apple devices. We are working towards abstracting away the model versioning from the developer so WhisperKit
"just works" by deploying the highest-quality model version that a particular device can execute. In the interim, we leave the choice to the developer by providing quality and size trade-offs.
### Datasets
- [librispeech](https://huggingface.co/datasets/argmaxinc/librispeech): ~5 hours of short English audio clips, tests short-form transcription quality
- [earnings22](https://huggingface.co/datasets/argmaxinc/earnings22): ~120 hours of English audio clips from earnings calls with various accents, tests long-form transcription quality
### Reproducing Results
Benchmark results on this page were automatically generated by [whisperkittools](https://github.com/argmaxinc/whisperkittools) using our cluster of Apple Silicon Macs as self-hosted runners on
Github Actions. We periodically recompute these benchmarks as part of our CI pipeline. Due to [security concerns](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#hardening-for-self-hosted-runners),
we are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to
run identical [evaluation jobs](#evaluation) locally. For reference, our M2 Ultra devices complete a `librispeech` + `openai/whisper-large-v3`
evaluation in under 1 hour regardless of the Whisper implementation. The oldest Apple Silicon Macs should take less than 1 day to complete the same evaluation.
### Glossary
- `_turbo`: Indicates the presence of additional optimizations (not compression) to unlock streaming transcription
as described in our [Blog Post](https://www.takeargmax.com/blog/whisperkit).
- `_*MB`: Indicates the presence of model compression. Instead of cluttering the filename with details like
`_AudioEncoder-5.8bits_TextDecoder-6.1bits_QLoRA-rank=16`, we choose to summarize the compression spec as the
resulting total file size since this is what matters to developers in production.
|
gg-ai/es-2610-no-demoji-m | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: clean_text
dtype: string
- name: sent
dtype: int64
splits:
- name: train
num_bytes: 14582372
num_examples: 37614
- name: test
num_bytes: 2804158
num_examples: 7523
- name: val
num_bytes: 728021
num_examples: 1881
download_size: 12052915
dataset_size: 18114551
---
# Dataset Card for "es-2610-no-demoji-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rs0x29a/the-stack-yaml-camel-k | ---
license: apache-2.0
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: int64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: int64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: int64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 297506.9341430791
num_examples: 40
download_size: 66785
dataset_size: 297506.9341430791
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ibranze/araproje_hellaswag_tr_conf1 | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 162703.0
num_examples: 250
download_size: 0
dataset_size: 162703.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_hellaswag_tr_conf1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stanmalkinson199/NerdieTonyMitchell | ---
license: openrail
---
|
Akarsh/autotrain-data-Test | ---
license: bsd-3-clause
---
|
freshpearYoon/v3_train_free_concat_21 | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 3842614024
num_examples: 2500
download_size: 1836648810
dataset_size: 3842614024
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Imriyaz/Warzone | ---
license: mit
---
|
systemk/wiki-ja-5k | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25322028.21875
num_examples: 5000
- name: dev
num_bytes: 1023838.68
num_examples: 200
download_size: 17169345
dataset_size: 26345866.89875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
KentoTsu/dogday | ---
license: openrail
---
|
zolak/twitter_dataset_78_1713199494 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3681351
num_examples: 8865
download_size: 1845665
dataset_size: 3681351
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
arslanarjumand/reptiles | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2526495389.0143003
num_examples: 3726
- name: test
num_bytes: 635180006.9010239
num_examples: 929
download_size: 3072085903
dataset_size: 3161675395.915324
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
chop555/chop555_dataset2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3562793
num_examples: 1000
download_size: 42571
dataset_size: 3562793
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wesleywt/williams_mtb_hpidb | ---
dataset_info:
features:
- name: is_interaction
dtype: int64
- name: protein_1.id
dtype: string
- name: protein_1.primary
dtype: string
- name: protein_2.id
dtype: string
- name: protein_2.primary
dtype: string
splits:
- name: test
num_bytes: 5138954
num_examples: 4192
- name: train
num_bytes: 19964860
num_examples: 16768
download_size: 16427398
dataset_size: 25103814
---
# Dataset Card for "williams_mtb_hpidb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_80_1713208454 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3658136
num_examples: 9066
download_size: 1827029
dataset_size: 3658136
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tsuinzues/videl | ---
license: openrail
---
|
zgcarvalho/uniref50-test | ---
license: cc-by-4.0
size_categories: 10M<n<100M
pretty_name: UniRef50
tags:
- biology
- protein
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
splits:
- name: train
num_bytes: 15468741441.32825
num_examples: 49719601
- name: test
num_bytes: 3867185593.6717486
num_examples: 12429901
download_size: 18625264941
dataset_size: 19335927035.0
---
# Dataset Card for UniRef50
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/maidena_ange_futokunoguild | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Maidena Ange
This is the dataset of Maidena Ange, containing 220 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 220 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 542 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 220 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 220 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 220 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 220 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 220 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 542 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 542 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 542 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
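
The table above lists zip archives stored directly in this dataset repository. As a minimal sketch (assuming the `huggingface_hub` package is installed and that the archive names follow the `dataset-<variant>.zip` pattern shown in the table — `download_variant` and `variant_filename` are illustrative helpers, not part of any official API), one way to fetch a variant programmatically:

```python
def variant_filename(variant: str) -> str:
    """Map a variant name from the table (e.g. '384x512') to its archive name."""
    return f"dataset-{variant}.zip"

def download_variant(variant: str) -> str:
    """Download one variant archive from this dataset repo; returns the local path."""
    # Deferred import so the filename helper stays usable without the package.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(
        repo_id="CyberHarem/maidena_ange_futokunoguild",
        filename=variant_filename(variant),
        repo_type="dataset",  # dataset repos require repo_type="dataset"
    )
```

For example, `download_variant("384x512")` would fetch `dataset-384x512.zip` into the local Hugging Face cache.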
|
cryptonation/nbahistory | ---
license: openrail
---
|
sr2077/BloombergQuint-llama2 | ---
license: pddl
---
|
SUSTech/prm800k-flatten | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: history
sequence: string
- name: problem
dtype: string
- name: completions
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 817748154
num_examples: 1003682
- name: test
num_bytes: 21389306
num_examples: 27222
download_size: 95254227
dataset_size: 839137460
---
# Dataset Card for "prm800k-flatten"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
316usman/thematic2d_rr | ---
dataset_info:
features:
- name: text
dtype: string
- name: document_url
dtype: string
- name: source_url
dtype: string
- name: num_tokens
dtype: int64
splits:
- name: train
num_bytes: 34951517.56298589
num_examples: 54831
download_size: 12735615
dataset_size: 34951517.56298589
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|