Column schema of the results table (each row is one lm-evaluation-harness results record; two rows, shown below):

Column                           Type              Range / distinct values
results                          dict
group_subtasks                   dict
configs                          dict
versions                         dict
n-shot                           dict
higher_is_better                 dict
n-samples                        dict
config                           dict
git_hash                         null
date                             float64           1.73B .. 1.73B
pretty_env_info                  stringclasses     1 value
transformers_version             stringclasses     1 value
upper_git_hash                   null
tokenizer_pad_token              sequencelengths   2 .. 2
tokenizer_eos_token              sequencelengths   2 .. 2
tokenizer_bos_token              sequencelengths   2 .. 2
eot_token_id                     int64             0 .. 0
max_length                       int64             2.05k .. 2.05k
task_hashes                      dict
model_source                     stringclasses     1 value
model_name                       stringclasses     1 value
model_name_sanitized             stringclasses     1 value
system_instruction               null
system_instruction_sha           null
fewshot_as_multiturn             bool              1 class
chat_template                    null
chat_template_sha                null
start_time                       float64           2.61k .. 3.06k
end_time                         float64           2.92k .. 3.37k
total_evaluation_time_seconds    stringclasses     2 values
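The rows can be pulled locally with the datasets library for inspection. A minimal sketch, assuming the auto-converted Parquet data lives in a single train split; the repository id below is a placeholder, since the dataset's repo name is not shown on this page:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual "<owner>/<dataset>" name.
REPO_ID = "owner/pythia-160m-hellaswag-eval-results"

# The viewer above shows two rows in one auto-converted split (assumed "train").
ds = load_dataset(REPO_ID, split="train")

print(ds.column_names)  # results, group_subtasks, configs, versions, ...

row = ds[0]
# Exact nesting depends on how the JSON was serialized; with dict columns the
# HellaSwag metrics are reachable like this:
print(row["results"]["hellaswag"]["acc,none"])       # ~0.287
print(row["results"]["hellaswag"]["acc_norm,none"])  # ~0.308
```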
{ "hellaswag": { "alias": "hellaswag", "acc,none": 0.28719378609838675, "acc_stderr,none": 0.004515280911468785, "acc_norm,none": 0.3082055367456682, "acc_norm_stderr,none": 0.004608082815535459 } }
{ "hellaswag": [] }
{ "hellaswag": { "task": "hellaswag", "tag": [ "multiple_choice" ], "dataset_path": "hellaswag", "dataset_kwargs": { "trust_remote_code": true }, "training_split": "train", "validation_split": "validation", "process_docs": "def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:\n def _process_doc(doc):\n ctx = doc[\"ctx_a\"] + \" \" + doc[\"ctx_b\"].capitalize()\n out_doc = {\n \"query\": preprocess(doc[\"activity_label\"] + \": \" + ctx),\n \"choices\": [preprocess(ending) for ending in doc[\"endings\"]],\n \"gold\": int(doc[\"label\"]),\n }\n return out_doc\n\n return dataset.map(_process_doc)\n", "doc_to_text": "{{query}}", "doc_to_target": "{{label}}", "doc_to_choice": "choices", "description": "", "target_delimiter": " ", "fewshot_delimiter": "\n\n", "num_fewshot": 0, "metric_list": [ { "metric": "acc", "aggregation": "mean", "higher_is_better": true }, { "metric": "acc_norm", "aggregation": "mean", "higher_is_better": true } ], "output_type": "multiple_choice", "repeats": 1, "should_decontaminate": false, "metadata": { "version": 1 } } }
{ "hellaswag": 1 }
{ "hellaswag": 0 }
{ "hellaswag": { "acc": true, "acc_norm": true } }
{ "hellaswag": { "original": 10042, "effective": 10042 } }
{ "model": "hf", "model_args": "pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=float", "model_num_parameters": 162322944, "model_dtype": "torch.float32", "model_revision": "step100000", "model_sha": "4081105d3b42adff0a82b8669cae69ed88dfbd38", "batch_size": "auto:4", "batch_sizes": [ 64, 64, 64, 64, 64 ], "device": "cuda", "use_cache": null, "limit": null, "bootstrap_iters": 100000, "gen_kwargs": null, "random_seed": 0, "numpy_seed": 1234, "torch_seed": 1234, "fewshot_seed": 1234 }
null
1,729,253,122.804515
PyTorch version: 2.4.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.30.4 Libc version: glibc-2.35 Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.1.85+-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 535.104.05 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.20GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 Stepping: 0 BogoMIPS: 4399.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB (1 instance) L1i cache: 32 KiB (1 instance) L2 cache: 256 KiB (1 instance) L3 cache: 55 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0,1 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled) Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] optree==0.13.0 [pip3] torch==2.4.1+cu121 [pip3] torchaudio==2.4.1+cu121 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.19.1+cu121 [conda] Could not collect
transformers_version: 4.44.2
upper_git_hash: null
tokenizer_pad_token: [ "<|endoftext|>", "0" ]
tokenizer_eos_token: [ "<|endoftext|>", "0" ]
tokenizer_bos_token: [ "<|endoftext|>", "0" ]
eot_token_id: 0
max_length: 2048
task_hashes: { "hellaswag": "edcc7edd27a555d3f7cbca0641152b2c5e4eb6eb79c5e62d7fe5887f47814323" }
model_source: hf
model_name: EleutherAI/pythia-160m
model_name_sanitized: EleutherAI__pythia-160m
system_instruction: null
system_instruction_sha: null
fewshot_as_multiturn: false
chat_template: null
chat_template_sha: null
start_time: 2612.058862
end_time: 2922.27257
total_evaluation_time_seconds: 310.21370772399996
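The config field above records everything needed to rerun this evaluation (model revision, dtype, batch sizing, seeds). A minimal sketch of the equivalent call through lm-evaluation-harness's Python API; keyword names follow recent lm-eval releases and may differ in older versions, and the recorded seeds (0 / 1234 / 1234 / 1234) can be passed through the corresponding seed arguments if exact reproducibility matters:

```python
import lm_eval

# Arguments taken from the "config" field of the row above.
out = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=float",
    tasks=["hellaswag"],
    num_fewshot=0,        # matches the n-shot field
    batch_size="auto:4",  # automatic batch-size selection, as recorded in config
    device="cuda",
)

hs = out["results"]["hellaswag"]
print(hs["acc,none"], hs["acc_norm,none"])
```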
{ "hellaswag": { "alias": "hellaswag", "acc,none": 0.28719378609838675, "acc_stderr,none": 0.004515280911468785, "acc_norm,none": 0.3082055367456682, "acc_norm_stderr,none": 0.004608082815535459 } }
{ "hellaswag": [] }
{ "hellaswag": { "task": "hellaswag", "tag": [ "multiple_choice" ], "dataset_path": "hellaswag", "dataset_kwargs": { "trust_remote_code": true }, "training_split": "train", "validation_split": "validation", "process_docs": "def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:\n def _process_doc(doc):\n ctx = doc[\"ctx_a\"] + \" \" + doc[\"ctx_b\"].capitalize()\n out_doc = {\n \"query\": preprocess(doc[\"activity_label\"] + \": \" + ctx),\n \"choices\": [preprocess(ending) for ending in doc[\"endings\"]],\n \"gold\": int(doc[\"label\"]),\n }\n return out_doc\n\n return dataset.map(_process_doc)\n", "doc_to_text": "{{query}}", "doc_to_target": "{{label}}", "doc_to_choice": "choices", "description": "", "target_delimiter": " ", "fewshot_delimiter": "\n\n", "num_fewshot": 0, "metric_list": [ { "metric": "acc", "aggregation": "mean", "higher_is_better": true }, { "metric": "acc_norm", "aggregation": "mean", "higher_is_better": true } ], "output_type": "multiple_choice", "repeats": 1, "should_decontaminate": false, "metadata": { "version": 1 } } }
{ "hellaswag": 1 }
{ "hellaswag": 0 }
{ "hellaswag": { "acc": true, "acc_norm": true } }
{ "hellaswag": { "original": 10042, "effective": 10042 } }
{ "model": "hf", "model_args": "pretrained=EleutherAI/pythia-160m,revision=step100000,dtype=float", "model_num_parameters": 162322944, "model_dtype": "torch.float32", "model_revision": "step100000", "model_sha": "4081105d3b42adff0a82b8669cae69ed88dfbd38", "batch_size": "auto:4", "batch_sizes": [ 64, 64, 64, 64, 64 ], "device": "cuda", "use_cache": null, "limit": null, "bootstrap_iters": 100000, "gen_kwargs": null, "random_seed": 0, "numpy_seed": 1234, "torch_seed": 1234, "fewshot_seed": 1234 }
null
1,729,253,570.224266
PyTorch version: 2.4.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: 14.0.0-1ubuntu1.1 CMake version: version 3.30.4 Libc version: glibc-2.35 Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.1.85+-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 535.104.05 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.20GHz CPU family: 6 Model: 79 Thread(s) per core: 2 Core(s) per socket: 1 Socket(s): 1 Stepping: 0 BogoMIPS: 4399.99 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 32 KiB (1 instance) L1i cache: 32 KiB (1 instance) L2 cache: 256 KiB (1 instance) L3 cache: 55 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0,1 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Mitigation; PTE Inversion Vulnerability Mds: Vulnerable; SMT Host state unknown Vulnerability Meltdown: Vulnerable Vulnerability Mmio stale data: Vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Vulnerable Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled) Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] optree==0.13.0 [pip3] torch==2.4.1+cu121 [pip3] torchaudio==2.4.1+cu121 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.19.1+cu121 [conda] Could not collect
4.44.2
null
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
[ "<|endoftext|>", "0" ]
0
2,048
{ "hellaswag": "edcc7edd27a555d3f7cbca0641152b2c5e4eb6eb79c5e62d7fe5887f47814323" }
hf
EleutherAI/pythia-160m
EleutherAI__pythia-160m
null
null
false
null
null
3,058.06088
3,372.364999
314.3041183739997
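The reported standard errors are consistent with the standard-error-of-the-mean formula over the 10,042 effective HellaSwag samples. A quick sanity check, just arithmetic on the numbers above rather than a call into lm-eval:

```python
import math

n = 10042                      # effective samples ("n-samples" field)
acc = 0.28719378609838675      # "acc,none"
acc_norm = 0.3082055367456682  # "acc_norm,none"

# Standard error of a mean of 0/1 outcomes: sqrt(p * (1 - p) / n).
print(math.sqrt(acc * (1 - acc) / n))            # ~0.004515, matches acc_stderr,none
print(math.sqrt(acc_norm * (1 - acc_norm) / n))  # ~0.004608, matches acc_norm_stderr,none
```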
