Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 578, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1885, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 597, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1392, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1041, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 999, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1740, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1896, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

Need help making the dataset viewer work? Review the documentation on configuring the dataset viewer, or open a discussion for direct support.
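The failure comes from the dict-valued columns in the rows below: fields such as "model_kwargs", "processor_kwargs", and "torch_compile_config" are empty dicts in every row shown, so Arrow infers a struct type with no child fields, which the Parquet writer cannot represent. One possible workaround, sketched below with pyarrow, is to serialize the dict-valued fields to JSON strings before writing; the error's own suggestion of adding a dummy child field to the struct schema would also work. The row values and output file name here are illustrative, not taken from this repository.

import json
import pyarrow as pa
import pyarrow.parquet as pq

# A row shaped like the benchmark configs below; the empty dict is the problem,
# because Arrow infers it as struct<> (a struct with no child fields).
row = {"model": "google-bert/bert-base-uncased", "seed": 42, "model_kwargs": {}}

# Workaround: store dict-valued fields as JSON strings so the column becomes a
# plain string column instead of an empty struct.
flattened = {
    key: json.dumps(value) if isinstance(value, dict) else value
    for key, value in row.items()
}
table = pa.Table.from_pylist([flattened])
pq.write_table(table, "benchmark_rows.parquet")  # succeeds: no empty-struct column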

Columns (name / type):

config         dict
report         dict
name           string
backend        dict
scenario       dict
launcher       dict
environment    dict
print_report   bool
log_report     bool
overall        dict
warmup         dict
train          dict
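For reference, a sketch of how these columns could be consumed once the rows are loaded. The repository id is a placeholder, and the snippet assumes the split loads with the datasets library even though the viewer's Parquet export fails; as the preview below shows, some rows populate only "config"/"report" while others populate the exploded columns, so nulls are guarded.

from datasets import load_dataset

# Placeholder repo id, not the actual repository name.
ds = load_dataset("username/optimum-benchmark-results", split="train")

for row in ds:
    # Rows that only carry "config"/"report" leave the exploded columns null.
    if row["name"] is None or row["train"] is None:
        continue
    print(row["name"], "->", row["train"]["throughput"]["value"], "samples/s")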
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1323.945984, "max_global_vram": 68702.69952, "max_process_vram": 307917.320192, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.63124169921875, 0.0453546142578125, 0.04383636856079102, 0.04199604034423828, 0.040884994506835935 ], "count": 5, "total": 0.8033137168884277, "mean": 0.16066274337768555, "p50": 0.04383636856079102, "p90": 0.396886865234375, "p95": 0.5140642822265624, "p99": 0.6078062158203125, "stdev": 0.23529446049911415, "stdev_": 146.452410529294 }, "throughput": { "unit": "samples/s", "value": 62.24218378054224 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1323.945984, "max_global_vram": 68702.69952, "max_process_vram": 307917.320192, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.63124169921875, 0.0453546142578125 ], "count": 2, "total": 0.6765963134765625, "mean": 0.33829815673828123, "p50": 0.33829815673828123, "p90": 0.5726529907226562, "p95": 0.601947344970703, "p99": 0.6253828283691406, "stdev": 0.2929435424804687, "stdev_": 86.59330139569742 }, "throughput": { "unit": "samples/s", "value": 11.823889431045686 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1323.945984, "max_global_vram": 68702.69952, "max_process_vram": 307917.320192, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.04383636856079102, 0.04199604034423828, 0.040884994506835935 ], "count": 3, "total": 0.12671740341186524, "mean": 0.04223913447062175, "p50": 0.04199604034423828, "p90": 0.043468302917480474, "p95": 0.043652335739135746, "p99": 0.043799561996459964, "stdev": 0.001217093057877778, "stdev_": 2.8814346532699266 }, "throughput": { "unit": "samples/s", "value": 142.04836522332465 }, "energy": null, "efficiency": null } }
null (×12)
cuda_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 1323.945984, "max_global_vram": 68702.69952, "max_process_vram": 307917.320192, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.63124169921875, 0.0453546142578125, 0.04383636856079102, 0.04199604034423828, 0.040884994506835935 ], "count": 5, "total": 0.8033137168884277, "mean": 0.16066274337768555, "p50": 0.04383636856079102, "p90": 0.396886865234375, "p95": 0.5140642822265624, "p99": 0.6078062158203125, "stdev": 0.23529446049911415, "stdev_": 146.452410529294 }, "throughput": { "unit": "samples/s", "value": 62.24218378054224 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1323.945984, "max_global_vram": 68702.69952, "max_process_vram": 307917.320192, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.63124169921875, 0.0453546142578125 ], "count": 2, "total": 0.6765963134765625, "mean": 0.33829815673828123, "p50": 0.33829815673828123, "p90": 0.5726529907226562, "p95": 0.601947344970703, "p99": 0.6253828283691406, "stdev": 0.2929435424804687, "stdev_": 86.59330139569742 }, "throughput": { "unit": "samples/s", "value": 11.823889431045686 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1323.945984, "max_global_vram": 68702.69952, "max_process_vram": 307917.320192, "max_reserved": 2497.708032, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "values": [ 0.04383636856079102, 0.04199604034423828, 0.040884994506835935 ], "count": 3, "total": 0.12671740341186524, "mean": 0.04223913447062175, "p50": 0.04199604034423828, "p90": 0.043468302917480474, "p95": 0.043652335739135746, "p99": 0.043799561996459964, "stdev": 0.001217093057877778, "stdev_": 2.8814346532699266 }, "throughput": { "unit": "samples/s", "value": 142.04836522332465 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1666.912256, "max_global_vram": 68702.69952, "max_process_vram": 390531.780608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.73303515625, 0.036401252746582034, 0.03779421615600586, 0.04240351104736328, 0.04264223098754883 ], "count": 5, "total": 0.8922763671875, "mean": 0.1784552734375, "p50": 0.04240351104736328, "p90": 0.4568779861450196, "p95": 0.5949565711975097, "p99": 0.705419439239502, "stdev": 0.2773009155409712, "stdev_": 155.38958877451702 }, "throughput": { "unit": "samples/s", "value": 56.03644995955963 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1666.912256, "max_global_vram": 68702.69952, "max_process_vram": 390531.780608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.73303515625, 0.036401252746582034 ], "count": 2, "total": 0.769436408996582, "mean": 0.384718204498291, "p50": 0.384718204498291, "p90": 0.6633717658996583, "p95": 0.6982034610748291, "p99": 0.7260688172149659, "stdev": 0.348316951751709, "stdev_": 90.53820372393017 }, "throughput": { "unit": "samples/s", "value": 10.397220493416418 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1666.912256, "max_global_vram": 68702.69952, "max_process_vram": 390531.780608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.03779421615600586, 0.04240351104736328, 0.04264223098754883 ], "count": 3, "total": 0.12283995819091797, "mean": 0.04094665273030599, "p50": 0.04240351104736328, "p90": 0.04259448699951172, "p95": 0.042618358993530274, "p99": 0.04263745658874512, "stdev": 0.002231238679702286, "stdev_": 5.449135719098405 }, "throughput": { "unit": "samples/s", "value": 146.53212411570823 }, "energy": null, "efficiency": null } }
null (×12)
cuda_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 1666.912256, "max_global_vram": 68702.69952, "max_process_vram": 390531.780608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.73303515625, 0.036401252746582034, 0.03779421615600586, 0.04240351104736328, 0.04264223098754883 ], "count": 5, "total": 0.8922763671875, "mean": 0.1784552734375, "p50": 0.04240351104736328, "p90": 0.4568779861450196, "p95": 0.5949565711975097, "p99": 0.705419439239502, "stdev": 0.2773009155409712, "stdev_": 155.38958877451702 }, "throughput": { "unit": "samples/s", "value": 56.03644995955963 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1666.912256, "max_global_vram": 68702.69952, "max_process_vram": 390531.780608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.73303515625, 0.036401252746582034 ], "count": 2, "total": 0.769436408996582, "mean": 0.384718204498291, "p50": 0.384718204498291, "p90": 0.6633717658996583, "p95": 0.6982034610748291, "p99": 0.7260688172149659, "stdev": 0.348316951751709, "stdev_": 90.53820372393017 }, "throughput": { "unit": "samples/s", "value": 10.397220493416418 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1666.912256, "max_global_vram": 68702.69952, "max_process_vram": 390531.780608, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "values": [ 0.03779421615600586, 0.04240351104736328, 0.04264223098754883 ], "count": 3, "total": 0.12283995819091797, "mean": 0.04094665273030599, "p50": 0.04240351104736328, "p90": 0.04259448699951172, "p95": 0.042618358993530274, "p99": 0.04263745658874512, "stdev": 0.002231238679702286, "stdev_": 5.449135719098405 }, "throughput": { "unit": "samples/s", "value": 146.53212411570823 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1332.3264, "max_global_vram": 68702.69952, "max_process_vram": 297091.710976, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.6690802612304687, 0.0466182861328125, 0.045612041473388674, 0.04551459884643555, 0.045711402893066404 ], "count": 5, "total": 0.8525365905761718, "mean": 0.17050731811523437, "p50": 0.045711402893066404, "p90": 0.4200954711914063, "p95": 0.5445878662109374, "p99": 0.6441817822265624, "stdev": 0.2492867835670069, "stdev_": 146.20298197320236 }, "throughput": { "unit": "samples/s", "value": 58.64850911115543 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1332.3264, "max_global_vram": 68702.69952, "max_process_vram": 297091.710976, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.6690802612304687, 0.0466182861328125 ], "count": 2, "total": 0.7156985473632812, "mean": 0.3578492736816406, "p50": 0.3578492736816406, "p90": 0.6068340637207031, "p95": 0.6379571624755859, "p99": 0.6628556414794922, "stdev": 0.3112309875488281, "stdev_": 86.9726475470546 }, "throughput": { "unit": "samples/s", "value": 11.177890509171709 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1332.3264, "max_global_vram": 68702.69952, "max_process_vram": 297091.710976, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.045612041473388674, 0.04551459884643555, 0.045711402893066404 ], "count": 3, "total": 0.13683804321289064, "mean": 0.04561268107096355, "p50": 0.045612041473388674, "p90": 0.04569153060913086, "p95": 0.04570146675109863, "p99": 0.04570941566467285, "stdev": 0.00008034618848608875, "stdev_": 0.17614879590411997 }, "throughput": { "unit": "samples/s", "value": 131.54236627014507 }, "energy": null, "efficiency": null } }
null (×12)
cuda_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 1332.3264, "max_global_vram": 68702.69952, "max_process_vram": 297091.710976, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.6690802612304687, 0.0466182861328125, 0.045612041473388674, 0.04551459884643555, 0.045711402893066404 ], "count": 5, "total": 0.8525365905761718, "mean": 0.17050731811523437, "p50": 0.045711402893066404, "p90": 0.4200954711914063, "p95": 0.5445878662109374, "p99": 0.6441817822265624, "stdev": 0.2492867835670069, "stdev_": 146.20298197320236 }, "throughput": { "unit": "samples/s", "value": 58.64850911115543 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1332.3264, "max_global_vram": 68702.69952, "max_process_vram": 297091.710976, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.6690802612304687, 0.0466182861328125 ], "count": 2, "total": 0.7156985473632812, "mean": 0.3578492736816406, "p50": 0.3578492736816406, "p90": 0.6068340637207031, "p95": 0.6379571624755859, "p99": 0.6628556414794922, "stdev": 0.3112309875488281, "stdev_": 86.9726475470546 }, "throughput": { "unit": "samples/s", "value": 11.177890509171709 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1332.3264, "max_global_vram": 68702.69952, "max_process_vram": 297091.710976, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "values": [ 0.045612041473388674, 0.04551459884643555, 0.045711402893066404 ], "count": 3, "total": 0.13683804321289064, "mean": 0.04561268107096355, "p50": 0.045612041473388674, "p90": 0.04569153060913086, "p95": 0.04570146675109863, "p99": 0.04570941566467285, "stdev": 0.00008034618848608875, "stdev_": 0.17614879590411997 }, "throughput": { "unit": "samples/s", "value": 131.54236627014507 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1326.923776, "max_global_vram": 68702.69952, "max_process_vram": 330356.645888, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.6141828002929688, 0.05372709655761719, 0.0780350341796875, 0.04707426834106445, 0.04599682235717773 ], "count": 5, "total": 0.8390160217285156, "mean": 0.16780320434570312, "p50": 0.05372709655761719, "p90": 0.3997236938476563, "p95": 0.5069532470703124, "p99": 0.5927368896484375, "stdev": 0.22348990899055132, "stdev_": 133.18572184719676 }, "throughput": { "unit": "samples/s", "value": 59.593617648673145 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1326.923776, "max_global_vram": 68702.69952, "max_process_vram": 330356.645888, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.6141828002929688, 0.05372709655761719 ], "count": 2, "total": 0.667909896850586, "mean": 0.333954948425293, "p50": 0.333954948425293, "p90": 0.5581372299194336, "p95": 0.5861600151062012, "p99": 0.6085782432556153, "stdev": 0.2802278518676758, "stdev_": 83.91187290053402 }, "throughput": { "unit": "samples/s", "value": 11.977663510785844 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1326.923776, "max_global_vram": 68702.69952, "max_process_vram": 330356.645888, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.0780350341796875, 0.04707426834106445, 0.04599682235717773 ], "count": 3, "total": 0.17110612487792967, "mean": 0.05703537495930989, "p50": 0.04707426834106445, "p90": 0.0718428810119629, "p95": 0.0749389575958252, "p99": 0.07741581886291504, "stdev": 0.01485551498021394, "stdev_": 26.046142399891547 }, "throughput": { "unit": "samples/s", "value": 105.19787069481902 }, "energy": null, "efficiency": null } }
null (×12)
cuda_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 1326.923776, "max_global_vram": 68702.69952, "max_process_vram": 330356.645888, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.6141828002929688, 0.05372709655761719, 0.0780350341796875, 0.04707426834106445, 0.04599682235717773 ], "count": 5, "total": 0.8390160217285156, "mean": 0.16780320434570312, "p50": 0.05372709655761719, "p90": 0.3997236938476563, "p95": 0.5069532470703124, "p99": 0.5927368896484375, "stdev": 0.22348990899055132, "stdev_": 133.18572184719676 }, "throughput": { "unit": "samples/s", "value": 59.593617648673145 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1326.923776, "max_global_vram": 68702.69952, "max_process_vram": 330356.645888, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.6141828002929688, 0.05372709655761719 ], "count": 2, "total": 0.667909896850586, "mean": 0.333954948425293, "p50": 0.333954948425293, "p90": 0.5581372299194336, "p95": 0.5861600151062012, "p99": 0.6085782432556153, "stdev": 0.2802278518676758, "stdev_": 83.91187290053402 }, "throughput": { "unit": "samples/s", "value": 11.977663510785844 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1326.923776, "max_global_vram": 68702.69952, "max_process_vram": 330356.645888, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "values": [ 0.0780350341796875, 0.04707426834106445, 0.04599682235717773 ], "count": 3, "total": 0.17110612487792967, "mean": 0.05703537495930989, "p50": 0.04707426834106445, "p90": 0.0718428810119629, "p95": 0.0749389575958252, "p99": 0.07741581886291504, "stdev": 0.01485551498021394, "stdev_": 26.046142399891547 }, "throughput": { "unit": "samples/s", "value": 105.19787069481902 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1345.29024, "max_global_vram": 68702.69952, "max_process_vram": 371563.76576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6854241333007812, 0.04177446365356445, 0.0409205436706543, 0.04336630630493164, 0.04333718872070313 ], "count": 5, "total": 0.8548226356506348, "mean": 0.17096452713012694, "p50": 0.04333718872070313, "p90": 0.42860100250244143, "p95": 0.5570125679016111, "p99": 0.6597418202209472, "stdev": 0.25723150661751354, "stdev_": 150.45899341547374 }, "throughput": { "unit": "samples/s", "value": 58.49166589036718 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1345.29024, "max_global_vram": 68702.69952, "max_process_vram": 371563.76576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6854241333007812, 0.04177446365356445 ], "count": 2, "total": 0.7271985969543456, "mean": 0.3635992984771728, "p50": 0.3635992984771728, "p90": 0.6210591663360595, "p95": 0.6532416498184204, "p99": 0.678987636604309, "stdev": 0.32182483482360835, "stdev_": 88.51085141568635 }, "throughput": { "unit": "samples/s", "value": 11.001121335362326 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1345.29024, "max_global_vram": 68702.69952, "max_process_vram": 371563.76576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.0409205436706543, 0.04336630630493164, 0.04333718872070313 ], "count": 3, "total": 0.12762403869628908, "mean": 0.04254134623209636, "p50": 0.04333718872070313, "p90": 0.04336048278808594, "p95": 0.04336339454650879, "p99": 0.04336572395324707, "stdev": 0.0011461421278389156, "stdev_": 2.6941839630222626 }, "throughput": { "unit": "samples/s", "value": 141.039260188554 }, "energy": null, "efficiency": null } }
null (×12)
cuda_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }
true
true
null (×12)
{ "memory": { "unit": "MB", "max_ram": 1345.29024, "max_global_vram": 68702.69952, "max_process_vram": 371563.76576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6854241333007812, 0.04177446365356445, 0.0409205436706543, 0.04336630630493164, 0.04333718872070313 ], "count": 5, "total": 0.8548226356506348, "mean": 0.17096452713012694, "p50": 0.04333718872070313, "p90": 0.42860100250244143, "p95": 0.5570125679016111, "p99": 0.6597418202209472, "stdev": 0.25723150661751354, "stdev_": 150.45899341547374 }, "throughput": { "unit": "samples/s", "value": 58.49166589036718 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1345.29024, "max_global_vram": 68702.69952, "max_process_vram": 371563.76576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.6854241333007812, 0.04177446365356445 ], "count": 2, "total": 0.7271985969543456, "mean": 0.3635992984771728, "p50": 0.3635992984771728, "p90": 0.6210591663360595, "p95": 0.6532416498184204, "p99": 0.678987636604309, "stdev": 0.32182483482360835, "stdev_": 88.51085141568635 }, "throughput": { "unit": "samples/s", "value": 11.001121335362326 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1345.29024, "max_global_vram": 68702.69952, "max_process_vram": 371563.76576, "max_reserved": 2894.06976, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "values": [ 0.0409205436706543, 0.04336630630493164, 0.04333718872070313 ], "count": 3, "total": 0.12762403869628908, "mean": 0.04254134623209636, "p50": 0.04333718872070313, "p90": 0.04336048278808594, "p95": 0.04336339454650879, "p99": 0.04336572395324707, "stdev": 0.0011461421278389156, "stdev_": 2.6941839630222626 }, "throughput": { "unit": "samples/s", "value": 141.039260188554 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.1+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "6", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "warn", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082014.490624, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-122-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]", "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 8, "gpu_vram_mb": 549621596160, "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": null, "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": "0.13.2", "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1348.337664, "max_global_vram": 68702.69952, "max_process_vram": 424925.315072, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.6422979125976562, 0.07310539245605469, 0.0723471450805664, 0.072256103515625, 0.07220970153808594 ], "count": 5, "total": 0.9322162551879883, "mean": 0.18644325103759768, "p50": 0.0723471450805664, "p90": 0.4146209045410157, "p95": 0.5284594085693358, "p99": 0.6195302117919922, "stdev": 0.22792756416848997, "stdev_": 122.25036996513576 }, "throughput": { "unit": "samples/s", "value": 53.63562341006071 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1348.337664, "max_global_vram": 68702.69952, "max_process_vram": 424925.315072, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.6422979125976562, 0.07310539245605469 ], "count": 2, "total": 0.715403305053711, "mean": 0.3577016525268555, "p50": 0.3577016525268555, "p90": 0.5853786605834961, "p95": 0.6138382865905762, "p99": 0.6366059873962402, "stdev": 0.2845962600708008, "stdev_": 79.56246722942771 }, "throughput": { "unit": "samples/s", "value": 11.1825035521738 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1348.337664, "max_global_vram": 68702.69952, "max_process_vram": 424925.315072, "max_reserved": 3919.577088, "max_allocated": 3695.353344 }, "latency": { "unit": "s", "values": [ 0.0723471450805664, 0.072256103515625, 0.07220970153808594 ], "count": 3, "total": 0.21681295013427732, "mean": 0.07227098337809244, "p50": 0.072256103515625, "p90": 0.07232893676757812, "p95": 0.07233804092407226, "p99": 0.07234532424926757, "stdev": 0.00005708905074636964, "stdev_": 0.07899304544910217 }, "throughput": { "unit": "samples/s", "value": 83.02087116499351 }, "energy": null, "efficiency": null } }

Downloads last month: 1,030