Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    DatasetGenerationError
Message:      An error occurred while generating the dataset
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1016, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1016, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1324, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Column schema (name: type):
config: dict
report: dict
name: string
backend: dict
scenario: dict
launcher: dict
environment: dict
overall: dict
warmup: dict
train: dict
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1122.668544, "max_global_vram": 4468.34688, "max_process_vram": 297095.217152, "max_reserved": 2403.336192, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "count": 5, "total": 0.7484651947021486, "mean": 0.1496930389404297, "stdev": 0.2081257648913134, "p50": 0.04572518539428711, "p90": 0.3582818710327149, "p95": 0.46211212692260734, "p99": 0.5451763316345214, "values": [ 0.5659423828125, 0.04679110336303711, 0.04572518539428711, 0.04526006317138672, 0.0447464599609375 ] }, "throughput": { "unit": "samples/s", "value": 66.80337356221017 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1122.668544, "max_global_vram": 4468.34688, "max_process_vram": 297095.217152, "max_reserved": 2403.336192, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "count": 2, "total": 0.6127334861755371, "mean": 0.30636674308776857, "stdev": 0.25957563972473147, "p50": 0.30636674308776857, "p90": 0.5140272548675537, "p95": 0.5399848188400268, "p99": 0.5607508700180054, "values": [ 0.5659423828125, 0.04679110336303711 ] }, "throughput": { "unit": "samples/s", "value": 13.056247423220059 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1122.668544, "max_global_vram": 4468.34688, "max_process_vram": 297095.217152, "max_reserved": 2403.336192, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "count": 3, "total": 0.13573170852661132, "mean": 0.045243902842203775, "stdev": 0.00039972635277217897, "p50": 0.04526006317138672, "p90": 0.04563216094970703, "p95": 0.04567867317199707, "p99": 0.045715882949829104, "values": [ 0.04572518539428711, 0.04526006317138672, 0.0447464599609375 ] }, "throughput": { "unit": "samples/s", "value": 132.61455407430424 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1122.668544, "max_global_vram": 4468.34688, "max_process_vram": 297095.217152, "max_reserved": 2403.336192, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "count": 5, "total": 0.7484651947021486, "mean": 0.1496930389404297, "stdev": 0.2081257648913134, "p50": 0.04572518539428711, "p90": 0.3582818710327149, "p95": 0.46211212692260734, "p99": 0.5451763316345214, "values": [ 0.5659423828125, 0.04679110336303711, 0.04572518539428711, 0.04526006317138672, 0.0447464599609375 ] }, "throughput": { "unit": "samples/s", "value": 66.80337356221017 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1122.668544, "max_global_vram": 4468.34688, "max_process_vram": 297095.217152, "max_reserved": 2403.336192, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "count": 2, "total": 0.6127334861755371, "mean": 0.30636674308776857, "stdev": 0.25957563972473147, "p50": 0.30636674308776857, "p90": 0.5140272548675537, "p95": 0.5399848188400268, "p99": 0.5607508700180054, "values": [ 0.5659423828125, 0.04679110336303711 ] }, "throughput": { "unit": "samples/s", "value": 13.056247423220059 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1122.668544, "max_global_vram": 4468.34688, "max_process_vram": 297095.217152, "max_reserved": 2403.336192, "max_allocated": 2195.345408 }, "latency": { "unit": "s", "count": 3, "total": 0.13573170852661132, "mean": 0.045243902842203775, "stdev": 0.00039972635277217897, "p50": 0.04526006317138672, "p90": 0.04563216094970703, "p95": 0.04567867317199707, "p99": 0.045715882949829104, "values": [ 0.04572518539428711, 0.04526006317138672, 0.0447464599609375 ] }, "throughput": { "unit": "samples/s", "value": 132.61455407430424 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2194.419712, "max_global_vram": 3470.647296, "max_process_vram": 386554.208256, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "count": 5, "total": 3.277213871002197, "mean": 0.6554427742004394, "stdev": 1.2272021662841368, "p50": 0.04227624130249023, "p90": 1.883331108093262, "p95": 2.4965886497497554, "p99": 2.987194683074951, "values": [ 3.10984619140625, 0.04047639465332031, 0.04227624130249023, 0.04105656051635742, 0.043558483123779294 ] }, "throughput": { "unit": "samples/s", "value": 15.25686206884924 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2194.419712, "max_global_vram": 3470.647296, "max_process_vram": 386554.208256, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "count": 2, "total": 3.1503225860595703, "mean": 1.5751612930297851, "stdev": 1.534684898376465, "p50": 1.5751612930297851, "p90": 2.8029092117309573, "p95": 2.9563777015686035, "p99": 3.0791524934387207, "values": [ 3.10984619140625, 0.04047639465332031 ] }, "throughput": { "unit": "samples/s", "value": 2.539422481812066 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2194.419712, "max_global_vram": 3470.647296, "max_process_vram": 386554.208256, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "count": 3, "total": 0.12689128494262694, "mean": 0.042297094980875645, "stdev": 0.0010215120623561975, "p50": 0.04227624130249023, "p90": 0.043302034759521484, "p95": 0.04343025894165039, "p99": 0.04353283828735351, "values": [ 0.04227624130249023, 0.04105656051635742, 0.043558483123779294 ] }, "throughput": { "unit": "samples/s", "value": 141.85371365841698 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2194.419712, "max_global_vram": 3470.647296, "max_process_vram": 386554.208256, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "count": 5, "total": 3.277213871002197, "mean": 0.6554427742004394, "stdev": 1.2272021662841368, "p50": 0.04227624130249023, "p90": 1.883331108093262, "p95": 2.4965886497497554, "p99": 2.987194683074951, "values": [ 3.10984619140625, 0.04047639465332031, 0.04227624130249023, 0.04105656051635742, 0.043558483123779294 ] }, "throughput": { "unit": "samples/s", "value": 15.25686206884924 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2194.419712, "max_global_vram": 3470.647296, "max_process_vram": 386554.208256, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "count": 2, "total": 3.1503225860595703, "mean": 1.5751612930297851, "stdev": 1.534684898376465, "p50": 1.5751612930297851, "p90": 2.8029092117309573, "p95": 2.9563777015686035, "p99": 3.0791524934387207, "values": [ 3.10984619140625, 0.04047639465332031 ] }, "throughput": { "unit": "samples/s", "value": 2.539422481812066 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2194.419712, "max_global_vram": 3470.647296, "max_process_vram": 386554.208256, "max_reserved": 1935.671296, "max_allocated": 1738.272256 }, "latency": { "unit": "s", "count": 3, "total": 0.12689128494262694, "mean": 0.042297094980875645, "stdev": 0.0010215120623561975, "p50": 0.04227624130249023, "p90": 0.043302034759521484, "p95": 0.04343025894165039, "p99": 0.04353283828735351, "values": [ 0.04227624130249023, 0.04105656051635742, 0.043558483123779294 ] }, "throughput": { "unit": "samples/s", "value": 141.85371365841698 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1123.115008, "max_global_vram": 4670.468096, "max_process_vram": 295035.65824, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "count": 5, "total": 0.7899723854064942, "mean": 0.15799447708129882, "stdev": 0.21986792437456745, "p50": 0.04801991271972656, "p90": 0.3780925994873048, "p95": 0.48791122894287103, "p99": 0.5757661325073242, "values": [ 0.5977298583984375, 0.04863671112060547, 0.047656871795654296, 0.04801991271972656, 0.047929031372070316 ] }, "throughput": { "unit": "samples/s", "value": 63.293351671111424 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1123.115008, "max_global_vram": 4670.468096, "max_process_vram": 295035.65824, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "count": 2, "total": 0.646366569519043, "mean": 0.3231832847595215, "stdev": 0.27454657363891605, "p50": 0.3231832847595215, "p90": 0.5428205436706544, "p95": 0.5702752010345459, "p99": 0.5922389269256592, "values": [ 0.5977298583984375, 0.04863671112060547 ] }, "throughput": { "unit": "samples/s", "value": 12.376877730469175 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1123.115008, "max_global_vram": 4670.468096, "max_process_vram": 295035.65824, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "count": 3, "total": 0.1436058158874512, "mean": 0.04786860529581707, "stdev": 0.00015424690414253482, "p50": 0.047929031372070316, "p90": 0.048001736450195315, "p95": 0.048010824584960934, "p99": 0.04801809509277344, "values": [ 0.047656871795654296, 0.04801991271972656, 0.047929031372070316 ] }, "throughput": { "unit": "samples/s", "value": 125.3431129426347 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
cuda_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 1123.115008, "max_global_vram": 4670.468096, "max_process_vram": 295035.65824, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "count": 5, "total": 0.7899723854064942, "mean": 0.15799447708129882, "stdev": 0.21986792437456745, "p50": 0.04801991271972656, "p90": 0.3780925994873048, "p95": 0.48791122894287103, "p99": 0.5757661325073242, "values": [ 0.5977298583984375, 0.04863671112060547, 0.047656871795654296, 0.04801991271972656, 0.047929031372070316 ] }, "throughput": { "unit": "samples/s", "value": 63.293351671111424 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1123.115008, "max_global_vram": 4670.468096, "max_process_vram": 295035.65824, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "count": 2, "total": 0.646366569519043, "mean": 0.3231832847595215, "stdev": 0.27454657363891605, "p50": 0.3231832847595215, "p90": 0.5428205436706544, "p95": 0.5702752010345459, "p99": 0.5922389269256592, "values": [ 0.5977298583984375, 0.04863671112060547 ] }, "throughput": { "unit": "samples/s", "value": 12.376877730469175 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1123.115008, "max_global_vram": 4670.468096, "max_process_vram": 295035.65824, "max_reserved": 2707.423232, "max_allocated": 2497.88416 }, "latency": { "unit": "s", "count": 3, "total": 0.1436058158874512, "mean": 0.04786860529581707, "stdev": 0.00015424690414253482, "p50": 0.047929031372070316, "p90": 0.048001736450195315, "p95": 0.048010824584960934, "p99": 0.04801809509277344, "values": [ 0.047656871795654296, 0.04801991271972656, 0.047929031372070316 ] }, "throughput": { "unit": "samples/s", "value": 125.3431129426347 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1120.210944, "max_global_vram": 4553.142272, "max_process_vram": 295749.853184, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "count": 5, "total": 0.7799421653747558, "mean": 0.15598843307495117, "stdev": 0.21488782485710392, "p50": 0.04831608581542969, "p90": 0.3712529342651367, "p95": 0.47850793685913073, "p99": 0.5643119389343261, "values": [ 0.585762939453125, 0.049487926483154296, 0.04831304931640625, 0.04806216430664063, 0.04831608581542969 ] }, "throughput": { "unit": "samples/s", "value": 64.10731746497564 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1120.210944, "max_global_vram": 4553.142272, "max_process_vram": 295749.853184, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "count": 2, "total": 0.6352508659362792, "mean": 0.3176254329681396, "stdev": 0.26813750648498536, "p50": 0.3176254329681396, "p90": 0.5321354381561278, "p95": 0.5589491888046264, "p99": 0.5804001893234253, "values": [ 0.585762939453125, 0.049487926483154296 ] }, "throughput": { "unit": "samples/s", "value": 12.593449972254685 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1120.210944, "max_global_vram": 4553.142272, "max_process_vram": 295749.853184, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "count": 3, "total": 0.14469129943847658, "mean": 0.04823043314615886, "stdev": 0.0001189904949878732, "p50": 0.04831304931640625, "p90": 0.048315478515625, "p95": 0.04831578216552734, "p99": 0.04831602508544922, "values": [ 0.04831304931640625, 0.04806216430664063, 0.04831608581542969 ] }, "throughput": { "unit": "samples/s", "value": 124.40278074670056 }, "energy": null, "efficiency": null } }
cuda_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1120.210944, "max_global_vram": 4553.142272, "max_process_vram": 295749.853184, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "count": 5, "total": 0.7799421653747558, "mean": 0.15598843307495117, "stdev": 0.21488782485710392, "p50": 0.04831608581542969, "p90": 0.3712529342651367, "p95": 0.47850793685913073, "p99": 0.5643119389343261, "values": [ 0.585762939453125, 0.049487926483154296, 0.04831304931640625, 0.04806216430664063, 0.04831608581542969 ] }, "throughput": { "unit": "samples/s", "value": 64.10731746497564 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1120.210944, "max_global_vram": 4553.142272, "max_process_vram": 295749.853184, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "count": 2, "total": 0.6352508659362792, "mean": 0.3176254329681396, "stdev": 0.26813750648498536, "p50": 0.3176254329681396, "p90": 0.5321354381561278, "p95": 0.5589491888046264, "p99": 0.5804001893234253, "values": [ 0.585762939453125, 0.049487926483154296 ] }, "throughput": { "unit": "samples/s", "value": 12.593449972254685 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1120.210944, "max_global_vram": 4553.142272, "max_process_vram": 295749.853184, "max_reserved": 2707.423232, "max_allocated": 2497.900032 }, "latency": { "unit": "s", "count": 3, "total": 0.14469129943847658, "mean": 0.04823043314615886, "stdev": 0.0001189904949878732, "p50": 0.04831304931640625, "p90": 0.048315478515625, "p95": 0.04831578216552734, "p99": 0.04831602508544922, "values": [ 0.04831304931640625, 0.04806216430664063, 0.04831608581542969 ] }, "throughput": { "unit": "samples/s", "value": 124.40278074670056 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1129.132032, "max_global_vram": 4845.105152, "max_process_vram": 320620.445696, "max_reserved": 2738.880512, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "count": 5, "total": 0.7559626884460449, "mean": 0.15119253768920898, "stdev": 0.210125697510593, "p50": 0.04632918930053711, "p90": 0.36189817962646487, "p95": 0.46666933517456044, "p99": 0.5504862596130371, "values": [ 0.5714404907226562, 0.047584712982177736, 0.04552358627319336, 0.04508470916748047, 0.04632918930053711 ] }, "throughput": { "unit": "samples/s", "value": 66.1408304459839 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1129.132032, "max_global_vram": 4845.105152, "max_process_vram": 320620.445696, "max_reserved": 2738.880512, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "count": 2, "total": 0.6190252037048339, "mean": 0.30951260185241697, "stdev": 0.2619278888702392, "p50": 0.30951260185241697, "p90": 0.5190549129486084, "p95": 0.5452477018356323, "p99": 0.5662019329452515, "values": [ 0.5714404907226562, 0.047584712982177736 ] }, "throughput": { "unit": "samples/s", "value": 12.923544876881284 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1129.132032, "max_global_vram": 4845.105152, "max_process_vram": 320620.445696, "max_reserved": 2738.880512, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "count": 3, "total": 0.13693748474121095, "mean": 0.045645828247070315, "stdev": 0.0005153574976177015, "p50": 0.04552358627319336, "p90": 0.04616806869506836, "p95": 0.046248628997802736, "p99": 0.04631307723999024, "values": [ 0.04552358627319336, 0.04508470916748047, 0.04632918930053711 ] }, "throughput": { "unit": "samples/s", "value": 131.4468425794223 }, "energy": null, "efficiency": null } }
cuda_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1129.132032, "max_global_vram": 4845.105152, "max_process_vram": 320620.445696, "max_reserved": 2738.880512, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "count": 5, "total": 0.7559626884460449, "mean": 0.15119253768920898, "stdev": 0.210125697510593, "p50": 0.04632918930053711, "p90": 0.36189817962646487, "p95": 0.46666933517456044, "p99": 0.5504862596130371, "values": [ 0.5714404907226562, 0.047584712982177736, 0.04552358627319336, 0.04508470916748047, 0.04632918930053711 ] }, "throughput": { "unit": "samples/s", "value": 66.1408304459839 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1129.132032, "max_global_vram": 4845.105152, "max_process_vram": 320620.445696, "max_reserved": 2738.880512, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "count": 2, "total": 0.6190252037048339, "mean": 0.30951260185241697, "stdev": 0.2619278888702392, "p50": 0.30951260185241697, "p90": 0.5190549129486084, "p95": 0.5452477018356323, "p99": 0.5662019329452515, "values": [ 0.5714404907226562, 0.047584712982177736 ] }, "throughput": { "unit": "samples/s", "value": 12.923544876881284 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1129.132032, "max_global_vram": 4845.105152, "max_process_vram": 320620.445696, "max_reserved": 2738.880512, "max_allocated": 2506.73664 }, "latency": { "unit": "s", "count": 3, "total": 0.13693748474121095, "mean": 0.045645828247070315, "stdev": 0.0005153574976177015, "p50": 0.04552358627319336, "p90": 0.04616806869506836, "p95": 0.046248628997802736, "p99": 0.04631307723999024, "values": [ 0.04552358627319336, 0.04508470916748047, 0.04632918930053711 ] }, "throughput": { "unit": "samples/s", "value": 131.4468425794223 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1142.00576, "max_global_vram": 6142.124032, "max_process_vram": 430256.201728, "max_reserved": 3919.577088, "max_allocated": 3698.499072 }, "latency": { "unit": "s", "count": 5, "total": 0.9550322647094727, "mean": 0.19100645294189453, "stdev": 0.2183476907711566, "p50": 0.08156224060058594, "p90": 0.4100326690673829, "p95": 0.5188654251098632, "p99": 0.6059316299438476, "values": [ 0.6276981811523438, 0.08156224060058594, 0.08111984252929688, 0.08111759948730468, 0.08353440093994141 ] }, "throughput": { "unit": "samples/s", "value": 52.35425215210959 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1142.00576, "max_global_vram": 6142.124032, "max_process_vram": 430256.201728, "max_reserved": 3919.577088, "max_allocated": 3698.499072 }, "latency": { "unit": "s", "count": 2, "total": 0.7092604217529297, "mean": 0.35463021087646485, "stdev": 0.2730679702758789, "p50": 0.35463021087646485, "p90": 0.573084587097168, "p95": 0.6003913841247559, "p99": 0.6222368217468262, "values": [ 0.6276981811523438, 0.08156224060058594 ] }, "throughput": { "unit": "samples/s", "value": 11.279354881001373 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1142.00576, "max_global_vram": 6142.124032, "max_process_vram": 430256.201728, "max_reserved": 3919.577088, "max_allocated": 3698.499072 }, "latency": { "unit": "s", "count": 3, "total": 0.24577184295654297, "mean": 0.08192394765218099, "stdev": 0.0011387628087397024, "p50": 0.08111984252929688, "p90": 0.0830514892578125, "p95": 0.08329294509887696, "p99": 0.08348610977172852, "values": [ 0.08111984252929688, 0.08111759948730468, 0.08353440093994141 ] }, "throughput": { "unit": "samples/s", "value": 73.23865819398496 }, "energy": null, "efficiency": null } }
cuda_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.2.2+rocm5.7", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 128, "cpu_ram_mb": 1082015.236096, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.15.0-84-generic-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "Advanced Micro Devices, Inc. [AMD/ATI]" ], "gpu_count": 1, "gpu_vram_mb": 68702699520, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": "347e13ca9f7f904f55669603cfb9f0b6c7e8672c", "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1142.00576, "max_global_vram": 6142.124032, "max_process_vram": 430256.201728, "max_reserved": 3919.577088, "max_allocated": 3698.499072 }, "latency": { "unit": "s", "count": 5, "total": 0.9550322647094727, "mean": 0.19100645294189453, "stdev": 0.2183476907711566, "p50": 0.08156224060058594, "p90": 0.4100326690673829, "p95": 0.5188654251098632, "p99": 0.6059316299438476, "values": [ 0.6276981811523438, 0.08156224060058594, 0.08111984252929688, 0.08111759948730468, 0.08353440093994141 ] }, "throughput": { "unit": "samples/s", "value": 52.35425215210959 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1142.00576, "max_global_vram": 6142.124032, "max_process_vram": 430256.201728, "max_reserved": 3919.577088, "max_allocated": 3698.499072 }, "latency": { "unit": "s", "count": 2, "total": 0.7092604217529297, "mean": 0.35463021087646485, "stdev": 0.2730679702758789, "p50": 0.35463021087646485, "p90": 0.573084587097168, "p95": 0.6003913841247559, "p99": 0.6222368217468262, "values": [ 0.6276981811523438, 0.08156224060058594 ] }, "throughput": { "unit": "samples/s", "value": 11.279354881001373 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1142.00576, "max_global_vram": 6142.124032, "max_process_vram": 430256.201728, "max_reserved": 3919.577088, "max_allocated": 3698.499072 }, "latency": { "unit": "s", "count": 3, "total": 0.24577184295654297, "mean": 0.08192394765218099, "stdev": 0.0011387628087397024, "p50": 0.08111984252929688, "p90": 0.0830514892578125, "p95": 0.08329294509887696, "p99": 0.08348610977172852, "values": [ 0.08111984252929688, 0.08111759948730468, 0.08353440093994141 ] }, "throughput": { "unit": "samples/s", "value": 73.23865819398496 }, "energy": null, "efficiency": null }
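The latency blocks in these rows report aggregates (`total`, `mean`, percentiles) alongside the raw per-step `values`, so the aggregates can be sanity-checked by recomputing them from `values`. A minimal sketch, assuming a record shaped like the `train` latency block of the last row above (the literal below is copied from that row; only the Python standard library is used):

```python
import json
import statistics

# One latency record, copied from the "train" section of the deberta-v3-base row.
record = json.loads("""
{ "unit": "s", "count": 3,
  "total": 0.24577184295654297, "mean": 0.08192394765218099,
  "values": [ 0.08111984252929688, 0.08111759948730468, 0.08353440093994141 ] }
""")

values = record["values"]

# The reported "count", "total", and "mean" should be consistent with the
# raw per-step latencies.
recomputed_total = sum(values)
recomputed_mean = statistics.mean(values)

assert record["count"] == len(values)
assert abs(recomputed_total - record["total"]) < 1e-12
assert abs(recomputed_mean - record["mean"]) < 1e-12

print(f"mean latency: {recomputed_mean:.6f} {record['unit']} over {len(values)} steps")
```

The same check applies to the `overall` and `warmup` blocks, whose `values` arrays are the 5-step and 2-step subsets of the same run.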