Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
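The root cause is visible in the rows below: every backend config stores "model_kwargs": {} (and other empty dicts such as "processor_kwargs" and "torch_compile_config"), which Arrow infers as a struct type with no child fields, and Parquet cannot encode an empty struct. A minimal sketch of the failure and of the workaround the error message suggests, assuming a recent pyarrow (the "_dummy" field name and output paths are illustrative):

import pyarrow as pa
import pyarrow.parquet as pq

# "model_kwargs": {} in every row is inferred as a struct with no child fields.
empty = pa.array([{}, {}], type=pa.struct([]))
try:
    pq.write_table(pa.table({"model_kwargs": empty}), "model_kwargs.parquet")
except pa.lib.ArrowNotImplementedError as err:
    print(err)  # Cannot write struct type 'model_kwargs' with no child field ...

# Workaround from the error message: give the struct a dummy child field
# (here an all-null string field named "_dummy"), then the write succeeds.
dummy_type = pa.struct([pa.field("_dummy", pa.string())])
dummy = pa.array([{"_dummy": None}, {"_dummy": None}], type=dummy_type)
pq.write_table(pa.table({"model_kwargs": dummy}), "model_kwargs_fixed.parquet")

Replacing the empty dicts with a struct that has at least one (possibly all-null) child field, or serializing them to JSON strings before export, should allow the Parquet conversion and the viewer to succeed.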


Column        Type
config        dict
report        dict
name          string
backend       dict
scenario      dict
launcher      dict
environment   dict
print_report  bool
log_report    bool
overall       dict
warmup        dict
train         dict
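Each populated row stores the optimum-benchmark config and its report as nested JSON, so the measurements can be inspected directly even while the viewer is unavailable. A minimal sketch, assuming one report has been saved locally as benchmark_report.json (the file name is illustrative, not the repository's actual layout):

import json

# Hypothetical local copy of one "report" cell from the preview rows below.
with open("benchmark_report.json") as f:
    report = json.load(f)

# Each section ("overall", "warmup", "train") carries memory, latency,
# throughput, energy and efficiency measurements.
for section in ("overall", "warmup", "train"):
    latency = report[section]["latency"]
    throughput = report[section]["throughput"]
    print(
        f"{section}: mean latency {latency['mean']:.4f} {latency['unit']}, "
        f"throughput {throughput['value']:.2f} {throughput['unit']}"
    )

For the fill-mask row below, for example, this would print an overall mean latency of about 0.59 s and a throughput of about 16.9 samples/s.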
{ "name": "cpu_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2513.027072, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6867478300000016, 0.5613925220000056, 0.5822482469999954, 0.5643380349999916, 0.5637200859999894 ], "count": 5, "total": 2.9584467199999835, "mean": 0.5916893439999967, "p50": 0.5643380349999916, "p90": 0.6449479967999991, "p95": 0.6658479134000004, "p99": 0.6825678466800014, "stdev": 0.04811137286247493, "stdev_": 8.131187987471208 }, "throughput": { "unit": "samples/s", "value": 16.900760680253278 }, "energy": { "unit": "kWh", "cpu": 0.0001224525745722218, "ram": 0.000005118303604750447, "gpu": 0, "total": 0.00012757087817697225 }, "efficiency": { "unit": "samples/kWh", "value": 78387.79620320193 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2513.027072, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6867478300000016, 0.5613925220000056 ], "count": 2, "total": 1.2481403520000072, "mean": 0.6240701760000036, "p50": 0.6240701760000036, "p90": 0.674212299200002, "p95": 0.6804800646000018, "p99": 0.6854942769200016, "stdev": 0.06267765399999803, "stdev_": 10.04336634090293 }, "throughput": { "unit": "samples/s", "value": 6.40953558402377 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2513.027072, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5822482469999954, 0.5643380349999916, 0.5637200859999894 ], "count": 3, "total": 1.7103063679999764, "mean": 0.5701021226666588, "p50": 0.5643380349999916, "p90": 0.5786662045999946, "p95": 0.580457225799995, "p99": 0.5818900427599953, "stdev": 0.008592311194019356, "stdev_": 1.5071529910867072 }, "throughput": { "unit": "samples/s", "value": 10.52443020547781 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2513.027072, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6867478300000016, 0.5613925220000056, 0.5822482469999954, 0.5643380349999916, 0.5637200859999894 ], "count": 5, "total": 2.9584467199999835, "mean": 0.5916893439999967, "p50": 0.5643380349999916, "p90": 0.6449479967999991, "p95": 0.6658479134000004, "p99": 0.6825678466800014, "stdev": 0.04811137286247493, "stdev_": 8.131187987471208 }, "throughput": { "unit": "samples/s", "value": 16.900760680253278 }, "energy": { "unit": "kWh", "cpu": 0.0001224525745722218, "ram": 0.000005118303604750447, "gpu": 0, "total": 0.00012757087817697225 }, "efficiency": { "unit": "samples/kWh", "value": 78387.79620320193 } }
{ "memory": { "unit": "MB", "max_ram": 2513.027072, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6867478300000016, 0.5613925220000056 ], "count": 2, "total": 1.2481403520000072, "mean": 0.6240701760000036, "p50": 0.6240701760000036, "p90": 0.674212299200002, "p95": 0.6804800646000018, "p99": 0.6854942769200016, "stdev": 0.06267765399999803, "stdev_": 10.04336634090293 }, "throughput": { "unit": "samples/s", "value": 6.40953558402377 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2513.027072, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5822482469999954, 0.5643380349999916, 0.5637200859999894 ], "count": 3, "total": 1.7103063679999764, "mean": 0.5701021226666588, "p50": 0.5643380349999916, "p90": 0.5786662045999946, "p95": 0.580457225799995, "p99": 0.5818900427599953, "stdev": 0.008592311194019356, "stdev_": 1.5071529910867072 }, "throughput": { "unit": "samples/s", "value": 10.52443020547781 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "model": "google-bert/bert-base-uncased", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 2.738775005999969, "mean": 0.5477550011999938, "stdev": 0.03693447784258994, "p50": 0.5307143729999666, "p90": 0.5856752317999963, "p95": 0.6036043034000045, "p99": 0.6179475606800111, "values": [ 0.6215333750000127, 0.5307143729999666, 0.5270229210000252, 0.5276163199999928, 0.5318880169999716 ] }, "throughput": { "unit": "samples/s", "value": 18.25633719106628 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.1522477479999793, "mean": 0.5761238739999897, "stdev": 0.045409501000023056, "p50": 0.5761238739999897, "p90": 0.612451474800008, "p95": 0.6169924249000104, "p99": 0.6206251849800123, "values": [ 0.6215333750000127, 0.5307143729999666 ] }, "throughput": { "unit": "samples/s", "value": 6.942951300088064 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.5865272579999896, "mean": 0.5288424193333299, "stdev": 0.0021671455040491463, "p50": 0.5276163199999928, "p90": 0.5310336775999758, "p95": 0.5314608472999737, "p99": 0.531802583059972, "values": [ 0.5270229210000252, 0.5276163199999928, 0.5318880169999716 ] }, "throughput": { "unit": "samples/s", "value": 11.345534663357242 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2576.867328, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6791743779999706, 1.545716002000006, 1.536947755999961, 1.5470005720000017, 1.5499187930000176 ], "count": 5, "total": 7.858757500999957, "mean": 1.5717515001999913, "p50": 1.5470005720000017, "p90": 1.6274721439999893, "p95": 1.65332326099998, "p99": 1.6740041545999724, "stdev": 0.05388524473755815, "stdev_": 3.4283565010564163 }, "throughput": { "unit": "samples/s", "value": 6.362328904236827 }, "energy": { "unit": "kWh", "cpu": 0.0003172597059333331, "ram": 0.000013261893523525201, "gpu": 0, "total": 0.0003305215994568583 }, "efficiency": { "unit": "samples/kWh", "value": 30255.208786454095 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2576.867328, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6791743779999706, 1.545716002000006 ], "count": 2, "total": 3.2248903799999766, "mean": 1.6124451899999883, "p50": 1.6124451899999883, "p90": 1.6658285403999742, "p95": 1.6725014591999723, "p99": 1.677839794239971, "stdev": 0.06672918799998229, "stdev_": 4.138384883642635 }, "throughput": { "unit": "samples/s", "value": 2.480704475914638 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2576.867328, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.536947755999961, 1.5470005720000017, 1.5499187930000176 ], "count": 3, "total": 4.63386712099998, "mean": 1.54462237366666, "p50": 1.5470005720000017, "p90": 1.5493351488000144, "p95": 1.5496269709000159, "p99": 1.5498604285800173, "stdev": 0.005556007001359964, "stdev_": 0.35970002092945136 }, "throughput": { "unit": "samples/s", "value": 3.8844445751210133 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2576.867328, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6791743779999706, 1.545716002000006, 1.536947755999961, 1.5470005720000017, 1.5499187930000176 ], "count": 5, "total": 7.858757500999957, "mean": 1.5717515001999913, "p50": 1.5470005720000017, "p90": 1.6274721439999893, "p95": 1.65332326099998, "p99": 1.6740041545999724, "stdev": 0.05388524473755815, "stdev_": 3.4283565010564163 }, "throughput": { "unit": "samples/s", "value": 6.362328904236827 }, "energy": { "unit": "kWh", "cpu": 0.0003172597059333331, "ram": 0.000013261893523525201, "gpu": 0, "total": 0.0003305215994568583 }, "efficiency": { "unit": "samples/kWh", "value": 30255.208786454095 } }
{ "memory": { "unit": "MB", "max_ram": 2576.867328, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6791743779999706, 1.545716002000006 ], "count": 2, "total": 3.2248903799999766, "mean": 1.6124451899999883, "p50": 1.6124451899999883, "p90": 1.6658285403999742, "p95": 1.6725014591999723, "p99": 1.677839794239971, "stdev": 0.06672918799998229, "stdev_": 4.138384883642635 }, "throughput": { "unit": "samples/s", "value": 2.480704475914638 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2576.867328, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.536947755999961, 1.5470005720000017, 1.5499187930000176 ], "count": 3, "total": 4.63386712099998, "mean": 1.54462237366666, "p50": 1.5470005720000017, "p90": 1.5493351488000144, "p95": 1.5496269709000159, "p99": 1.5498604285800173, "stdev": 0.005556007001359964, "stdev_": 0.35970002092945136 }, "throughput": { "unit": "samples/s", "value": 3.8844445751210133 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "model": "google/vit-base-patch16-224", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 7.2970974209999895, "mean": 1.459419484199998, "stdev": 0.05210139006345095, "p50": 1.4401334369999859, "p90": 1.521250764199999, "p95": 1.5379663595999886, "p99": 1.5513388359199802, "values": [ 1.5546819549999782, 1.4241662819999874, 1.4401334369999859, 1.4711039780000306, 1.4070117690000075 ] }, "throughput": { "unit": "samples/s", "value": 6.852039532336137 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 2.9788482369999656, "mean": 1.4894241184999828, "stdev": 0.06525783649999539, "p50": 1.4894241184999828, "p90": 1.541630387699979, "p95": 1.5481561713499787, "p99": 1.5533767982699782, "values": [ 1.5546819549999782, 1.4241662819999874 ] }, "throughput": { "unit": "samples/s", "value": 2.6856017371522416 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 4.318249184000024, "mean": 1.4394163946666747, "stdev": 0.02617044676610726, "p50": 1.4401334369999859, "p90": 1.4649098698000216, "p95": 1.4680069239000262, "p99": 1.4704845671800297, "values": [ 1.4401334369999859, 1.4711039780000306, 1.4070117690000075 ] }, "throughput": { "unit": "samples/s", "value": 4.1683560241714614 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2909.523968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8377412009999716, 0.7360272379999628, 0.7313546490000249, 0.7299971639999967, 0.7409206730000051 ], "count": 5, "total": 3.776040924999961, "mean": 0.7552081849999922, "p50": 0.7360272379999628, "p90": 0.799012989799985, "p95": 0.8183770953999783, "p99": 0.833868379879973, "stdev": 0.04144472756958241, "stdev_": 5.48785465951787 }, "throughput": { "unit": "samples/s", "value": 13.241381911135012 }, "energy": { "unit": "kWh", "cpu": 0.00015432075182222036, "ram": 0.000006450520059759803, "gpu": 0, "total": 0.00016077127188198016 }, "efficiency": { "unit": "samples/kWh", "value": 62200.1672496617 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2909.523968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8377412009999716, 0.7360272379999628 ], "count": 2, "total": 1.5737684389999345, "mean": 0.7868842194999672, "p50": 0.7868842194999672, "p90": 0.8275698046999708, "p95": 0.8326555028499711, "p99": 0.8367240613699716, "stdev": 0.050856981500004395, "stdev_": 6.463083162643918 }, "throughput": { "unit": "samples/s", "value": 5.083339963968062 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2909.523968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7313546490000249, 0.7299971639999967, 0.7409206730000051 ], "count": 3, "total": 2.2022724860000267, "mean": 0.7340908286666755, "p50": 0.7313546490000249, "p90": 0.7390074682000091, "p95": 0.739964070600007, "p99": 0.7407293525200055, "stdev": 0.004861122750590336, "stdev_": 0.6621963605538517 }, "throughput": { "unit": "samples/s", "value": 8.173375508447315 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2909.523968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8377412009999716, 0.7360272379999628, 0.7313546490000249, 0.7299971639999967, 0.7409206730000051 ], "count": 5, "total": 3.776040924999961, "mean": 0.7552081849999922, "p50": 0.7360272379999628, "p90": 0.799012989799985, "p95": 0.8183770953999783, "p99": 0.833868379879973, "stdev": 0.04144472756958241, "stdev_": 5.48785465951787 }, "throughput": { "unit": "samples/s", "value": 13.241381911135012 }, "energy": { "unit": "kWh", "cpu": 0.00015432075182222036, "ram": 0.000006450520059759803, "gpu": 0, "total": 0.00016077127188198016 }, "efficiency": { "unit": "samples/kWh", "value": 62200.1672496617 } }
{ "memory": { "unit": "MB", "max_ram": 2909.523968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8377412009999716, 0.7360272379999628 ], "count": 2, "total": 1.5737684389999345, "mean": 0.7868842194999672, "p50": 0.7868842194999672, "p90": 0.8275698046999708, "p95": 0.8326555028499711, "p99": 0.8367240613699716, "stdev": 0.050856981500004395, "stdev_": 6.463083162643918 }, "throughput": { "unit": "samples/s", "value": 5.083339963968062 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2909.523968, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7313546490000249, 0.7299971639999967, 0.7409206730000051 ], "count": 3, "total": 2.2022724860000267, "mean": 0.7340908286666755, "p50": 0.7313546490000249, "p90": 0.7390074682000091, "p95": 0.739964070600007, "p99": 0.7407293525200055, "stdev": 0.004861122750590336, "stdev_": 0.6621963605538517 }, "throughput": { "unit": "samples/s", "value": 8.173375508447315 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 3.581090587999995, "mean": 0.716218117599999, "stdev": 0.043372798377969854, "p50": 0.697155070000008, "p90": 0.7645524997999928, "p95": 0.7826806403999967, "p99": 0.7971831528799999, "values": [ 0.8008087810000006, 0.710168077999981, 0.6928677590000234, 0.697155070000008, 0.680090899999982 ] }, "throughput": { "unit": "samples/s", "value": 13.962227084549829 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.5109768589999817, "mean": 0.7554884294999908, "stdev": 0.04532035150000979, "p50": 0.7554884294999908, "p90": 0.7917447106999986, "p95": 0.7962767458499996, "p99": 0.7999023739700004, "values": [ 0.8008087810000006, 0.710168077999981 ] }, "throughput": { "unit": "samples/s", "value": 5.294588035778844 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 2.0701137290000133, "mean": 0.6900379096666711, "stdev": 0.007248103654729414, "p50": 0.6928677590000234, "p90": 0.696297607800011, "p95": 0.6967263389000096, "p99": 0.6970693237800083, "values": [ 0.6928677590000234, 0.697155070000008, 0.680090899999982 ] }, "throughput": { "unit": "samples/s", "value": 8.695174447587021 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2878.480384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7486150109999699, 0.5950577870000302, 0.5782145300000252, 0.5858687499999746, 0.5849154519999615 ], "count": 5, "total": 3.0926715299999614, "mean": 0.6185343059999923, "p50": 0.5858687499999746, "p90": 0.687192121399994, "p95": 0.7179035661999819, "p99": 0.7424727220399723, "stdev": 0.06526114768742573, "stdev_": 10.550934209205616 }, "throughput": { "unit": "samples/s", "value": 16.16725200687595 }, "energy": { "unit": "kWh", "cpu": 0.00012771181593333332, "ram": 0.000005338214675560752, "gpu": 0, "total": 0.00013305003060889406 }, "efficiency": { "unit": "samples/kWh", "value": 75159.69710217808 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2878.480384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7486150109999699, 0.5950577870000302 ], "count": 2, "total": 1.343672798, "mean": 0.671836399, "p50": 0.671836399, "p90": 0.7332592885999759, "p95": 0.7409371497999728, "p99": 0.7470794387599704, "stdev": 0.07677861199996983, "stdev_": 11.428170922899017 }, "throughput": { "unit": "samples/s", "value": 5.953830435436113 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2878.480384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5782145300000252, 0.5858687499999746, 0.5849154519999615 ], "count": 3, "total": 1.7489987319999614, "mean": 0.5829995773333204, "p50": 0.5849154519999615, "p90": 0.585678090399972, "p95": 0.5857734201999734, "p99": 0.5858496840399744, "stdev": 0.0034058481817421256, "stdev_": 0.5841939366955815 }, "throughput": { "unit": "samples/s", "value": 10.291602658520624 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2878.480384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7486150109999699, 0.5950577870000302, 0.5782145300000252, 0.5858687499999746, 0.5849154519999615 ], "count": 5, "total": 3.0926715299999614, "mean": 0.6185343059999923, "p50": 0.5858687499999746, "p90": 0.687192121399994, "p95": 0.7179035661999819, "p99": 0.7424727220399723, "stdev": 0.06526114768742573, "stdev_": 10.550934209205616 }, "throughput": { "unit": "samples/s", "value": 16.16725200687595 }, "energy": { "unit": "kWh", "cpu": 0.00012771181593333332, "ram": 0.000005338214675560752, "gpu": 0, "total": 0.00013305003060889406 }, "efficiency": { "unit": "samples/kWh", "value": 75159.69710217808 } }
{ "memory": { "unit": "MB", "max_ram": 2878.480384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7486150109999699, 0.5950577870000302 ], "count": 2, "total": 1.343672798, "mean": 0.671836399, "p50": 0.671836399, "p90": 0.7332592885999759, "p95": 0.7409371497999728, "p99": 0.7470794387599704, "stdev": 0.07677861199996983, "stdev_": 11.428170922899017 }, "throughput": { "unit": "samples/s", "value": 5.953830435436113 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2878.480384, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5782145300000252, 0.5858687499999746, 0.5849154519999615 ], "count": 3, "total": 1.7489987319999614, "mean": 0.5829995773333204, "p50": 0.5849154519999615, "p90": 0.585678090399972, "p95": 0.5857734201999734, "p99": 0.5858496840399744, "stdev": 0.0034058481817421256, "stdev_": 0.5841939366955815 }, "throughput": { "unit": "samples/s", "value": 10.291602658520624 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 2.882509665999976, "mean": 0.5765019331999952, "stdev": 0.04978939696949581, "p50": 0.5569985249999831, "p90": 0.6300386333999881, "p95": 0.6525100941999881, "p99": 0.670487262839988, "values": [ 0.674981554999988, 0.5569985249999831, 0.5424031590000027, 0.5626242509999884, 0.5455021760000136 ] }, "throughput": { "unit": "samples/s", "value": 17.34599560576128 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.2319800799999712, "mean": 0.6159900399999856, "stdev": 0.05899151500000244, "p50": 0.6159900399999856, "p90": 0.6631832519999875, "p95": 0.6690824034999878, "p99": 0.673801724699988, "values": [ 0.674981554999988, 0.5569985249999831 ] }, "throughput": { "unit": "samples/s", "value": 6.493611487614465 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.6505295860000047, "mean": 0.5501765286666682, "stdev": 0.00889233078021606, "p50": 0.5455021760000136, "p90": 0.5591998359999935, "p95": 0.5609120434999909, "p99": 0.5622818094999888, "values": [ 0.5424031590000027, 0.5626242509999884, 0.5455021760000136 ] }, "throughput": { "unit": "samples/s", "value": 10.905590637500968 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2830.88896, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8607311080000102, 0.6582591099999888, 0.6595438680000143, 0.6438743109999905, 0.6589589189999856 ], "count": 5, "total": 3.4813673159999894, "mean": 0.6962734631999978, "p50": 0.6589589189999856, "p90": 0.7802562120000118, "p95": 0.820493660000011, "p99": 0.8526836184000104, "stdev": 0.08243605704553186, "stdev_": 11.839609205650985 }, "throughput": { "unit": "samples/s", "value": 14.362173095095535 }, "energy": { "unit": "kWh", "cpu": 0.00014304225701111133, "ram": 0.000005979151187898341, "gpu": 0, "total": 0.00014902140819900966 }, "efficiency": { "unit": "samples/kWh", "value": 67104.4524464939 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2830.88896, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8607311080000102, 0.6582591099999888 ], "count": 2, "total": 1.518990217999999, "mean": 0.7594951089999995, "p50": 0.7594951089999995, "p90": 0.8404839082000081, "p95": 0.8506075081000091, "p99": 0.85870638802001, "stdev": 0.10123599900001068, "stdev_": 13.329381295595773 }, "throughput": { "unit": "samples/s", "value": 5.266656694164442 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2830.88896, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6595438680000143, 0.6438743109999905, 0.6589589189999856 ], "count": 3, "total": 1.9623770979999904, "mean": 0.6541256993333301, "p50": 0.6589589189999856, "p90": 0.6594268782000086, "p95": 0.6594853731000114, "p99": 0.6595321690200138, "stdev": 0.007252758712097429, "stdev_": 1.1087714057847406 }, "throughput": { "unit": "samples/s", "value": 9.172548955216195 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2830.88896, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8607311080000102, 0.6582591099999888, 0.6595438680000143, 0.6438743109999905, 0.6589589189999856 ], "count": 5, "total": 3.4813673159999894, "mean": 0.6962734631999978, "p50": 0.6589589189999856, "p90": 0.7802562120000118, "p95": 0.820493660000011, "p99": 0.8526836184000104, "stdev": 0.08243605704553186, "stdev_": 11.839609205650985 }, "throughput": { "unit": "samples/s", "value": 14.362173095095535 }, "energy": { "unit": "kWh", "cpu": 0.00014304225701111133, "ram": 0.000005979151187898341, "gpu": 0, "total": 0.00014902140819900966 }, "efficiency": { "unit": "samples/kWh", "value": 67104.4524464939 } }
{ "memory": { "unit": "MB", "max_ram": 2830.88896, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8607311080000102, 0.6582591099999888 ], "count": 2, "total": 1.518990217999999, "mean": 0.7594951089999995, "p50": 0.7594951089999995, "p90": 0.8404839082000081, "p95": 0.8506075081000091, "p99": 0.85870638802001, "stdev": 0.10123599900001068, "stdev_": 13.329381295595773 }, "throughput": { "unit": "samples/s", "value": 5.266656694164442 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2830.88896, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6595438680000143, 0.6438743109999905, 0.6589589189999856 ], "count": 3, "total": 1.9623770979999904, "mean": 0.6541256993333301, "p50": 0.6589589189999856, "p90": 0.6594268782000086, "p95": 0.6594853731000114, "p99": 0.6595321690200138, "stdev": 0.007252758712097429, "stdev_": 1.1087714057847406 }, "throughput": { "unit": "samples/s", "value": 9.172548955216195 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 3.1791685380000274, "mean": 0.6358337076000055, "stdev": 0.07846941233493662, "p50": 0.596941285000014, "p90": 0.7161873142000047, "p95": 0.7544328206000045, "p99": 0.7850292257200044, "values": [ 0.7926783270000044, 0.6014507950000052, 0.596941285000014, 0.593667699000008, 0.5944304319999958 ] }, "throughput": { "unit": "samples/s", "value": 15.727382616668173 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.3941291220000096, "mean": 0.6970645610000048, "stdev": 0.0956137659999996, "p50": 0.6970645610000048, "p90": 0.7735555738000045, "p95": 0.7831169504000044, "p99": 0.7907660516800044, "values": [ 0.7926783270000044, 0.6014507950000052 ] }, "throughput": { "unit": "samples/s", "value": 5.738349392288174 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.7850394160000178, "mean": 0.5950131386666726, "stdev": 0.0013985114990352936, "p50": 0.5944304319999958, "p90": 0.5964391144000103, "p95": 0.5966901997000121, "p99": 0.5968910679400136, "values": [ 0.596941285000014, 0.593667699000008, 0.5944304319999958 ] }, "throughput": { "unit": "samples/s", "value": 10.083810944822195 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 4330.35264, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3504065380000156, 1.1831454829999757, 1.158507051000015, 1.1611668830000212, 1.182815015000017 ], "count": 5, "total": 6.036040970000045, "mean": 1.207208194000009, "p50": 1.182815015000017, "p90": 1.2835021159999997, "p95": 1.3169543270000077, "p99": 1.343716095800014, "stdev": 0.07234833877131454, "stdev_": 5.9930291337397925 }, "throughput": { "unit": "samples/s", "value": 8.28357531840935 }, "energy": { "unit": "kWh", "cpu": 0.00024321854826111117, "ram": 0.000010166816000650465, "gpu": 0, "total": 0.0002533853642617616 }, "efficiency": { "unit": "samples/kWh", "value": 39465.5785630516 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 4330.35264, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3504065380000156, 1.1831454829999757 ], "count": 2, "total": 2.5335520209999913, "mean": 1.2667760104999957, "p50": 1.2667760104999957, "p90": 1.3336804325000116, "p95": 1.3420434852500136, "p99": 1.3487339274500152, "stdev": 0.08363052750001998, "stdev_": 6.601840168019211 }, "throughput": { "unit": "samples/s", "value": 3.1576221580176616 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 4330.35264, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.158507051000015, 1.1611668830000212, 1.182815015000017 ], "count": 3, "total": 3.5024889490000533, "mean": 1.1674963163333512, "p50": 1.1611668830000212, "p90": 1.1784853886000177, "p95": 1.1806502018000173, "p99": 1.182382052360017, "stdev": 0.010886247385184279, "stdev_": 0.9324438315466141 }, "throughput": { "unit": "samples/s", "value": 5.139202510585773 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.5.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.342208, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1025-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "78351930eda4599a64ff2da35e08ab39722c146a", "transformers_version": "4.46.3", "transformers_commit": null, "accelerate_version": "1.1.1", "accelerate_commit": null, "diffusers_version": "0.31.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.11", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 4330.35264, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3504065380000156, 1.1831454829999757, 1.158507051000015, 1.1611668830000212, 1.182815015000017 ], "count": 5, "total": 6.036040970000045, "mean": 1.207208194000009, "p50": 1.182815015000017, "p90": 1.2835021159999997, "p95": 1.3169543270000077, "p99": 1.343716095800014, "stdev": 0.07234833877131454, "stdev_": 5.9930291337397925 }, "throughput": { "unit": "samples/s", "value": 8.28357531840935 }, "energy": { "unit": "kWh", "cpu": 0.00024321854826111117, "ram": 0.000010166816000650465, "gpu": 0, "total": 0.0002533853642617616 }, "efficiency": { "unit": "samples/kWh", "value": 39465.5785630516 } }
{ "memory": { "unit": "MB", "max_ram": 4330.35264, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3504065380000156, 1.1831454829999757 ], "count": 2, "total": 2.5335520209999913, "mean": 1.2667760104999957, "p50": 1.2667760104999957, "p90": 1.3336804325000116, "p95": 1.3420434852500136, "p99": 1.3487339274500152, "stdev": 0.08363052750001998, "stdev_": 6.601840168019211 }, "throughput": { "unit": "samples/s", "value": 3.1576221580176616 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 4330.35264, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.158507051000015, 1.1611668830000212, 1.182815015000017 ], "count": 3, "total": 3.5024889490000533, "mean": 1.1674963163333512, "p50": 1.1611668830000212, "p90": 1.1784853886000177, "p95": 1.1806502018000173, "p99": 1.182382052360017, "stdev": 0.010886247385184279, "stdev_": 0.9324438315466141 }, "throughput": { "unit": "samples/s", "value": 5.139202510585773 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "model": "microsoft/deberta-v3-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 5.583659903000068, "mean": 1.1167319806000138, "stdev": 0.07853258776030796, "p50": 1.0731834320000075, "p90": 1.2025418994000006, "p95": 1.2374836681999908, "p99": 1.265437083239983, "values": [ 1.2724254369999812, 1.0731834320000075, 1.0977165930000297, 1.0711078369999996, 1.0692266040000504 ] }, "throughput": { "unit": "samples/s", "value": 8.954700119385008 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 2.3456088689999888, "mean": 1.1728044344999944, "stdev": 0.09962100249998684, "p50": 1.1728044344999944, "p90": 1.252501236499984, "p95": 1.2624633367499825, "p99": 1.2704330169499816, "values": [ 1.2724254369999812, 1.0731834320000075 ] }, "throughput": { "unit": "samples/s", "value": 3.4106283045436583 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 3.2380510340000797, "mean": 1.0793503446666932, "stdev": 0.01300958794585394, "p50": 1.0711078369999996, "p90": 1.0923948418000236, "p95": 1.0950557174000266, "p99": 1.097184417880029, "values": [ 1.0977165930000297, 1.0711078369999996, 1.0692266040000504 ] }, "throughput": { "unit": "samples/s", "value": 5.558899415419021 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null