Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    DatasetGenerationError
Message:      An error occurred while generating the dataset
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1016, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1016, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 1869, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1324, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 938, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Columns and types:
config: dict
report: dict
name: string
backend: dict
scenario: dict
launcher: dict
environment: dict
overall: dict
warmup: dict
train: dict
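Before re-uploading, rows like the ones below can be made Parquet-friendly by replacing always-empty dicts (which would otherwise be inferred as childless structs) with None. A minimal stdlib-only sketch; the helper name and the sample row are illustrative:

```python
from typing import Any

def flatten_empty_dicts(value: Any) -> Any:
    """Recursively replace empty dicts, which Arrow would infer as
    empty struct types, with None so Parquet conversion succeeds."""
    if isinstance(value, dict):
        if not value:
            return None
        return {k: flatten_empty_dicts(v) for k, v in value.items()}
    if isinstance(value, list):
        return [flatten_empty_dicts(v) for v in value]
    return value

# Illustrative fragment of a row like those below.
row = {"backend": {"model_kwargs": {}, "hub_kwargs": {}, "seed": 42}}
print(flatten_empty_dicts(row))
# {'backend': {'model_kwargs': None, 'hub_kwargs': None, 'seed': 42}}
```

An alternative with the same effect is serializing free-form kwargs dicts to JSON strings, which keeps the (possibly non-empty) contents queryable as text.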
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1091.817472, "max_global_vram": 3072.851968, "max_process_vram": 0, "max_reserved": 2426.404864, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 5, "total": 0.9614978790283203, "mean": 0.19229957580566406, "stdev": 0.24877770459224804, "p50": 0.06784095764160156, "p90": 0.4417623626708985, "p95": 0.5658068405151366, "p99": 0.6650424227905273, "values": [ 0.689851318359375, 0.0696289291381836, 0.06744268798828125, 0.06784095764160156, 0.06673398590087891 ] }, "throughput": { "unit": "samples/s", "value": 52.00219479478153 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1091.817472, "max_global_vram": 3072.851968, "max_process_vram": 0, "max_reserved": 2426.404864, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 2, "total": 0.7594802474975586, "mean": 0.3797401237487793, "stdev": 0.3101111946105957, "p50": 0.3797401237487793, "p90": 0.6278290794372559, "p95": 0.6588401988983154, "p99": 0.683649094467163, "values": [ 0.689851318359375, 0.0696289291381836 ] }, "throughput": { "unit": "samples/s", "value": 10.533519504107598 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1091.817472, "max_global_vram": 3072.851968, "max_process_vram": 0, "max_reserved": 2426.404864, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 3, "total": 0.20201763153076174, "mean": 0.06733921051025392, "stdev": 0.00045780439784825653, "p50": 0.06744268798828125, "p90": 0.0677613037109375, "p95": 0.06780113067626953, "p99": 0.06783299224853515, "values": [ 0.06744268798828125, 0.06784095764160156, 0.06673398590087891 ] }, "throughput": { "unit": "samples/s", "value": 89.10113371594049 }, "energy": null, "efficiency": null } }
cuda_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1091.817472, "max_global_vram": 3072.851968, "max_process_vram": 0, "max_reserved": 2426.404864, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 5, "total": 0.9614978790283203, "mean": 0.19229957580566406, "stdev": 0.24877770459224804, "p50": 0.06784095764160156, "p90": 0.4417623626708985, "p95": 0.5658068405151366, "p99": 0.6650424227905273, "values": [ 0.689851318359375, 0.0696289291381836, 0.06744268798828125, 0.06784095764160156, 0.06673398590087891 ] }, "throughput": { "unit": "samples/s", "value": 52.00219479478153 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1091.817472, "max_global_vram": 3072.851968, "max_process_vram": 0, "max_reserved": 2426.404864, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 2, "total": 0.7594802474975586, "mean": 0.3797401237487793, "stdev": 0.3101111946105957, "p50": 0.3797401237487793, "p90": 0.6278290794372559, "p95": 0.6588401988983154, "p99": 0.683649094467163, "values": [ 0.689851318359375, 0.0696289291381836 ] }, "throughput": { "unit": "samples/s", "value": 10.533519504107598 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1091.817472, "max_global_vram": 3072.851968, "max_process_vram": 0, "max_reserved": 2426.404864, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 3, "total": 0.20201763153076174, "mean": 0.06733921051025392, "stdev": 0.00045780439784825653, "p50": 0.06744268798828125, "p90": 0.0677613037109375, "p95": 0.06780113067626953, "p99": 0.06783299224853515, "values": [ 0.06744268798828125, 0.06784095764160156, 0.06673398590087891 ] }, "throughput": { "unit": "samples/s", "value": 89.10113371594049 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "model": "google-bert/bert-base-uncased", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1063.8336, "max_global_vram": 3169.32096, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 5, "total": 0.7448790740966797, "mean": 0.14897581481933594, "stdev": 0.2054173633207176, "p50": 0.04632883071899414, "p90": 0.35471870422363283, "p95": 0.4572641067504882, "p99": 0.5393004287719726, "values": [ 0.5598095092773437, 0.04708249664306641, 0.04632883071899414, 0.04576665496826172, 0.04589158248901367 ] }, "throughput": { "unit": "samples/s", "value": 67.1249894630687 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1063.8336, "max_global_vram": 3169.32096, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 2, "total": 0.6068920059204101, "mean": 0.3034460029602051, "stdev": 0.25636350631713867, "p50": 0.3034460029602051, "p90": 0.508536808013916, "p95": 0.5341731586456299, "p99": 0.554682239151001, "values": [ 0.5598095092773437, 0.04708249664306641 ] }, "throughput": { "unit": "samples/s", "value": 13.181916917602548 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1063.8336, "max_global_vram": 3169.32096, "max_process_vram": 0, "max_reserved": 2520.776704, "max_allocated": 2211.86048 }, "latency": { "unit": "s", "count": 3, "total": 0.13798706817626955, "mean": 0.045995689392089846, "stdev": 0.00024102431292157434, "p50": 0.04589158248901367, "p90": 0.04624138107299805, "p95": 0.0462851058959961, "p99": 0.04632008575439454, "values": [ 0.04632883071899414, 0.04576665496826172, 0.04589158248901367 ] }, "throughput": { "unit": "samples/s", "value": 130.4470066499722 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1461.39136, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "count": 5, "total": 0.564491268157959, "mean": 0.11289825363159181, "stdev": 0.11227861232262278, "p50": 0.056784896850585936, "p90": 0.22532137298583987, "p95": 0.28138824081420893, "p99": 0.3262417350769043, "values": [ 0.3374551086425781, 0.05712076950073242, 0.056586238861083986, 0.05654425430297851, 0.056784896850585936 ] }, "throughput": { "unit": "samples/s", "value": 88.57532936365763 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1461.39136, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "count": 2, "total": 0.3945758781433105, "mean": 0.19728793907165526, "stdev": 0.14016716957092284, "p50": 0.19728793907165526, "p90": 0.30942167472839355, "p95": 0.3234383916854858, "p99": 0.33465176525115964, "values": [ 0.3374551086425781, 0.05712076950073242 ] }, "throughput": { "unit": "samples/s", "value": 20.27493428550234 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1461.39136, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "count": 3, "total": 0.16991539001464845, "mean": 0.05663846333821615, "stdev": 0.00010495318301840867, "p50": 0.056586238861083986, "p90": 0.056745165252685546, "p95": 0.05676503105163574, "p99": 0.0567809236907959, "values": [ 0.056586238861083986, 0.05654425430297851, 0.056784896850585936 ] }, "throughput": { "unit": "samples/s", "value": 105.93507744323934 }, "energy": null, "efficiency": null } }
cuda_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1461.39136, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "count": 5, "total": 0.564491268157959, "mean": 0.11289825363159181, "stdev": 0.11227861232262278, "p50": 0.056784896850585936, "p90": 0.22532137298583987, "p95": 0.28138824081420893, "p99": 0.3262417350769043, "values": [ 0.3374551086425781, 0.05712076950073242, 0.056586238861083986, 0.05654425430297851, 0.056784896850585936 ] }, "throughput": { "unit": "samples/s", "value": 88.57532936365763 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1461.39136, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "count": 2, "total": 0.3945758781433105, "mean": 0.19728793907165526, "stdev": 0.14016716957092284, "p50": 0.19728793907165526, "p90": 0.30942167472839355, "p95": 0.3234383916854858, "p99": 0.33465176525115964, "values": [ 0.3374551086425781, 0.05712076950073242 ] }, "throughput": { "unit": "samples/s", "value": 20.27493428550234 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1461.39136, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1755.291648 }, "latency": { "unit": "s", "count": 3, "total": 0.16991539001464845, "mean": 0.05663846333821615, "stdev": 0.00010495318301840867, "p50": 0.056586238861083986, "p90": 0.056745165252685546, "p95": 0.05676503105163574, "p99": 0.0567809236907959, "values": [ 0.056586238861083986, 0.05654425430297851, 0.056784896850585936 ] }, "throughput": { "unit": "samples/s", "value": 105.93507744323934 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "model": "google/vit-base-patch16-224", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1446.354944, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1756.126208 }, "latency": { "unit": "s", "count": 5, "total": 0.48406525039672854, "mean": 0.09681305007934571, "stdev": 0.1110534407118009, "p50": 0.04146995162963867, "p90": 0.20794796142578126, "p95": 0.26343380432128904, "p99": 0.3078224786376953, "values": [ 0.3189196472167969, 0.04103168106079102, 0.041490432739257815, 0.04146995162963867, 0.04115353775024414 ] }, "throughput": { "unit": "samples/s", "value": 103.29185984538483 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1446.354944, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1756.126208 }, "latency": { "unit": "s", "count": 2, "total": 0.3599513282775879, "mean": 0.17997566413879396, "stdev": 0.13894398307800293, "p50": 0.17997566413879396, "p90": 0.2911308506011963, "p95": 0.30502524890899657, "p99": 0.31614076755523685, "values": [ 0.3189196472167969, 0.04103168106079102 ] }, "throughput": { "unit": "samples/s", "value": 22.225227055782792 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1446.354944, "max_global_vram": 2628.255744, "max_process_vram": 0, "max_reserved": 1956.642816, "max_allocated": 1756.126208 }, "latency": { "unit": "s", "count": 3, "total": 0.12411392211914063, "mean": 0.041371307373046874, "stdev": 0.00015421321911462263, "p50": 0.04146995162963867, "p90": 0.04148633651733399, "p95": 0.0414883846282959, "p99": 0.041490023117065435, "values": [ 0.041490432739257815, 0.04146995162963867, 0.04115353775024414 ] }, "throughput": { "unit": "samples/s", "value": 145.02804917180256 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1106.747392, "max_global_vram": 3376.939008, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 5, "total": 1.1056629943847656, "mean": 0.2211325988769531, "stdev": 0.28761936387226733, "p50": 0.07699967956542969, "p90": 0.5092378723144532, "p95": 0.6528039031982421, "p99": 0.7676567279052734, "values": [ 0.7963699340820313, 0.07697510528564454, 0.07853977966308594, 0.07699967956542969, 0.07677849578857422 ] }, "throughput": { "unit": "samples/s", "value": 45.22173596650214 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1106.747392, "max_global_vram": 3376.939008, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 2, "total": 0.8733450393676758, "mean": 0.4366725196838379, "stdev": 0.35969741439819336, "p50": 0.4366725196838379, "p90": 0.7244304512023926, "p95": 0.7604001926422119, "p99": 0.7891759857940674, "values": [ 0.7963699340820313, 0.07697510528564454 ] }, "throughput": { "unit": "samples/s", "value": 9.160182561742385 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1106.747392, "max_global_vram": 3376.939008, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 3, "total": 0.23231795501708985, "mean": 0.07743931833902995, "stdev": 0.0007833653511584085, "p50": 0.07699967956542969, "p90": 0.0782317596435547, "p95": 0.07838576965332031, "p99": 0.07850897766113281, "values": [ 0.07853977966308594, 0.07699967956542969, 0.07677849578857422 ] }, "throughput": { "unit": "samples/s", "value": 77.48002085622645 }, "energy": null, "efficiency": null } }
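The report rows above store both the raw per-step latencies (`values`) and derived statistics (`total`, `mean`, `p50`). As a sanity check, the derived fields can be recomputed from the raw values. The following is an illustrative sketch, not part of optimum-benchmark; the helper name `latency_summary` is hypothetical, and the literal values are copied from the "overall" section of the multiple-choice report above.

```python
import json
import statistics

def latency_summary(report: dict) -> dict:
    """Recompute basic latency statistics from a report's raw per-step
    values (illustrative helper, not part of optimum-benchmark)."""
    values = report["latency"]["values"]
    return {
        "total": sum(values),
        "mean": statistics.mean(values),
        "p50": statistics.median(values),
    }

# The "overall" latency section of the multiple-choice report above.
overall = json.loads("""
{ "latency": { "unit": "s", "count": 5,
  "total": 1.1056629943847656, "mean": 0.2211325988769531,
  "p50": 0.07699967956542969,
  "values": [ 0.7963699340820313, 0.07697510528564454,
              0.07853977966308594, 0.07699967956542969,
              0.07677849578857422 ] } }
""")

summary = latency_summary(overall)
# The recomputed statistics agree with the stored fields.
assert abs(summary["total"] - overall["latency"]["total"]) < 1e-9
assert abs(summary["mean"] - overall["latency"]["mean"]) < 1e-9
assert abs(summary["p50"] - overall["latency"]["p50"]) < 1e-9
```

Note the first value (the initial warmup step, ~0.80 s) dominates the mean, which is why the median (`p50`) is a better indicator of steady-state step time here.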
cuda_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1106.747392, "max_global_vram": 3376.939008, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 5, "total": 1.1056629943847656, "mean": 0.2211325988769531, "stdev": 0.28761936387226733, "p50": 0.07699967956542969, "p90": 0.5092378723144532, "p95": 0.6528039031982421, "p99": 0.7676567279052734, "values": [ 0.7963699340820313, 0.07697510528564454, 0.07853977966308594, 0.07699967956542969, 0.07677849578857422 ] }, "throughput": { "unit": "samples/s", "value": 45.22173596650214 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1106.747392, "max_global_vram": 3376.939008, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 2, "total": 0.8733450393676758, "mean": 0.4366725196838379, "stdev": 0.35969741439819336, "p50": 0.4366725196838379, "p90": 0.7244304512023926, "p95": 0.7604001926422119, "p99": 0.7891759857940674, "values": [ 0.7963699340820313, 0.07697510528564454 ] }, "throughput": { "unit": "samples/s", "value": 9.160182561742385 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1106.747392, "max_global_vram": 3376.939008, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 3, "total": 0.23231795501708985, "mean": 0.07743931833902995, "stdev": 0.0007833653511584085, "p50": 0.07699967956542969, "p90": 0.0782317596435547, "p95": 0.07838576965332031, "p99": 0.07850897766113281, "values": [ 0.07853977966308594, 0.07699967956542969, 0.07677849578857422 ] }, "throughput": { "unit": "samples/s", "value": 77.48002085622645 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1093.496832, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 5, "total": 0.8026234703063965, "mean": 0.16052469406127928, "stdev": 0.22240891148008993, "p50": 0.04907724761962891, "p90": 0.38326721343994147, "p95": 0.49430444412231433, "p99": 0.5831342286682129, "values": [ 0.6053416748046875, 0.05015552139282226, 0.04897484970092773, 0.04907417678833008, 0.04907724761962891 ] }, "throughput": { "unit": "samples/s", "value": 62.295711313940046 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1093.496832, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 2, "total": 0.6554971961975098, "mean": 0.3277485980987549, "stdev": 0.27759307670593264, "p50": 0.3277485980987549, "p90": 0.549823059463501, "p95": 0.5775823671340942, "p99": 0.5997898132705688, "values": [ 0.6053416748046875, 0.05015552139282226 ] }, "throughput": { "unit": "samples/s", "value": 12.204476306546239 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1093.496832, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.23424 }, "latency": { "unit": "s", "count": 3, "total": 0.14712627410888673, "mean": 0.049042091369628914, "stdev": 0.000047563564546161045, "p50": 0.04907417678833008, "p90": 0.04907663345336914, "p95": 0.049076940536499025, "p99": 0.04907718620300293, "values": [ 0.04897484970092773, 0.04907417678833008, 0.04907724761962891 ] }, "throughput": { "unit": "samples/s", "value": 122.34388527149387 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1095.077888, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 5, "total": 1.094946807861328, "mean": 0.21898936157226562, "stdev": 0.28533450851903186, "p50": 0.07649485015869141, "p90": 0.5046370269775391, "p95": 0.6471473098754882, "p99": 0.7611555361938477, "values": [ 0.7896575927734375, 0.07649485015869141, 0.0771061782836914, 0.07594290924072265, 0.07574527740478515 ] }, "throughput": { "unit": "samples/s", "value": 45.664318705728725 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1095.077888, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 2, "total": 0.8661524429321289, "mean": 0.43307622146606445, "stdev": 0.35658137130737305, "p50": 0.43307622146606445, "p90": 0.7183413185119629, "p95": 0.7539994556427002, "p99": 0.78252596534729, "values": [ 0.7896575927734375, 0.07649485015869141 ] }, "throughput": { "unit": "samples/s", "value": 9.236249421543079 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1095.077888, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 3, "total": 0.2287943649291992, "mean": 0.07626478830973306, "stdev": 0.000600398424299626, "p50": 0.07594290924072265, "p90": 0.07687352447509765, "p95": 0.07698985137939453, "p99": 0.07708291290283202, "values": [ 0.0771061782836914, 0.07594290924072265, 0.07574527740478515 ] }, "throughput": { "unit": "samples/s", "value": 78.67326629993764 }, "energy": null, "efficiency": null } }
cuda_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1095.077888, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 5, "total": 1.094946807861328, "mean": 0.21898936157226562, "stdev": 0.28533450851903186, "p50": 0.07649485015869141, "p90": 0.5046370269775391, "p95": 0.6471473098754882, "p99": 0.7611555361938477, "values": [ 0.7896575927734375, 0.07649485015869141, 0.0771061782836914, 0.07594290924072265, 0.07574527740478515 ] }, "throughput": { "unit": "samples/s", "value": 45.664318705728725 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1095.077888, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 2, "total": 0.8661524429321289, "mean": 0.43307622146606445, "stdev": 0.35658137130737305, "p50": 0.43307622146606445, "p90": 0.7183413185119629, "p95": 0.7539994556427002, "p99": 0.78252596534729, "values": [ 0.7896575927734375, 0.07649485015869141 ] }, "throughput": { "unit": "samples/s", "value": 9.236249421543079 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1095.077888, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 3, "total": 0.2287943649291992, "mean": 0.07626478830973306, "stdev": 0.000600398424299626, "p50": 0.07594290924072265, "p90": 0.07687352447509765, "p95": 0.07698985137939453, "p99": 0.07708291290283202, "values": [ 0.0771061782836914, 0.07594290924072265, 0.07574527740478515 ] }, "throughput": { "unit": "samples/s", "value": 78.67326629993764 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1080.1152, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 5, "total": 0.778464241027832, "mean": 0.1556928482055664, "stdev": 0.21129125718980005, "p50": 0.05020159912109375, "p90": 0.36730080566406254, "p95": 0.4727875488281249, "p99": 0.557176943359375, "values": [ 0.5782742919921875, 0.050840576171875, 0.04969574356079102, 0.049452030181884765, 0.05020159912109375 ] }, "throughput": { "unit": "samples/s", "value": 64.22902602948513 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1080.1152, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 2, "total": 0.6291148681640625, "mean": 0.31455743408203124, "stdev": 0.26371685791015625, "p50": 0.31455743408203124, "p90": 0.5255309204101563, "p95": 0.5519026062011718, "p99": 0.5729999548339844, "values": [ 0.5782742919921875, 0.050840576171875 ] }, "throughput": { "unit": "samples/s", "value": 12.716278703357135 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1080.1152, "max_global_vram": 3379.03616, "max_process_vram": 0, "max_reserved": 2730.491904, "max_allocated": 2516.250112 }, "latency": { "unit": "s", "count": 3, "total": 0.14934937286376954, "mean": 0.049783124287923176, "stdev": 0.0003121857804388603, "p50": 0.04969574356079102, "p90": 0.05010042800903321, "p95": 0.050151013565063476, "p99": 0.0501914820098877, "values": [ 0.04969574356079102, 0.049452030181884765, 0.05020159912109375 ] }, "throughput": { "unit": "samples/s", "value": 120.52276922795568 }, "energy": null, "efficiency": null } }
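Because the text-classification benchmark appears twice (once under PyTorch 2.3.0+cu121 / optimum-benchmark 0.2.1, once under PyTorch 2.2.2 / optimum-benchmark 0.2.0, on different kernels and library versions), the rows can be compared directly on the steady-state train phase. The following is a minimal sketch with the `train.latency.mean` values copied from the two reports; it only illustrates how such a comparison would be computed, and the environments differ in more than the PyTorch version, so the ratio is not attributable to PyTorch alone.

```python
# Train-phase mean step latency (seconds) from the two
# text-classification reports above.
train_mean = {
    "torch-2.3.0+cu121": 0.07626478830973306,
    "torch-2.2.2": 0.049783124287923176,
}

# Ratio of the newer environment's step time to the older one's.
ratio = train_mean["torch-2.3.0+cu121"] / train_mean["torch-2.2.2"]
print(f"2.3.0 env train step takes {ratio:.2f}x the 2.2.2 env step time")
```

The same gap shows up in the reported train throughputs (78.67 vs 120.52 samples/s), which are just the inverse view of the step latencies.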
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1123.81952, "max_global_vram": 3406.299136, "max_process_vram": 0, "max_reserved": 2759.852032, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 5, "total": 1.1130573043823244, "mean": 0.22261146087646488, "stdev": 0.2966186784115287, "p50": 0.07342182159423828, "p90": 0.5207038085937501, "p95": 0.6682699951171873, "p99": 0.7863229443359374, "values": [ 0.815836181640625, 0.0780052490234375, 0.07342182159423828, 0.07328665924072265, 0.07250739288330078 ] }, "throughput": { "unit": "samples/s", "value": 44.92131699162318 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1123.81952, "max_global_vram": 3406.299136, "max_process_vram": 0, "max_reserved": 2759.852032, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 2, "total": 0.8938414306640625, "mean": 0.44692071533203126, "stdev": 0.36891546630859373, "p50": 0.44692071533203126, "p90": 0.7420530883789063, "p95": 0.7789446350097656, "p99": 0.8084578723144531, "values": [ 0.815836181640625, 0.0780052490234375 ] }, "throughput": { "unit": "samples/s", "value": 8.950133352015863 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1123.81952, "max_global_vram": 3406.299136, "max_process_vram": 0, "max_reserved": 2759.852032, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 3, "total": 0.21921587371826173, "mean": 0.07307195790608724, "stdev": 0.0004030032788678646, "p50": 0.07328665924072265, "p90": 0.07339478912353516, "p95": 0.07340830535888672, "p99": 0.07341911834716797, "values": [ 0.07342182159423828, 0.07328665924072265, 0.07250739288330078 ] }, "throughput": { "unit": "samples/s", "value": 82.11084213332911 }, "energy": null, "efficiency": null } }
cuda_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1123.81952, "max_global_vram": 3406.299136, "max_process_vram": 0, "max_reserved": 2759.852032, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 5, "total": 1.1130573043823244, "mean": 0.22261146087646488, "stdev": 0.2966186784115287, "p50": 0.07342182159423828, "p90": 0.5207038085937501, "p95": 0.6682699951171873, "p99": 0.7863229443359374, "values": [ 0.815836181640625, 0.0780052490234375, 0.07342182159423828, 0.07328665924072265, 0.07250739288330078 ] }, "throughput": { "unit": "samples/s", "value": 44.92131699162318 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1123.81952, "max_global_vram": 3406.299136, "max_process_vram": 0, "max_reserved": 2759.852032, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 2, "total": 0.8938414306640625, "mean": 0.44692071533203126, "stdev": 0.36891546630859373, "p50": 0.44692071533203126, "p90": 0.7420530883789063, "p95": 0.7789446350097656, "p99": 0.8084578723144531, "values": [ 0.815836181640625, 0.0780052490234375 ] }, "throughput": { "unit": "samples/s", "value": 8.950133352015863 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1123.81952, "max_global_vram": 3406.299136, "max_process_vram": 0, "max_reserved": 2759.852032, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 3, "total": 0.21921587371826173, "mean": 0.07307195790608724, "stdev": 0.0004030032788678646, "p50": 0.07328665924072265, "p90": 0.07339478912353516, "p95": 0.07340830535888672, "p99": 0.07341911834716797, "values": [ 0.07342182159423828, 0.07328665924072265, 0.07250739288330078 ] }, "throughput": { "unit": "samples/s", "value": 82.11084213332911 }, "energy": null, "efficiency": null }
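Each measurement row's aggregate latency fields can be reproduced from its raw `values` list. Below is a minimal sketch in Python using the three train-step latencies from the row above; it assumes the benchmark reports the population standard deviation (divide by N, not N-1) and numpy-style linear-interpolation percentiles — both assumptions inferred from the numbers in the rows rather than from the optimum-benchmark source.

```python
import math

# Raw per-step latencies (seconds) from the "train" measurement row above
values = [0.07342182159423828, 0.07328665924072265, 0.07250739288330078]

total = sum(values)
mean = total / len(values)
# Population standard deviation: divide by N, not N-1
stdev = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def percentile(data, q):
    """Percentile with linear interpolation (numpy's default method)."""
    s = sorted(data)
    idx = q / 100 * (len(s) - 1)
    lo, hi = math.floor(idx), math.ceil(idx)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

print(round(total, 6))                   # ≈ 0.219216
print(round(mean, 6))                    # ≈ 0.073072
print(round(stdev, 6))                   # ≈ 0.000403
print(round(percentile(values, 50), 6))  # ≈ 0.073287
print(round(percentile(values, 90), 6))  # ≈ 0.073395
```

These reproduce the `total`, `mean`, `stdev`, `p50`, and `p90` fields of the row above to within floating-point rounding.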
{ "name": "cuda_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1107.173376, "max_global_vram": 3563.585536, "max_process_vram": 0, "max_reserved": 2915.04128, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 5, "total": 0.8139294586181639, "mean": 0.16278589172363278, "stdev": 0.2273662362653359, "p50": 0.04927385711669922, "p90": 0.3902293930053711, "p95": 0.5038737297058105, "p99": 0.594789199066162, "values": [ 0.61751806640625, 0.049296382904052735, 0.04860006332397461, 0.0492410888671875, 0.04927385711669922 ] }, "throughput": { "unit": "samples/s", "value": 61.4303849929289 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1107.173376, "max_global_vram": 3563.585536, "max_process_vram": 0, "max_reserved": 2915.04128, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 2, "total": 0.6668144493103026, "mean": 0.3334072246551513, "stdev": 0.2841108417510986, "p50": 0.3334072246551513, "p90": 0.5606958980560303, "p95": 0.5891069822311401, "p99": 0.6118358495712279, "values": [ 0.61751806640625, 0.049296382904052735 ] }, "throughput": { "unit": "samples/s", "value": 11.99734050195603 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1107.173376, "max_global_vram": 3563.585536, "max_process_vram": 0, "max_reserved": 2915.04128, "max_allocated": 2523.776 }, "latency": { "unit": "s", "count": 3, "total": 0.1471150093078613, "mean": 0.04903833643595377, "stdev": 0.00031019448743967263, "p50": 0.0492410888671875, "p90": 0.04926730346679687, "p95": 0.049270580291748044, "p99": 0.04927320175170898, "values": [ 0.04860006332397461, 0.0492410888671875, 0.04927385711669922 ] }, "throughput": { "unit": "samples/s", "value": 122.35325331307405 }, "energy": null, "efficiency": null } }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1157.439488, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 5, "total": 1.2183490295410158, "mean": 0.24366980590820314, "stdev": 0.2616822144118372, "p50": 0.11301580810546875, "p90": 0.5056716674804689, "p95": 0.6363524963378905, "p99": 0.7408971594238282, "values": [ 0.7670333251953125, 0.11362918090820312, 0.11245260620117188, 0.11221810913085938, 0.11301580810546875 ] }, "throughput": { "unit": "samples/s", "value": 41.03914296122214 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1157.439488, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 2, "total": 0.8806625061035156, "mean": 0.4403312530517578, "stdev": 0.3267020721435547, "p50": 0.4403312530517578, "p90": 0.7016929107666016, "p95": 0.734363117980957, "p99": 0.7604992837524415, "values": [ 0.7670333251953125, 0.11362918090820312 ] }, "throughput": { "unit": "samples/s", "value": 9.08407016825996 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1157.439488, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 3, "total": 0.3376865234375, "mean": 0.11256217447916667, "stdev": 0.000334748481878829, "p50": 0.11245260620117188, "p90": 0.11290316772460939, "p95": 0.11295948791503907, "p99": 0.11300454406738282, "values": [ 0.11245260620117188, 0.11221810913085938, 0.11301580810546875 ] }, "throughput": { "unit": "samples/s", "value": 53.303874305577644 }, "energy": null, "efficiency": null } }
cuda_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.3.0+cu121", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "hub_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.215-203.850.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.12", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.1", "optimum_benchmark_commit": null, "transformers_version": "4.41.1", "transformers_commit": null, "accelerate_version": "0.30.1", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.3", "timm_commit": null, "peft_version": null, "peft_commit": null }
{ "memory": { "unit": "MB", "max_ram": 1157.439488, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 5, "total": 1.2183490295410158, "mean": 0.24366980590820314, "stdev": 0.2616822144118372, "p50": 0.11301580810546875, "p90": 0.5056716674804689, "p95": 0.6363524963378905, "p99": 0.7408971594238282, "values": [ 0.7670333251953125, 0.11362918090820312, 0.11245260620117188, 0.11221810913085938, 0.11301580810546875 ] }, "throughput": { "unit": "samples/s", "value": 41.03914296122214 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1157.439488, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 2, "total": 0.8806625061035156, "mean": 0.4403312530517578, "stdev": 0.3267020721435547, "p50": 0.4403312530517578, "p90": 0.7016929107666016, "p95": 0.734363117980957, "p99": 0.7604992837524415, "values": [ 0.7670333251953125, 0.11362918090820312 ] }, "throughput": { "unit": "samples/s", "value": 9.08407016825996 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 1157.439488, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 3, "total": 0.3376865234375, "mean": 0.11256217447916667, "stdev": 0.000334748481878829, "p50": 0.11245260620117188, "p90": 0.11290316772460939, "p95": 0.11295948791503907, "p99": 0.11300454406738282, "values": [ 0.11245260620117188, 0.11221810913085938, 0.11301580810546875 ] }, "throughput": { "unit": "samples/s", "value": 53.303874305577644 }, "energy": null, "efficiency": null }
{ "name": "cuda_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.2.2", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "model": "microsoft/deberta-v3-base", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": true, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7R32", "cpu_count": 16, "cpu_ram_mb": 66697.29792, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.214-202.855.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 1140.764672, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 5, "total": 1.0522736740112304, "mean": 0.21045473480224608, "stdev": 0.2521626361507063, "p50": 0.08427519989013672, "p90": 0.46303764648437507, "p95": 0.5889081359863281, "p99": 0.6896045275878907, "values": [ 0.7147786254882813, 0.08542617797851562, 0.08360550689697266, 0.08418816375732421, 0.08427519989013672 ] }, "throughput": { "unit": "samples/s", "value": 47.51615595342393 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 1140.764672, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 2, "total": 0.8002048034667969, "mean": 0.40010240173339845, "stdev": 0.3146762237548828, "p50": 0.40010240173339845, "p90": 0.6518433807373047, "p95": 0.683311003112793, "p99": 0.7084851010131836, "values": [ 0.7147786254882813, 0.08542617797851562 ] }, "throughput": { "unit": "samples/s", "value": 9.997440611879489 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 1140.764672, "max_global_vram": 4597.481472, "max_process_vram": 0, "max_reserved": 3948.937216, "max_allocated": 3702.95552 }, "latency": { "unit": "s", "count": 3, "total": 0.2520688705444336, "mean": 0.08402295684814454, "stdev": 0.0002973125946472148, "p50": 0.08418816375732421, "p90": 0.08425779266357422, "p95": 0.08426649627685547, "p99": 0.08427345916748047, "values": [ 0.08360550689697266, 0.08418816375732421, 0.08427519989013672 ] }, "throughput": { "unit": "samples/s", "value": 71.40905563278207 }, "energy": null, "efficiency": null } }