Dataset Preview
Viewer
The full dataset viewer is not available; only a preview of the rows is shown. Dataset generation fails with a DatasetGenerationCastError: all data files in a configuration must have the same columns, but the run summary files (e.g. raw_data/bedrock-anthropic-claude-instant-v1_1024_0_1024_100_summary.json, which carry aggregate fields such as model, num_concurrent_requests, and the results_* statistics) do not match the per-request files (which carry fields such as ttft_s, end_to_end_latency_s, number_input_tokens, and number_output_tokens). To make the viewer work, either edit the data files so they share matching columns, or separate them into different configurations (see https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
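
Until the viewer issue is resolved, the two kinds of files can be loaded separately so that each load only mixes files with the same layout. A minimal sketch, assuming the raw_data naming pattern above; the *_individual_responses.json glob is an assumption about how LLMPerf names its per-request output files:

from datasets import load_dataset

# Run summaries and per-request records have different columns, so load them
# as two separate datasets rather than one mixed configuration.
summaries = load_dataset(
    "ssong1/llmperf-bedrock",
    data_files="raw_data/*_summary.json",
    split="train",
)

# Assumed file-name pattern: LLMPerf typically writes per-request results
# next to each summary as *_individual_responses.json.
per_request = load_dataset(
    "ssong1/llmperf-bedrock",
    data_files="raw_data/*_individual_responses.json",
    split="train",
)

print(summaries)
print(per_request)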

Preview columns (per-request records): request_output_throughput_token_per_s (float64), inter_token_latency_s (float64), error_msg (string), error_code (null), end_to_end_latency_s (float64), number_input_tokens (int64), number_output_tokens (int64), ttft_s (float64), number_total_tokens (int64). The preview lists the individual request measurements from the raw_data files.

End of preview.
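
To sanity-check these per-request records, one can load a single raw_data file into pandas and verify the error rate and the token accounting. A minimal sketch; the file name is illustrative and the column names follow the preview above:

import pandas as pd

# Load one per-request results file (the file name is an assumption).
df = pd.read_json(
    "raw_data/bedrock-anthropic-claude-instant-v1_1024_0_1024_100_individual_responses.json"
)

# Fraction of requests that returned an error code.
error_rate = df["error_code"].notna().mean()

# For successful requests, total tokens should equal input plus output tokens.
ok = df["error_code"].isna()
consistent = (
    df.loc[ok, "number_total_tokens"]
    == df.loc[ok, "number_input_tokens"] + df.loc[ok, "number_output_tokens"]
).all()

print(f"error rate: {error_rate:.2%}, token totals consistent: {consistent}")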

Using LLMPerf, we have benchmarked a selection of LLM inference providers. Our analysis focuses on evaluating their performance, reliability, and efficiency under the following key metrics (a small computation sketch follows the list):

  • Output tokens throughput, which represents the average number of output tokens returned per second. This metric is important for applications that require high throughput, such as summarization and translation, and is easy to compare across different models and providers.
  • Time to first token (TTFT), which represents how long the LLM takes to return the first token after a request is sent. TTFT is especially important for streaming applications, such as chatbots.
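
Both metrics can be recomputed directly from the per-request records shown in the preview. A minimal sketch, assuming the records are already loaded into a pandas DataFrame df with the preview's column names:

import pandas as pd

def key_metrics(df: pd.DataFrame) -> dict:
    """Aggregate the two headline metrics from per-request records."""
    ok = df[df["error_code"].isna()]  # keep successful requests only
    return {
        # Average per-request output token throughput (tokens/s).
        "output_throughput_tokens_per_s": ok["request_output_throughput_token_per_s"].mean(),
        # Average time to first token (s).
        "ttft_s": ok["ttft_s"].mean(),
    }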

Time to First Token (seconds)

For streaming applications, TTFT measures how long it takes for the LLM to return the first token.

Framework Model Median Mean Min Max P25 P75 P95 P99
bedrock claude-instant-v1 1.21 1.29 1.12 2.19 1.17 1.27 1.89 2.17

Output Tokens Throughput (tokens/s)

Output token throughput is measured as the average number of output tokens returned per second. We collect results by sending 100 requests to each LLM inference provider and calculate the mean output token throughput across those 100 requests. A higher output token throughput means the provider returns tokens faster.

Framework Model Median Mean Min Max P25 P75 P95 P99
bedrock claude-instant-v1 65.64 65.98 16.05 110.38 57.29 75.57 99.73 106.42
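
The percentile rows in both tables can be reproduced from the per-request measurements (or read back from the results_* fields of each run's summary JSON). A minimal sketch using numpy; values would be a column such as ttft_s or request_output_throughput_token_per_s:

import numpy as np

def table_row(values):
    """Median, mean, min, max, and the percentiles reported in the tables above."""
    v = np.asarray(list(values), dtype=float)
    return {
        "median": np.median(v),
        "mean": v.mean(),
        "min": v.min(),
        "max": v.max(),
        "p25": np.percentile(v, 25),
        "p75": np.percentile(v, 75),
        "p95": np.percentile(v, 95),
        "p99": np.percentile(v, 99),
    }

# Example: table_row(df["ttft_s"].dropna())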

Run Configurations

Test script: token_benchmark_ray.py

For each provider, we run the benchmark with the following configuration:

  • Total number of requests: 100
  • Concurrency: 1
  • Prompt's token length: 1024
  • Expected output length: 1024
  • Tested models: claude-instant-v1-100k
python token_benchmark_ray.py \
    --model bedrock/anthropic.claude-instant-v1 \
    --mean-input-tokens 1024 \
    --stddev-input-tokens 0 \
    --mean-output-tokens 1024 \
    --stddev-output-tokens 100 \
    --max-num-completed-requests 100 \
    --num-concurrent-requests 1 \
    --llm-api litellm 

We ran the LLMPerf clients from an on-premise Kubernetes bastion host. The results are current as of January 19, 2023, 3pm KST. The detailed results can be found in the raw_data folder.
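
The raw_data files can also be listed and fetched programmatically from the Hub. A minimal sketch using huggingface_hub:

from huggingface_hub import hf_hub_download, list_repo_files

# List everything under raw_data/ in this dataset repository.
files = [
    f for f in list_repo_files("ssong1/llmperf-bedrock", repo_type="dataset")
    if f.startswith("raw_data/")
]

# Download one file locally; the path comes from the listing above.
local_path = hf_hub_download(
    "ssong1/llmperf-bedrock",
    filename=files[0],
    repo_type="dataset",
)
print(local_path)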

Caveats and Disclaimers

  • The endpoint providers' backends may vary widely, so this is not a reflection of how the software runs on any particular hardware.
  • The results may vary with time of day.
  • The results (e.g. the TTFT measurement) depend on client location, and can also be biased by some providers lagging on the first token in order to improve their measured inter-token latency (ITL).
  • The results are only a proxy for the system's capabilities and are also impacted by the existing system load and provider traffic.
  • The results may not correlate with users’ workloads.