ssong1 committed
Commit bca1248
1 Parent(s): 60678b2

Update README.md

Files changed (1): README.md (+3, -2)
README.md

````diff
@@ -30,7 +30,7 @@ A higher output tokens throughput indicates a higher throughput of the LLM infer
 
 testscript [token_benchmark_ray.py](https://github.com/ray-project/llmperf/blob/main/token_benchmark_ray.py)
 
-```
+
 For each provider, we perform:
 - Total number of requests: 100
 - Concurrency: 1
@@ -38,7 +38,8 @@ For each provider, we perform:
 - Expected output length: 1024
 - Tested models: claude-instant-v1-100k
 
-python token_benchmark_ray.py \
+```
+python token_benchmark_ray.py \
 --model bedrock/anthropic.claude-instant-v1 \
 --mean-input-tokens 1024 \
 --stddev-input-tokens 0 \
````
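The diff above fixes the README's Markdown so the benchmark command is enclosed in a code fence; the command itself is cut off at the hunk boundary. For orientation, a plausible full invocation matching the settings the README lists (100 total requests, concurrency 1, expected output length 1024) might look like the sketch below. The flags beyond those visible in the diff (`--mean-output-tokens`, `--stddev-output-tokens`, `--max-num-completed-requests`, `--num-concurrent-requests`, `--llm-api`) are assumptions based on llmperf's CLI, not taken from this commit:

```shell
# Hypothetical complete command; flag names beyond the diff are assumed
# from llmperf's token_benchmark_ray.py CLI, not from this README.
python token_benchmark_ray.py \
  --model bedrock/anthropic.claude-instant-v1 \
  --mean-input-tokens 1024 \
  --stddev-input-tokens 0 \
  --mean-output-tokens 1024 \
  --stddev-output-tokens 0 \
  --max-num-completed-requests 100 \
  --num-concurrent-requests 1 \
  --llm-api litellm
```

Setting the stddev flags to 0 fixes every request at exactly 1024 input tokens and an expected 1024 output tokens, which keeps the throughput comparison across providers apples-to-apples.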