OferB committed
Commit 953a4bb · 1 Parent(s): 18f9b3d

Update hf_benchmark_example.py

Files changed (1)
  hf_benchmark_example.py +2 -2
hf_benchmark_example.py CHANGED
@@ -4,10 +4,10 @@ You need a file called "sample.txt" (default path) with text to take tokens for
  You can use our attached "sample.txt" file with one of Deci's blogs as a prompt.
 
  # Run this and record tokens per second (652 tokens per second on A10 for DeciLM-6b)
- python time_hf.py --model Deci/DeciLM-6b
+ python hf_benchmark_example.py --model Deci/DeciLM-6b
 
  # Run this and record tokens per second (136 tokens per second on A10 for meta-llama/Llama-2-7b-hf), CUDA OOM above batch size 8
- python time_hf.py --model meta-llama/Llama-2-7b-hf --batch_size 8
+ python hf_benchmark_example.py --model meta-llama/Llama-2-7b-hf --batch_size 8
  """
 
  import json
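
For context, below is a minimal sketch of how a tokens-per-second measurement like the one described in the docstring can be done with transformers. This is not the contents of hf_benchmark_example.py; the prompt handling, generation length, dtype, and device placement are assumptions for illustration only.

# Minimal sketch (assumed, not the actual benchmark script): time generation
# of a causal LM and report tokens per second.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Deci/DeciLM-6b"  # or "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

# "sample.txt" is the prompt file referenced in the docstring above.
with open("sample.txt") as f:
    prompt = f.read()

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=512)
elapsed = time.perf_counter() - start

# Count only newly generated tokens, not the prompt tokens.
new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens per second")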