Update README.md
README.md
CHANGED
@@ -60,7 +60,7 @@ We evaluate the model on the datasets of [Open LLM Leaderboard](https://huggingf
 ### Inference Tool
 
 
-We utilize [PowerInfer](https://github.com/SJTU-IPADS/PowerInfer) for
+We utilize [PowerInfer](https://github.com/SJTU-IPADS/PowerInfer) for inference. Here we present the inference speeds of pure CPU-based inference with fp16 precision. The CPU configuration includes an Intel i9-13900K processor (eight performance cores at 5.4 GHz) and 192 GB of host memory (with a memory bandwidth of 67.2 GB/s).
 
 Dense Inference: 0.85 tokens/s
 
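For context, a minimal sketch of how such a pure-CPU run could be launched, assuming PowerInfer's llama.cpp-style command-line interface; the binary path, flags, and model path below are illustrative assumptions, not commands taken from this repository:

```bash
# Sketch only: PowerInfer builds a llama.cpp-style `main` binary, but the exact
# binary location, flag spellings, and model path here are assumptions.
# -t 8 pins generation to the eight performance cores of the i9-13900K noted above.
./build/bin/main -m ./models/model.powerinfer.gguf -t 8 -n 128 -p "Once upon a time"
```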