Update usage with infinity
#11
by michaelfeil - opened
README.md CHANGED
@@ -295,6 +295,15 @@ with torch.no_grad():
 print(scores)
 ```
 
+#### Using infinity
+
+For a docker-based deployment with infinity:
+```bash
+docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
+michaelf34/infinity:0.0.68 \
+v2 --model-id BAAI/bge-large-zh --revision "main" --dtype float16 --batch-size 32 --engine torch --port 7997
+```
+
 ## Evaluation
 
 `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!**
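Once the container from the diff above is running, the model can be queried over HTTP. A minimal sketch, assuming infinity's OpenAI-compatible `/embeddings` route on the mapped port 7997 (the endpoint path and payload shape are assumptions not stated in this diff; check the infinity docs for your version):

```shell
# Send a test request to the locally running infinity container.
# Assumes the OpenAI-compatible /embeddings endpoint; the model id
# matches the --model-id flag passed to the container above.
curl -X POST http://localhost:7997/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "BAAI/bge-large-zh", "input": ["样例文档-1"]}'
```

The response is a JSON object whose `data` array holds one embedding vector per input string.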