print(scores)
```

## Load the model locally

1. Make sure `gemma_config.py` and `gemma_model.py` from [BAAI/bge-reranker-v2.5-gemma2-lightweight](https://huggingface.co/BAAI/bge-reranker-v2.5-gemma2-lightweight/tree/main) are in your local path.
2. Modify the following part of `config.json`:

```
"auto_map": {
    "AutoConfig": "gemma_config.CostWiseGemmaConfig",
    "AutoModel": "gemma_model.CostWiseGemmaModel",
    "AutoModelForCausalLM": "gemma_model.CostWiseGemmaForCausalLM"
},
```
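Step 2 can also be scripted instead of edited by hand. A minimal sketch (the `patch_auto_map` helper name is ours, not part of the repository; it only rewrites the `auto_map` entry shown above):

```python
import json
import tempfile
from pathlib import Path

def patch_auto_map(config_path):
    """Point config.json's auto_map at the local gemma_config.py / gemma_model.py."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["auto_map"] = {
        "AutoConfig": "gemma_config.CostWiseGemmaConfig",
        "AutoModel": "gemma_model.CostWiseGemmaModel",
        "AutoModelForCausalLM": "gemma_model.CostWiseGemmaForCausalLM",
    }
    path.write_text(json.dumps(config, indent=2))

# Demonstrate on a minimal stand-in config in a temporary directory.
cfg = Path(tempfile.mkdtemp()) / "config.json"
cfg.write_text(json.dumps({"model_type": "gemma2"}))
patch_auto_map(cfg)
print(json.loads(cfg.read_text())["auto_map"]["AutoModel"])  # gemma_model.CostWiseGemmaModel
```

After patching, loading the model from the local directory with `trust_remote_code=True` (e.g. `AutoModel.from_pretrained(local_dir, trust_remote_code=True)`) should resolve the custom classes from the two local files.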

## Evaluation

The configuration that saves 60% of FLOPs is: `compress_ratios=2`, `compress_layer=[8]`, `cutoff_layers=[25]`.
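These settings can be kept as a reusable keyword-argument fragment. A sketch, assuming they are passed to the reranker's scoring call (the parameter names are copied verbatim from this section; the exact call they feed into depends on your inference wrapper):

```python
# 60%-FLOPs-saving configuration from the Evaluation section above.
# Pass these as keyword arguments to the model's scoring/inference entry point.
flops_saving_60 = {
    "compress_ratios": 2,     # token-compression ratio
    "compress_layer": [8],    # layer(s) at which compression is applied
    "cutoff_layers": [25],    # layer(s) at which forward computation stops
}
print(flops_saving_60["compress_ratios"])  # 2
```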