Code trying to download model from Hugging Face instead of using locally downloaded model

#41
by sharedJackpot - opened

Hi all,

When we use the locally downloaded nvidia/NV-Embed-v1 model like this on our local workstation, it runs fine. But when the same script is used on another server, the code tries to download the model instead of using the local copy.

The model was downloaded from Hugging Face with:

git lfs install
git clone https://huggingface.co/nvidia/NV-Embed-v1
# When prompted for a password, an access token with write permission was used.
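
(For reference, a hedged alternative to the git lfs clone, not part of the original steps: huggingface_hub's snapshot_download can pull the full repository into a plain local directory. The target path and token below are placeholders.)

# Hedged sketch: download the repo files into a plain local folder without git lfs.
# local_dir and the token value are placeholders; a read token is sufficient.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/NV-Embed-v1",
    local_dir="/home/pc1/path/to/Download/NV-Embed-v1",
    token="hf_xxx",
)
print(local_dir)  # path containing config.json, weights, and the custom code files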

Script on the workstation:

from sentence_transformers import SentenceTransformer
# Load the local NV-Embed-v1 model using sentence-transformers with trust_remote_code
model_path = "/home/pc1/path/to/Download/NV-Embed-v1"
model = SentenceTransformer(model_path, device='cpu', trust_remote_code=True)

The model was downloaded on the other server using the same auth token and the same process as above.

Running the same script on that server produces this error:

/home/path/to/lib/python3.8/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
  File "/home/path/to/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "/home/path/to/lib/python3.8/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/nvidia/NV-Embed-v1/resolve/main/config.json

Both the local workstation and the server have the same versions of:
sentence-transformers==2.7.0
huggingface-hub==0.23.0

Python:
Workstation: Python 3.10.12
Server: Python 3.8.10

What is going wrong in the server case? Any ideas or help, please.

The issue is still unresolved.

Any ideas or help, please.

NVIDIA org

Hi @sharedJackpot, thanks for asking the question and sorry for the delayed response. To save the model locally, we suggest doing it the following way:

import torch
from sentence_transformers import SentenceTransformer

## save the model
model = SentenceTransformer('nvidia/NV-Embed-v1', trust_remote_code=True)
model.max_seq_length = 4096
model.tokenizer.padding_side="right"
model = model.to(torch.float16)
model.save("<your_local_directory>")  ## change path of <your_local_directory>

## load the model
model = SentenceTransformer("<your_local_directory>", trust_remote_code=True)
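
(A hedged addition, not part of the original reply: one way to confirm that the saved copy loads with no network access is to enable the hub's offline mode before importing the libraries. If the saved config.json still references the hub, as discussed further down in this thread, this fails loudly instead of silently re-downloading. <your_local_directory> is the placeholder from the snippet above.)

# Hedged sketch: verify the saved copy loads without any network access.
# HF_HUB_OFFLINE must be set before huggingface_hub is first imported.
import os
os.environ["HF_HUB_OFFLINE"] = "1"

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("<your_local_directory>", trust_remote_code=True)
embeddings = model.encode(["a quick smoke-test sentence"])
print(embeddings.shape)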

@nada5 Thank you for your reply and for sharing the code snippet.

As mentioned earlier, we were able to download the model but not able to use it.

However, the issue is now resolved.

Placing my token in the file:

~/.cache/huggingface/token

worked for me.
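
(A hedged note, not part of the original fix: the token can equivalently be written to that file programmatically, or exported as an environment variable. The token value below is a placeholder.)

# Hedged sketch: two equivalent ways to make the token available on the server.
# huggingface_hub.login writes it to ~/.cache/huggingface/token (or under $HF_HOME).
from huggingface_hub import login

login(token="hf_xxx")  # placeholder; never hard-code a real token in shared scripts

# Alternatively, export HF_TOKEN in the shell before running the script;
# huggingface_hub picks it up automatically.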

Thank you.

For those encountering the same issue, you can change the two instances of nvidia/NV-Embed-v1 in config.json to {your_local_directory}, and the model should then load locally without problems.
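
(A hedged sketch of that edit; the local directory path is a placeholder. It rewrites the repo id inside config.json, presumably the two auto_map entries, so they resolve to the local copy.)

import json
from pathlib import Path

local_dir = "/path/to/your_local_directory"  # placeholder path
config_path = Path(local_dir) / "config.json"

# Replace every occurrence of the repo id with the local path and write it back.
text = config_path.read_text().replace("nvidia/NV-Embed-v1", local_dir)
config_path.write_text(text)

print(json.loads(text).get("auto_map"))  # sanity check: should now point at local_dir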
