Error does not appear to have a file named jinaai/jina-bert-implementation--configuration_bert.py

#4
by StarryNi - opened

When I use the model via sentence_transformers, the same thing happens with the en model:

OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like jinaai/jina-bert-implementation is not the path to a directory containing a file named configuration_bert.py.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
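As the error message suggests, transformers can be told to use only locally cached files. One option (a sketch, not something discussed in this thread) is to set the offline environment variables before importing the libraries:

```python
import os

# Set these BEFORE importing transformers / sentence_transformers,
# so the libraries never try to reach huggingface.co.
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: use cache only
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: use cache only

# from sentence_transformers import SentenceTransformer  # import afterwards
```

Note this only helps if every required file (including remote-code files like configuration_bert.py) is already in the local cache.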

Loading and running inference through Transformers directly works fine, though.
Package versions:
sentence_transformers: 2.2.2
transformers: 4.16.2

I fixed it by updating sentence_transformers to version 2.3.0. Sorry for the interruption.

StarryNi changed discussion status to closed

In my case, I have to do development on a server without an internet connection, so I need to download the model locally first and then upload it to the server.

Currently, I have downloaded all the files from jinaai/jina-embeddings-v2-base-zh, but it seems the configuration_bert.py file is missing.

Of course, I can manually download this file from jinaai/jina-bert-implementation, but I don't know which path I should put it in.

My usage:

import sentence_transformers

print(sentence_transformers.__version__)
>>> 3.0.0

from sentence_transformers import SentenceTransformer

bi_encoder = SentenceTransformer(
    '/path/to/the/folder/contains/files/from/jina-embeddings-v2-base-zh',
    trust_remote_code=True
)

>>> OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like jinaai/jina-bert-implementation is not the path to a directory containing a file named configuration_bert.py.

@JHH11

You should edit your local config.json, changing:

  "auto_map": {
    "AutoConfig": "jinaai/jina-bert-implementation--configuration_bert.JinaBertConfig",
    "AutoModel": "jinaai/jina-bert-implementation--modeling_bert.JinaBertModel",
    "AutoModelForMaskedLM": "jinaai/jina-bert-implementation--modeling_bert.JinaBertForMaskedLM",
    "AutoModelForQuestionAnswering": "jinaai/jina-bert-implementation--modeling_bert.JinaBertForQuestionAnswering",
    "AutoModelForSequenceClassification": "jinaai/jina-bert-implementation--modeling_bert.JinaBertForSequenceClassification",
    "AutoModelForTokenClassification": "jinaai/jina-bert-implementation--modeling_bert.JinaBertForTokenClassification"
  },

to

  "auto_map": {
    "AutoConfig": "configuration_bert.JinaBertConfig",
    "AutoModel": "modeling_bert.JinaBertModel",
    "AutoModelForMaskedLM": "modeling_bert.JinaBertForMaskedLM",
    "AutoModelForQuestionAnswering": "modeling_bert.JinaBertForQuestionAnswering",
    "AutoModelForSequenceClassification": "modeling_bert.JinaBertForSequenceClassification",
    "AutoModelForTokenClassification": "modeling_bert.JinaBertForTokenClassification"
  },

And then you can copy the configuration_bert.py and modeling_bert.py files from https://huggingface.co/jinaai/jina-bert-implementation into the same directory as the downloaded model. Then you should be able to load the model with:

from sentence_transformers import SentenceTransformer

bi_encoder = SentenceTransformer('/path/to/the/folder/contains/files/from/jina-embeddings-v2-base-zh')
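The auto_map rewrite above can also be scripted instead of edited by hand. A minimal sketch (the helper name localize_auto_map is mine, not part of any library) that strips the "repo--" prefix from every auto_map entry:

```python
import json
from pathlib import Path

def localize_auto_map(config: dict) -> dict:
    """Return a copy of config whose auto_map entries point at local
    .py files ("modeling_bert.JinaBertModel") instead of Hub references
    ("jinaai/jina-bert-implementation--modeling_bert.JinaBertModel")."""
    auto_map = {k: v.split("--")[-1] for k, v in config.get("auto_map", {}).items()}
    return {**config, "auto_map": auto_map}

# Example usage on a downloaded model folder (path is a placeholder):
# cfg_path = Path("/path/to/jina-embeddings-v2-base-zh/config.json")
# cfg = localize_auto_map(json.loads(cfg_path.read_text()))
# cfg_path.write_text(json.dumps(cfg, indent=2))
```

This leaves every other key in config.json untouched and only rewrites the class references.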
  • Tom Aarsen

@tomaarsen

Thanks a lot for your help, Tom! Your explanation really cleared things up for me. I appreciate it! 🤗🤗🤗
