Unable to Access Mistral-7B-Instruct-v0.2 Model
I have been using the mistralai/Mistral-7B-Instruct-v0.2 model for a case study, and it was working fine until recently. However, I am now unable to access the model. The error message I receive is as follows:
Model not loaded on the server: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2. Please retry with a higher timeout (current: 120).
HF Team, could you please provide any suggestions or alternatives so that I can access this model again?
Yes, I am also getting the same issue. I tried llm = HuggingFaceEndpoint(timeout=400, repo_id="mistralai/Mistral-7B-Instruct-v0.2"), but no luck.
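For context, here is roughly what that call looks like end to end. This is only a minimal sketch: it assumes the langchain_huggingface integration is installed, a valid token is available in the environment, and the prompt is a placeholder.

from langchain_huggingface import HuggingFaceEndpoint

# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment.
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    timeout=400,          # raise the request timeout (the error above reports 120s)
    max_new_tokens=128,
)
print(llm.invoke("[INST] Say hello. [/INST]"))

Even with the longer timeout, the request still fails with the same "Model not loaded on the server" message.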
Same here. I am getting the following error, even though I have authenticated using huggingface-cli login:
raise ModelNotFoundError(
mlx_lm.utils.ModelNotFoundError: Model not found for path or HF repo: mistralai/Mistral-7B-Instruct-v0.2.
Please make sure you specified the local path or Hugging Face repo id correctly.
If you are trying to access a private or gated Hugging Face repo, make sure you are authenticated:
https://huggingface.co/docs/huggingface_hub/en/guides/cli#huggingface-cli-login
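If the repo is gated or the cached token is stale, downloads can fail like this even after logging in. A quick sanity check with huggingface_hub (a minimal sketch; it only verifies the token and repo visibility, not the mlx_lm download itself):

from huggingface_hub import whoami, model_info

# Confirm the token stored by `huggingface-cli login` is still valid.
print(whoami()["name"])

# Confirm the token can actually see the repo; this raises an error
# if the repo is gated/private and access has not been granted.
print(model_info("mistralai/Mistral-7B-Instruct-v0.2").id)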
I am getting a similar issue.
InferenceTimeoutError: Model not loaded on the server: https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2. Please retry with a higher timeout (current: 120)
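In case it helps anyone else, here is a minimal retry sketch with a higher client timeout using huggingface_hub. The prompt and retry counts are arbitrary, and it still fails if the model is never loaded onto a worker.

import time
from huggingface_hub import InferenceClient, InferenceTimeoutError

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.2", timeout=300)

# Retry a few times, backing off between attempts, in case the model
# is still being loaded onto an inference worker.
for attempt in range(5):
    try:
        print(client.text_generation("[INST] Hello [/INST]", max_new_tokens=64))
        break
    except InferenceTimeoutError:
        time.sleep(30 * (attempt + 1))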
What can we do to escalate this issue?