Error during HF Inference Endpoint Deployment

#7
by ValentinEthon - opened

Hey there!

First of all: Congrats on your new model! Your results sound amazing, great job!

Trying to deploy the model on AWS (GPU · Nvidia Tesla T4 · 4x GPU · 64 GB), the following error occurs:

```
2023/12/12 14:45:51 ~ Error: DownloadError
2023/12/12 14:45:51 ~ [ERROR] (target: text_generation_launcher, span: download) Download encountered an error:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/peft/utils/config.py", line 117, in from_pretrained
    config_file = hf_hub_download(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '/repository'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/utils/peft.py", line 16, in download_and_unload_peft
    model = AutoPeftModelForCausalLM.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/peft/auto.py", line 69, in from_pretrained
    peft_config = PeftConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/peft/utils/config.py", line 121, in from_pretrained
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at '/repository'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/peft/utils/config.py", line 117, in from_pretrained
    config_file = hf_hub_download(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '/repository'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/bin/text-generation-server", line 8, in <module>
    sys.exit(app())
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 204, in download_weights
    utils.download_and_unload_peft(
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/utils/peft.py", line 24, in download_and_unload_peft
    model = AutoPeftModelForSeq2SeqLM.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/peft/auto.py", line 69, in from_pretrained
    peft_config = PeftConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/peft/utils/config.py", line 121, in from_pretrained
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at '/repository'
```

The part that confuses me most is "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '/repository'". It looks like the server treats the local path '/repository' as a Hub repo id and then fails to download a PEFT config file (adapter_config.json).
Has anyone else had this issue?
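For what it's worth, the repo-id validation failure can be reproduced outside the container with a short snippet (a minimal sketch, assuming a recent `huggingface_hub` that exports `validate_repo_id` and `HFValidationError` from `huggingface_hub.utils`):

```python
# TGI passes the container-local path '/repository' to PEFT, which forwards it
# to the Hub client as if it were a repo id; the same validator rejects it here.
from huggingface_hub.utils import HFValidationError, validate_repo_id

try:
    validate_repo_id("/repository")  # the path TGI mounts the model at
except HFValidationError as err:
    print(f"Rejected: {err}")
```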

Thanks in advance

Hi @ValentinEthon !

We took a deeper look here, and it looks like it could be a TGI issue. Here is a related comment: https://github.com/huggingface/text-generation-inference/issues/1283#issuecomment-1841488787.

It appears to be due to a recent commit; perhaps you can try downgrading to an earlier TGI Docker image SHA and see if that helps?

I tried an earlier SHA (ghcr.io/huggingface/text-generation-inference:sha-96a982a), and it seems to work better. Could you see if that solves your issue as well?
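For anyone running TGI themselves rather than through a managed endpoint, pinning the image to that SHA could look like the sketch below (the repo id `Nexusflow/NexusRaven-13B` and the port/volume choices are assumptions for illustration; adjust to your setup):

```shell
# Run the pinned TGI image (the pre-regression SHA mentioned above)
# instead of :latest, so the PEFT download path is not triggered.
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v "$PWD/data:/data" \
  ghcr.io/huggingface/text-generation-inference:sha-96a982a \
  --model-id Nexusflow/NexusRaven-13B
```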

Hi @venkat-srinivasan-nexusflow

Thanks for the quick response! Apologies, I was unclear in my post: I'm trying to quickly test the model on AWS via Hugging Face's hosted Inference Endpoints, i.e. the 'Deploy' button on this page.
As far as I can tell, I can specify the revision of NexusRaven there, but not the version of TGI.
