Getting an error while trying to run inference on Hugging Face

#5 by jerinmathews - opened

Error: DownloadError

2024/01/02 17:05:42 ~ ERROR text_generation_launcher (timestamp 2024-01-03T01:05:42.420564Z, span "download"): Download encountered an error:

```
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/peft/utils/config.py", line 117, in from_pretrained
    config_file = hf_hub_download(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 164, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '/repository'.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/bin/text-generation-server", line 8, in <module>
    sys.exit(app())
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 204, in download_weights
    utils.download_and_unload_peft(
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/utils/peft.py", line 24, in download_and_unload_peft
    model = AutoPeftModelForSeq2SeqLM.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/peft/auto.py", line 69, in from_pretrained
    peft_config = PeftConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/peft/utils/config.py", line 121, in from_pretrained
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at '/repository'
```
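For context, the second traceback shows the TGI launcher falling back to its PEFT path: `download_and_unload_peft` calls `AutoPeftModelForSeq2SeqLM.from_pretrained` on the deployed repository, which then fails because no `adapter_config.json` is found at `/repository`. If the repository really is a LoRA adapter, one workaround is to merge the adapter into its base model locally and deploy the merged weights, so the endpoint never takes the PEFT path at all. A minimal sketch, assuming a LoRA adapter and using a hypothetical placeholder repo id:

```python
from peft import AutoPeftModelForSeq2SeqLM  # matches the class in the traceback
from transformers import AutoTokenizer

adapter_id = "your-username/your-adapter"  # hypothetical placeholder, not a real repo

# Load the base model with the adapter applied, as TGI's download step tries to do.
model = AutoPeftModelForSeq2SeqLM.from_pretrained(adapter_id)
# Tokenizer files may live in the adapter repo or the base model repo; adjust as needed.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Fold the LoRA weights into the base model so no adapter files are required at serving time.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
tokenizer.save_pretrained("merged-model")
```

If the repository is supposed to stay an adapter, the immediate thing to check is that `adapter_config.json` was actually uploaded alongside the adapter weights.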

selfrag org

Hi! I've never tried to load the model from PEFT, so I'm not sure why this happens... If possible, I recommend loading the model with vllm, which provides much faster inference!
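For reference, a minimal vllm sketch (the repo id `selfrag/selfrag_llama2_7b` and the instruction-style prompt are assumptions based on the Self-RAG setup; substitute the checkpoint you are actually deploying):

```python
from vllm import LLM, SamplingParams

# Assumed Self-RAG checkpoint id for illustration; swap in your own model repo.
model = LLM(model="selfrag/selfrag_llama2_7b", dtype="half")

# skip_special_tokens=False keeps any special/reflection tokens visible in the output.
sampling_params = SamplingParams(
    temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False
)

prompt = "### Instruction:\nWhat is the capital of France?\n\n### Response:\n"
preds = model.generate([prompt], sampling_params)
print(preds[0].outputs[0].text)
```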
