ValueError: loading fails at trust_remote_code = resolve_trust_remote_code(...) because trust_remote_code is not set
In which config file do we need to set trust_remote_code=True?
Process SpawnProcess-1:
Traceback (most recent call last):
File "/usr/lib64/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib64/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/vllm/lib64/python3.11/site-packages/vllm/entrypoints/openai/rpc/server.py", line 230, in run_rpc_server
server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/entrypoints/openai/rpc/server.py", line 31, in init
self.engine = AsyncLLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 740, in from_engine_args
engine = cls(
^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 636, in init
self.engine = self._init_engine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 840, in _init_engine
return engine_class(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 272, in init
super().init(*args, **kwargs)
File "/opt/vllm/lib64/python3.11/site-packages/vllm/engine/llm_engine.py", line 247, in init
self.tokenizer = self._init_tokenizer()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/engine/llm_engine.py", line 521, in _init_tokenizer
return init_tokenizer_from_configs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/init.py", line 28, in init_tokenizer_from_configs
return get_tokenizer_group(parallel_config.tokenizer_pool_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/init.py", line 49, in get_tokenizer_group
return tokenizer_cls.from_config(tokenizer_pool_config, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 30, in from_config
return cls(**init_kwargs)
^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/transformers_utils/tokenizer_group/tokenizer_group.py", line 23, in init
self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/vllm/transformers_utils/tokenizer.py", line 122, in get_tokenizer
raise e
File "/opt/vllm/lib64/python3.11/site-packages/vllm/transformers_utils/tokenizer.py", line 103, in get_tokenizer
tokenizer = AutoTokenizer.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 879, in from_pretrained
trust_remote_code = resolve_trust_remote_code(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/vllm/lib64/python3.11/site-packages/transformers/dynamic_module_utils.py", line 678, in resolve_trust_remote_code
raise ValueError(
ValueError: Loading /mnt/models requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
Same issue here; I tried to deploy the model via Hugging Face's Inference Endpoints.
Hey,
Can you please provide more information? What code are you running?
You should load the AutoTokenizer and the AutoModel with trust_remote_code=True:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("openGPT-X/Teuken-7B-instruct-research-v0.4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("openGPT-X/Teuken-7B-instruct-research-v0.4", trust_remote_code=True)
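Once that loads, a quick smoke test confirms the remote tokenizer code is being picked up (the prompt and max_new_tokens below are just illustrative, not anything specific to Teuken):
# hypothetical quick check, reusing the tokenizer/model loaded above
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))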
You should be able to pass trust_remote_code=True when instantiating vllm.LLM, i.e., model = vllm.LLM(..., trust_remote_code=True). It will propagate to the instantiation of the tokenizer as well, which is the only part that has remote code. Alternatively, if you're using the vLLM CLI, supply the --trust-remote-code argument.
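For reference, a minimal sketch of both options (the prompt, sampling settings, and server invocation below are illustrative placeholders, not part of the original report):
from vllm import LLM, SamplingParams
# trust_remote_code=True is forwarded to the tokenizer loading step
llm = LLM(model="openGPT-X/Teuken-7B-instruct-research-v0.4", trust_remote_code=True)
outputs = llm.generate(["Hello, how are you?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
Or, when serving the OpenAI-compatible API:
python -m vllm.entrypoints.openai.api_server --model openGPT-X/Teuken-7B-instruct-research-v0.4 --trust-remote-code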
Thank you for the input and your time, really appreciated. I was able to solve it and had missed the notification earlier, so closing the ticket is fine.