Runtime error: LoRA adapter target modules {'Wqkv'} not found in the base model

[tokenizer files downloaded: tokenizer_config.json, vocab.json, merges.txt, tokenizer.json, added_tokens.json, special_tokens_map.json]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 33, in <module>
    model.load_adapter(finetuned_model_path)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/integrations/peft.py", line 187, in load_adapter
    inject_adapter_in_model(peft_config, self, adapter_name)
  File "/home/user/.local/lib/python3.10/site-packages/peft/mapping.py", line 163, in inject_adapter_in_model
    peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name)
  File "/home/user/.local/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 111, in __init__
    super().__init__(model, config, adapter_name)
  File "/home/user/.local/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 90, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "/home/user/.local/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 250, in inject_adapter
    raise ValueError(
ValueError: Target modules {'Wqkv'} not found in the base model. Please check the target modules and try again.
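The failure happens because the adapter's LoRA config targets a module named 'Wqkv' (a fused query/key/value projection used by some model implementations), but the base model loaded in app.py exposes no submodule with that name. A minimal diagnostic sketch for comparing the two, assuming the same base model as in app.py (the model id below is a placeholder, not taken from the log):

```python
# Diagnostic sketch: list the Linear module names PEFT can actually target.
# "your-base-model-id" is a placeholder; substitute the base model used in app.py.
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("your-base-model-id")

# LoRA's target_modules must match names of existing submodules (typically
# nn.Linear layers) in the base model; compare this list against {'Wqkv'}
# from the adapter's adapter_config.json.
linear_names = sorted({
    name.split(".")[-1]
    for name, module in model.named_modules()
    if isinstance(module, nn.Linear)
})
print(linear_names)
```

If the printed names differ (e.g. q_proj/k_proj/v_proj instead of Wqkv), the adapter was likely trained against a different implementation of the model than the one being loaded here.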
