KeyError: 'mistral' while fine-tuning Mistral-7B-v0.1 in AWS SageMaker

#141
by Tecena

Hi,
I am using the script from https://github.com/huggingface/notebooks/blob/main/sagemaker/24_train_bloom_peft_lora/sagemaker-notebook.ipynb to fine-tune the model. The training job stops with the following error:

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key)
    732         return self._extra_content[key]
    733     if key not in self._mapping:
--> 734         raise KeyError(key)
    735     value = self._mapping[key]
    736     module_name = model_type_to_module_name(key)

KeyError: 'mistral'

I searched for solutions and tried a few: upgrading the transformers library, installing transformers from source, and changing model_type in the model's config.json from "mistral" to "llama".
I am still facing the same issue.
I have been stuck on this for the last 3 days and cannot fine-tune the model; my work is paused because of it.
Please suggest how I can solve this.
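
For reference, a quick sanity check to confirm whether the transformers version actually in use knows about Mistral (a minimal sketch; CONFIG_MAPPING_NAMES is the internal table that the failing __getitem__ indexes, and Mistral support was added in transformers 4.34.0, so anything older raises exactly this KeyError):

import transformers
from transformers.models.auto.configuration_auto import CONFIG_MAPPING_NAMES

print(transformers.__version__)           # should be >= 4.34.0 for Mistral
print("mistral" in CONFIG_MAPPING_NAMES)  # False reproduces this KeyError

If this prints an older version than expected, the upgrade most likely landed in a different environment or kernel than the one running the job.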

Is there any solution for this error? I'm getting the same error too. Below is my code snippet.

Code

import torch  # imported but unused in this snippet
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

base_model_id = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization with nested (double) quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)

Error

KeyError                                  Traceback (most recent call last)
Cell In[7], line 12
      6 base_model_id = "mistralai/Mistral-7B-v0.1"
      7 bnb_config = BitsAndBytesConfig(
      8     load_in_4bit=True,
      9     bnb_4bit_use_double_quant=True,
     10     bnb_4bit_quant_type="nf4"
     11 )
---> 12 model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)

File ~/my_projects/mistral_finetune/mistral_env/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:456, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    453 if kwargs.get("torch_dtype", None) == "auto":
    454     _ = kwargs.pop("torch_dtype")
--> 456 config, kwargs = AutoConfig.from_pretrained(
    457     pretrained_model_name_or_path,
    458     return_unused_kwargs=True,
    459     trust_remote_code=trust_remote_code,
    460     **hub_kwargs,
    461     **kwargs,
    462 )
    464 # if torch_dtype=auto was passed here, ensure to pass it on
    465 if kwargs_orig.get("torch_dtype", None) == "auto":

File ~/my_projects/mistral_finetune/mistral_env/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
...
--> 671 raise KeyError(key)
    672 value = self._mapping[key]
    673 module_name = model_type_to_module_name(key)

KeyError: 'mistral'

Hi, the issue was solved for me by updating the transformers library to 4.38.0.
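
One caveat for the SageMaker case in the original post: upgrading transformers in the notebook environment does not change the training container, whose version is pinned by the HuggingFace estimator. A rough sketch of one way around it, assuming the estimator setup from the linked notebook (the entry point name, instance type, and image versions below are placeholders, not values from this thread): ship a requirements.txt in source_dir containing transformers>=4.34.0, which SageMaker installs before the training script starts.

import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()  # SageMaker execution role

huggingface_estimator = HuggingFace(
    entry_point="run_clm.py",        # training script (assumed name)
    source_dir="scripts",            # also contains requirements.txt with
                                     #   transformers>=4.34.0
                                     # so the container upgrades at startup
    instance_type="ml.g5.4xlarge",   # placeholder instance type
    instance_count=1,
    role=role,
    transformers_version="4.28",     # base image versions (placeholders)
    pytorch_version="2.0",
    py_version="py310",
)
# huggingface_estimator.fit({"training": training_input_path})

The key part is the requirements.txt in source_dir; the transformers_version argument only selects the base container image.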
