Avoid Deprecated Arguments in customized_mini_gpt4: Update for Future Compatibility

#1
by panco1 - opened

When executing customized_mini_gpt4.py, users may see a deprecation warning if execution enters the `if self.low_resource:` branch. The warning states: "The load_in_4bit and load_in_8bit arguments are deprecated and will be removed in the future versions. Please, pass a BitsAndBytesConfig object in quantization_config argument instead."

To prevent this warning, consider the following approach:

import torch
from transformers import BitsAndBytesConfig

if self.low_resource:
    # Pass an explicit quantization config instead of the deprecated load_in_8bit argument
    quantization_config = BitsAndBytesConfig(load_in_8bit=True)
    self.gpt_neox_model = CustomizedGPTNeoXForCausalLM.from_pretrained(
        gpt_neox_model,
        torch_dtype=torch.float16,
        quantization_config=quantization_config,
        device_map={'': device_8bit}
    )
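The warning also covers `load_in_4bit`, so the same migration applies if 4-bit loading is ever used. A minimal sketch of the equivalent 4-bit config, assuming `transformers` is installed (the compute-dtype setting here is an optional addition, not part of the original code):

    import torch
    from transformers import BitsAndBytesConfig

    # Replaces the deprecated load_in_4bit=True argument
    quantization_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,  # optional: dtype for 4-bit matmuls
    )

This object is then passed to `from_pretrained` via `quantization_config=quantization_config`, exactly as in the 8-bit snippet above.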
