Runtime error
43.3-py3-none-manylinux_2_24_x86_64.whl (137.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 137.5/137.5 MB 33.0 MB/s eta 0:00:00
Installing collected packages: bitsandbytes, xformers, trl, peft, accelerate
Successfully installed accelerate-0.33.0 bitsandbytes-0.43.3 peft-0.12.0 trl-0.8.6 xformers-0.0.26.post1

[notice] A new release of pip is available: 22.3.1 -> 24.2
[notice] To update, run: pip install --upgrade pip

Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py:159: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
  warnings.warn(warning_msg)
Traceback (most recent call last):
  File "/home/user/app/app.py", line 15, in <module>
    model = AutoPeftModelForCausalLM.from_pretrained(model_name, quantization_config=quantization_config).to(device)
  File "/usr/local/lib/python3.10/site-packages/peft/auto.py", line 106, in from_pretrained
    base_model = target_class.from_pretrained(base_model_path, revision=base_model_revision, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3165, in from_pretrained
    hf_*********.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
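Note that the pip output above shows accelerate and bitsandbytes were installed, yet transformers' environment check still raises this ImportError. A likely explanation is that bitsandbytes is installed but fails to actually import at runtime (it commonly does so on machines without a supported GPU), which `validate_environment` reports with this generic message. A minimal pre-flight check along these lines can tell the two cases apart; the helper name `missing_quantization_deps` is hypothetical, not part of any library:

```python
import importlib

def missing_quantization_deps(required=("accelerate", "bitsandbytes")):
    """Return the names of required packages that fail to import.

    bitsandbytes can be *installed* yet still fail to import on
    CPU-only hardware, so we attempt a real import rather than only
    checking installed-package metadata.
    """
    missing = []
    for name in required:
        try:
            importlib.import_module(name)
        except Exception:  # ImportError, or a backend/CUDA failure at import time
            missing.append(name)
    return missing

print(missing_quantization_deps())
```

Running this in the Space before calling `AutoPeftModelForCausalLM.from_pretrained` makes it clearer whether the fix is `pip install accelerate`, reinstalling bitsandbytes, or moving the Space to GPU hardware (or dropping the quantization config entirely for CPU inference).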