bitsandbytes-cuda111==0.26.0 not found

#4 opened by tomwjhtom

In Colab, pip install bitsandbytes-cuda111==0.26.0 fails with this error message:

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
ERROR: Could not find a version that satisfies the requirement bitsandbytes-cuda111==0.26.0 (from versions: 0.26.0.post2)
ERROR: No matching distribution found for bitsandbytes-cuda111==0.26.0

Nothing to worry about there; just use pip install bitsandbytes and everything will work as usual. But there is an error when loading the model; this has been happening for the last 3 days.

Is this the error you are referring to?
Running the block below

import torch
from transformers import GPTJForCausalLM

# Load the 8-bit quantized GPT-J checkpoint without first materializing full weights in CPU RAM
gpt = GPTJForCausalLM.from_pretrained("hivemind/gpt-j-6B-8bit", low_cpu_mem_usage=True)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
gpt.to(device)

leads to this error message:

Downloading config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.00k/1.00k [00:00<00:00, 420kB/s]
Downloading pytorch_model.bin: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5.75G/5.75G [01:52<00:00, 54.9MB/s]
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
/home/junhao/play/finetune_gpt_j_6B_8bit.ipynb Cell 11 in <cell line: 1>()
----> 1 gpt = GPTJForCausalLM.from_pretrained("hivemind/gpt-j-6B-8bit", low_cpu_mem_usage=True)
      3 device = 'cuda' if torch.cuda.is_available() else 'cpu'
      4 gpt.to(device)

File ~/play/.env_gptj/lib/python3.8/site-packages/transformers/modeling_utils.py:2110, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2108     init_contexts = [deepspeed.zero.Init(config_dict_or_path=deepspeed_config())] + init_contexts
   2109 elif low_cpu_mem_usage:
-> 2110     init_contexts.append(init_empty_weights())
   2112 with ContextManagers(init_contexts):
   2113     model = cls(config, *model_args, **model_kwargs)

NameError: name 'init_empty_weights' is not defined
hivemind org

This specific error is fixed by installing accelerate.
transformers tries to import init_empty_weights from accelerate, and if accelerate is not installed, it throws this error (reference).
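For illustration, here is a simplified sketch of the guarded-import pattern that produces this NameError (an assumption for clarity, not the exact transformers source; init_empty_weights itself is a real accelerate utility):

try:
    from accelerate import init_empty_weights
except ImportError:
    # accelerate is missing, so init_empty_weights is never defined
    pass

# from_pretrained(..., low_cpu_mem_usage=True) later calls init_empty_weights(),
# which raises NameError when accelerate is absent.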

However, please note that our code was superseded by the load_in_8bit=True feature in transformers
by Younes Belkada and Tim Dettmers. Please see this usage example.
This legacy model was built for transformers v4.15.0 and pytorch 1.11. Newer versions could work, but are not supported.
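For reference, a minimal usage sketch of that feature (assuming a recent transformers release with bitsandbytes and accelerate installed; the prompt is arbitrary):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit=True quantizes the weights with bitsandbytes at load time;
# device_map="auto" lets accelerate place layers across available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))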

@justheuristic If I understand this correctly, using an 8-bit quantized model can now be done just by passing load_in_8bit=True when loading the EleutherAI/gpt-j-6b model with transformers, right?
