Leon committed on
Commit 96204e7
1 Parent(s): 230cb2b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: image-text-to-text
 ---
 
 # Aria-sequential_mlp-bnb_nf4
-BitsAndBytes NF4 quantization from [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp), requires about 13.8 GB of VRAM and runs on a RTX 3090 and RTX 4060 Ti 16 GB.
+BitsAndBytes NF4 quantization from [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp), requires about 15.5 GB of VRAM and runs on a RTX 3090 and (not really practical, only without `device_map=auto`) on a RTX 4060 Ti 16 GB.
 Currently the model is not 5 GB sharded, as this seems to [cause problems](https://stackoverflow.com/questions/79068298/valueerror-supplied-state-dict-for-layers-does-not-contain-bitsandbytes-an) when loading serialized BNB models. This might make it impossible to load the model in free-tier Colab.
 
 ### Installation
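
For reference, a minimal loading sketch matching the `device_map` caveat in the new line; this is not the repo's official instructions (those live in the Installation section below the hunk shown here). Assumptions: the model ID is a placeholder, and `device_map={"": 0}` (pin every layer to GPU 0) stands in for loading "without `device_map=auto`".

```python
# Minimal sketch, not the README's official instructions.
# Assumption: model_id is a placeholder; substitute the actual repo ID.
import torch
from transformers import AutoModelForCausalLM

model_id = "Aria-sequential_mlp-bnb_nf4"  # placeholder

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map={"": 0},      # pin everything to GPU 0 instead of device_map="auto"
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Aria ships custom modeling code
)
```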
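The unchanged paragraph about sharding refers to `save_pretrained`'s default `max_shard_size` of 5 GB; a sketch of the unsharded serialization it describes, under the assumption that the limit is simply raised above the total checkpoint size so a single file is written:

```python
# Sketch: save the quantized model as one unsharded file, sidestepping the
# linked ValueError that can occur when reloading sharded BNB state dicts.
# Assumption: 30GB comfortably exceeds the NF4 checkpoint size.
model.save_pretrained("Aria-sequential_mlp-bnb_nf4", max_shard_size="30GB")
```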