pszemraj committed on
Commit
95fe631
1 Parent(s): 2082c37

Update README.md

Files changed (1): README.md (+3 -1)
README.md CHANGED
@@ -20,12 +20,14 @@ inference: false
 
 # long-t5-tglobal-xl-8b: 8-bit quantized version
 
-This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model, called `long-t5-tglobal-xl-8b`. The model has been compressed using `bitsandbytes` and can be loaded with low memory usage.
+This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model. The model has been compressed using `bitsandbytes` and can be loaded with low memory usage.
 
 Refer to the [original model](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) for all details about the model architecture and training process. For more information on loading 8-bit models, refer to the `4.28.0` [release information](https://github.com/huggingface/transformers/releases/tag/v4.28.0) and the [example repository](https://huggingface.co/ybelkada/bloom-1b7-8bit).
 
 - The total size of the model is only ~3.5 GB, much smaller than the original size.
 - This allows for low-RAM loading, making it easier to use in memory-limited environments.
+- At the time of writing, `bitsandbytes` 8-bit loading only works on GPU.
+
 
 ## Basic Usage
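The 8-bit loading workflow the diff refers to (transformers `4.28.0`-style `load_in_8bit` with `bitsandbytes`) can be sketched as below. This is a minimal sketch, not the repo's actual Basic Usage section; the `load_8bit` helper name is an assumption, and the heavy download/quantization is wrapped in a function because 8-bit loading requires a CUDA GPU.

```python
# Sketch of 8-bit model loading with transformers >= 4.28 and bitsandbytes.
# Hypothetical helper; the actual README's Basic Usage section may differ.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Original (non-quantized) checkpoint referenced in the README
MODEL_NAME = "pszemraj/long-t5-tglobal-xl-16384-book-summary"


def load_8bit(model_name: str = MODEL_NAME):
    """Load the tokenizer and an 8-bit quantized model (CUDA GPU required)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2seqLM = AutoModelForSeq2SeqLM.from_pretrained(
        model_name,
        load_in_8bit=True,   # quantize weights to int8 via bitsandbytes
        device_map="auto",   # place layers on the available GPU(s)
    )
    return tokenizer, model
```

Calling `tokenizer, model = load_8bit()` then uses the model exactly like the fp32 original (e.g. `model.generate(...)`), but with the ~3.5 GB memory footprint described above.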