Tags: Text2Text Generation · Transformers · PyTorch · 5 languages · t5 · flan-ul2 · Inference Endpoints · text-generation-inference
ybelkada (HF staff) committed on
Commit: ac13457
1 Parent(s): 50f8d3a

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -95,7 +95,7 @@ python convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path ~/code/ul2/fla
 
 ## Running the model
 
-For more efficient memory usage, we advise you to load the model in `8bit` using `load_in_8bit` flag as follows:
+For more efficient memory usage, we advise you to load the model in `8bit` using `load_in_8bit` flag as follows (works only under GPU):
 
 ```python
 # pip install accelerate transformers bitsandbytes
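
The README's code block is cut off at the end of the diff hunk above. For orientation, here is a minimal sketch of the 8-bit loading pattern the edited sentence refers to; the `google/flan-ul2` repo id, the prompt, and the generation settings are illustrative assumptions, not text copied from the file:

```python
# pip install accelerate transformers bitsandbytes
# Sketch only: repo id, prompt, and generation settings are assumptions.
from transformers import AutoTokenizer, T5ForConditionalGeneration

# load_in_8bit=True quantizes the weights with bitsandbytes (GPU required);
# device_map="auto" lets accelerate place the layers on the available GPU(s).
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-ul2",
    device_map="auto",
    load_in_8bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")

input_ids = tokenizer(
    "Answer the following question: what is the boiling point of water?",
    return_tensors="pt",
).input_ids.to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```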