updated model card: removed token parameter from example
README.md
CHANGED
@@ -78,7 +78,7 @@ print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1
 If you are facing issues when loading the model, you can try to load it quantized:
 
 ```python
-model = AutoModelForCausalLM.from_pretrained(model_id,
+model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
 ```
 
 *Note*: The model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries
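For reference, a minimal end-to-end sketch of how the updated snippet might be used. The repo id and the imports do not appear in the diff hunk itself, so the `model_id` below is a hypothetical placeholder; the *bitsandbytes* and *accelerate* libraries from the note are assumed to be installed:

```python
# Minimal sketch only; the repo id is a hypothetical placeholder,
# not taken from this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # hypothetical; replace with the model card's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit=True quantizes the weights to 8-bit via bitsandbytes,
# reducing memory usage when full-precision loading fails.
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
```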