Commit 24a7b71
Parent(s): dd53ae4
Update README.md
README.md CHANGED
@@ -96,8 +96,10 @@ If you are facing issues when loading the model, you can try to load it **Quanti
 model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
 ```
 
-*Note*:
-
+*Note*:
+1) The model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries
+2) The Tokenizer, by default, adds at the beginning of the prompt the <BOS> token. If that is not the case, add as a starting token the *<s>* string.
+
 ## Evaluation
 
 <!-- This section describes the evaluation protocols and provides the results. -->