Update README.md
README.md
CHANGED
@@ -21,8 +21,15 @@ Llama-3.1-Centaur-70B is a foundation model of cognition that can predict
This is the low-rank adapter which runs with unsloth on a single 80GB GPU.

```python
-import
-
+from unsloth import FastLanguageModel
+
+model_name = "marcelbinz/Llama-3.1-Centaur-70B-adapter"
+model, tokenizer = FastLanguageModel.from_pretrained(
+  model_name = model_name,
+  max_seq_length = 32768,
+  dtype = None,
+  load_in_4bit = True,
+)
```

Alternatively, you can also directly use the [merged model](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B).
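With the adapter loaded as in the snippet above, inference goes through the usual `generate` call. A minimal sketch, assuming unsloth's `FastLanguageModel.for_inference` switch and the standard transformers generation API; the prompt and sampling settings are placeholders, not the prescribed Centaur prompt format:

```python
# Switch the loaded model into unsloth's inference mode.
FastLanguageModel.for_inference(model)

# Hypothetical prompt: an experiment transcribed in natural language.
prompt = "You will repeatedly choose between two slot machines. You press <<"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=1, do_sample=True, temperature=1.0)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```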
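If you use the merged checkpoint instead of the adapter, it should load through the standard transformers Auto classes. A sketch under illustrative settings (bf16 weights sharded across available GPUs; at 70B this needs more than a single 80GB card unless you also quantize):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcelbinz/Llama-3.1-Centaur-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative precision; adjust to your hardware
    device_map="auto",           # shard across the GPUs that are visible
)
```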