Update README.md
README.md (changed)

````diff
@@ -19,14 +19,10 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig,
 
 base_model = "bertin-project/bertin-gpt-j-6B-alpaca"
 tokenizer = AutoTokenizer.from_pretrained(base_model)
-model = AutoModelForCausalLM.from_pretrained(
-    base_model,
-    load_in_8bit=True,
-    device_map="auto",
-)
+model = AutoModelForCausalLM.from_pretrained(base_model)
 ```
 
-
+For generation, we can either use `pipeline()` or the model's `.generate()` method. Remember that the prompt needs a **Spanish** template:
 
 ```python
 # Generate responses
````