osiria committed on
Commit
b892b1c
1 Parent(s): 2978802

Update README.md

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -29,10 +29,23 @@ The model has ~354M parameters and a vocabulary of 50.335 tokens. It is a founda
 
 <h3>Quick usage</h3>
 
-In order to use the model for inference, the following pipeline is needed:
+In order to use the model for inference on GPU, the following pipeline is needed:
 
 ```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+from transformers import pipeline
 
+tokenizer = AutoTokenizer.from_pretrained("osiria/diablo-italian-base-354m")
+model = AutoModelForCausalLM.from_pretrained("osiria/diablo-italian-base-354m", torch_dtype=torch.float16)
+
+device = torch.device("cuda")
+model = model.to(device)
+
+pipeline_nlg = pipeline("text-generation", model = model, tokenizer = tokenizer, device = 0)
+pipeline_nlg("Ciao, mi chiamo Marco Rossi e")
+
+# [{'generated_text': 'Ciao, mi chiamo Marco Rossi e sono un ragazzo di 23 anni.'}]
 ```
 
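As context for the `torch_dtype=torch.float16` argument in the added snippet: loading in half precision roughly halves the weight memory compared with the default float32. A back-of-the-envelope sketch, assuming the ~354M parameter count from the README excerpt above (activations and the KV cache used during generation are ignored):

```python
# Rough weight-memory estimate for a ~354M-parameter model.
# Assumption: 354M parameters, taken from the model card; this counts
# weights only, not activations, optimizer state, or the KV cache.
n_params = 354_000_000
bytes_per_param_fp32 = 4  # float32: 4 bytes per parameter
bytes_per_param_fp16 = 2  # float16: 2 bytes per parameter (torch_dtype=torch.float16)

gb_fp32 = n_params * bytes_per_param_fp32 / 1e9
gb_fp16 = n_params * bytes_per_param_fp16 / 1e9
print(f"fp32: ~{gb_fp32:.2f} GB, fp16: ~{gb_fp16:.2f} GB")
# fp32: ~1.42 GB, fp16: ~0.71 GB
```

The ~0.7 GB figure is why the half-precision load comfortably fits consumer GPUs, which is consistent with the snippet moving the model to `cuda`.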