JacopoAbate committed on
Commit 288d09c
1 Parent(s): b44058d

Update README.md

Files changed (1)
  1. README.md +8 -12
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 
 XXXX is an updated version of [Mistral-7B-v0.2](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf), specifically fine-tuned with SFT and LoRA adjustments.
 
-- It's trained both on publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
+- It's trained on publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
 - it's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
 
 # Evaluation
@@ -29,26 +29,22 @@ We evaluated the model using the same test sets as used for the Open Ita LLM Lea
 
 ## Usage
 
-Be sure to have transformers, peft and sentencepiece installed
+Be sure to have transformers and torch installed
 
 ```python
-pip install transformers peft sentencepiece
+pip install transformers torch
 ```
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-from peft import PeftModel, PeftConfig
 
-device = "cuda"
+device = "cuda" # change to cpu if you have no gpu
 
-config = PeftConfig.from_pretrained("MoxoffSpA/xxxx")
-model = AutoModelForCausalLM.from_pretrained("alpindale/Mistral-7B-v0.2-hf")
-tokenizer = AutoTokenizer.from_pretrained("alpindale/Mistral-7B-v0.2-hf")
-
-model = PeftModel.from_pretrained(model, "MoxoffSpA/xxxx")
+model = AutoModelForCausalLM.from_pretrained("MoxoffSpA/xxxx")
+tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/xxxx")
 
 messages = [
-{"role": "user", "content": "Qual è il tuo piatto preferito??"},
+{"role": "user", "content": "Qual è il tuo piatto preferito?"},
 {"role": "assistant", "content": "Beh, ho un debole per una buona porzione di risotto allo zafferano. È un piatto che si distingue per il suo sapore ricco e il suo bellissimo colore dorato, rendendolo irresistibile!"},
 {"role": "user", "content": "Hai delle ricette con il risotto che consigli?"}
 ]
@@ -58,7 +54,7 @@ encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
 model_inputs = encodeds.to(device)
 model.to(device)
 
-generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
+generated_ids = model.generate(model_inputs, max_new_tokens=250, do_sample=True)
 decoded = tokenizer.batch_decode(generated_ids)
 print(decoded[0])
 ```
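The updated usage keeps `tokenizer.apply_chat_template` to serialize the `messages` list into the model's prompt format. A minimal pure-Python sketch of what that step produces, assuming the standard Mistral `[INST] … [/INST]` wrapping (the `format_chat` helper below is a hypothetical illustration, not the tokenizer's actual template, and runs without downloading the model):

```python
def format_chat(messages):
    # Assumption: the chat template follows Mistral's [INST] ... [/INST]
    # convention; this stands in for tokenizer.apply_chat_template.
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns are appended verbatim and closed with </s>.
            prompt += f" {msg['content']}</s>"
    return prompt

messages = [
    {"role": "user", "content": "Qual è il tuo piatto preferito?"},
    {"role": "assistant", "content": "Ho un debole per il risotto allo zafferano."},
    {"role": "user", "content": "Hai delle ricette con il risotto che consigli?"},
]
print(format_chat(messages))
```

Because the serialized prompt ends with an open `[/INST]`, generation continues as the assistant's answer to the last user turn.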