---
license: apache-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
tags:
- Italian
- QLoRA
- finetuning
- Text Generation
pipeline_tag: text-generation
---

# 🇮🇹 Loquace-Wizard-13B v0.1 🇮🇹 

Loquace is an Italian-speaking, instruction-finetuned large language model. 🇮🇹

Loquace-Wizard-13B's distinctive features:

- The first 13B model specifically finetuned for Italian.
- It is quite good at following instructions in Italian.
- Responds well to prompt engineering.
- Works well in a RAG (Retrieval-Augmented Generation) setup; see the sketch after this list.
- It was trained on the relatively raw [Loquace-102K](https://huggingface.co/datasets/cosimoiaia/Loquace-102k) dataset using QLoRA, with WizardLM-13B-Instruct as the base model.
- Training took only 8 hours on a single RTX 3090 GPU on [Genesis Cloud](https://gnsiscld.co/26qhlf), costing a little more than <b>2 euros</b>!
- It is <b><i>truly open source</i></b>: the model, dataset, and code to replicate the results are fully released.
- Created in a garage in the south of Italy.
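
As a minimal sketch of the RAG usage mentioned above, retrieved passages can simply be stuffed into the instruction template before querying the model. The prompt wording and the `generate_rag_prompt` helper below are illustrative assumptions, not part of the official codebase:

```python
# Hypothetical RAG prompt builder: prepend retrieved Italian passages to the
# instruction before querying the model. How the passages are retrieved
# (BM25, embeddings, ...) is out of scope; `passages` stands in for that output.
def generate_rag_prompt(question: str, passages: list[str]) -> str:
    context = "\n\n".join(passages)
    # Instruction reads: "Answer the question using only the following context."
    return f"""### Instruction: Rispondi alla domanda usando solo il contesto seguente.

Contesto:
{context}

Domanda: {question}

### Response:
"""
```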

The Loquace Italian LLM models are created with the goal of democratizing AI and LLMs in the Italian landscape.

<b>No more need for expensive GPUs, large funding, big corporations, or ivory-tower institutions: just download the code and train on your own dataset on your own PC (or on a cheap and reliable cloud provider like [Genesis Cloud](https://gnsiscld.co/26qhlf)).</b>

### Fine-tuning Instructions:
The related code can be found at:
https://github.com/cosimoiaia/Loquace
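
For orientation, here is a minimal sketch of a QLoRA setup (a 4-bit quantized base model plus small trainable low-rank adapters). The base checkpoint id and the hyperparameters below are illustrative assumptions; the actual training script lives in the repository above:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base_model = "WizardLM-13B-Instruct"  # base named in this card; exact Hub id may differ

# Load the frozen base model quantized to 4-bit NF4 (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these weights are updated during training,
# which is what makes finetuning on a single RTX 3090 feasible.
lora_config = LoraConfig(
    r=8,                                  # assumed adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumed target projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# The instruction data comes from the Loquace-102k dataset.
dataset = load_dataset("cosimoiaia/Loquace-102k", split="train")
```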

## Inference:

```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer


def generate_prompt(instruction):
    # Loquace uses a simple Alpaca-style instruction template.
    return f"""### Instruction: {instruction}

### Response:
"""


model_name = "cosimoiaia/Loquace-Wizard-13B"  # assumed Hub repo id; or a local path such as "."

model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = True

tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

prompt = generate_prompt("Chi era Dante Alighieri?")  # "Who was Dante Alighieri?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    num_beams=2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=2046,
    early_stopping=True,
)
# Strip the prompt and print only the model's answer.
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())
```
## Model Author:
Cosimo Iaia <cosimo.iaia@gmail.com>