---
language:
- it
license: apache-2.0
tags:
- text-generation-inference
- text generation
---

# Mistral-7B-v0.1 for Italian Language Text Generation

## Model Architecture
Mistral-7B-v0.1 is a 7-billion-parameter, decoder-only transformer model. This version is adapted for the Italian language and handles a variety of tasks, including translation, summarization, and text completion.
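
As a quick way to inspect the architecture details, the configuration can be loaded without downloading the weights. This is a minimal sketch using `transformers`; the printed values come from the hub config, not from this card:

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect architecture details.
config = AutoConfig.from_pretrained("DeepMount00/Mistral-Ita-7b")

print(config.model_type)           # e.g. "mistral"
print(config.num_hidden_layers)    # transformer depth
print(config.hidden_size)          # hidden/embedding dimension
print(config.num_attention_heads)  # attention heads per layer
```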


## Evaluation

For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).

Here's a breakdown of the performance metrics:

| Model                | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:---------------------|:----------------------|:----------------|:---------------------|:--------|
| **Mistral-Ita-7b**   | 0.6734                | 0.5466          | 0.5334               | 0.5844  |
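
These scores use the task names from the EleutherAI `lm-evaluation-harness`. Below is a sketch of how such numbers could be reproduced with the harness's Python API, assuming a recent `lm_eval` package and that the leaderboard uses these task names unchanged:

```python
import lm_eval

# Zero-shot runs for the accuracy-normalized tasks reported above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=DeepMount00/Mistral-Ita-7b,dtype=bfloat16",
    tasks=["hellaswag_it", "arc_it"],
)

# m_mmlu_it is reported 5-shot, matching the table.
mmlu_results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=DeepMount00/Mistral-Ita-7b,dtype=bfloat16",
    tasks=["m_mmlu_it"],
    num_fewshot=5,
)

print(results["results"])
print(mmlu_results["results"])
```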

---


## Quantized 4-Bit Version

A quantized 4-bit version of the model is available. Quantization reduces the precision of the model's weights to 4 bits, which lowers memory usage and can speed up inference at a small cost in accuracy. This makes the model practical to deploy on devices with limited computational power or memory.

For more details and to access the model, visit the following link: [Mistral-Ita-7b-GGUF 4-bit version](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF).
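
As an illustration, a GGUF file from that repository can be loaded with `llama-cpp-python`. This is a sketch only; the filename below is a glob-pattern placeholder, so check the repository for the actual quantized file name:

```python
from llama_cpp import Llama

# Download a GGUF file from the hub and load it with llama.cpp.
# The filename is a placeholder pattern; pick an actual file from the repo.
llm = Llama.from_pretrained(
    repo_id="DeepMount00/Mistral-Ita-7b-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matching a 4-bit quantized file
    n_ctx=2048,               # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Come si apre un file json in python?"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```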

---

## How to Use
The snippet below loads the model with `transformers` and generates an answer to an Italian prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

MODEL_NAME = "DeepMount00/Mistral-Ita-7b"

# Load the model in bfloat16 and switch it to inference mode.
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval()
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def generate_answer(prompt):
    messages = [{"role": "user", "content": prompt}]
    # Format the conversation with the model's chat template and tokenize it.
    model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
    # temperature=0.001 with do_sample=True makes decoding effectively greedy.
    generated_ids = model.generate(
        model_inputs,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.001,
        eos_token_id=tokenizer.eos_token_id,
    )
    decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    return decoded[0]

prompt = "Come si apre un file json in python?"  # "How do you open a JSON file in Python?"
answer = generate_answer(prompt)
print(answer)
```
---
## Developer
Michele Montebovi