---
language:
- it
license: apache-2.0
tags:
- text-generation-inference
- text generation
---

# Mistral-7B-v0.1 for Italian Language Text Generation

## Overview
`Mistral-7B-v0.1` is a state-of-the-art large language model (LLM) pre-trained for text generation. With 7 billion parameters, it outperforms some larger models, such as Llama 2 13B, on common benchmarks.

## Model Architecture
Mistral-7B-v0.1 is a transformer-based model that can handle a variety of tasks, including translation, summarization, and text completion. This version is tailored to the Italian language and can be further fine-tuned for specific tasks.

## Quantized version
[DeepMount00/Mistral-Ita-7b-GGUF](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF)
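
If you prefer to run the quantized build (for example on CPU), it can be loaded with `llama-cpp-python`. This is a minimal sketch under assumptions: the exact `.gguf` filename below is hypothetical, so check the file list in the GGUF repository and adjust it.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files from the quantized repository.
# NOTE: the filename is an assumption; pick the real one from the repo's file list.
gguf_path = hf_hub_download(
    repo_id="DeepMount00/Mistral-Ita-7b-GGUF",
    filename="mistral-ita-7b.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)

# Same [INST] ... [/INST] prompt format used in the example further below.
out = llm("<s> [INST]Scrivi una breve frase in italiano.\n[/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```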

## Unique Features for Italian
- **Tailored Vocabulary**: The model's vocabulary is fine-tuned to encompass the nuances and diversity of the Italian language.
- **Enhanced Understanding**: Mistral-7B is specifically trained to grasp and generate Italian text, ensuring high linguistic and contextual accuracy.
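
To get a feel for how the tokenizer segments Italian text, it can be inspected directly with the standard `transformers` API; the sentence below is just an arbitrary example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Mistral-Ita-7b")

# Show how an Italian sentence is split into subword tokens.
print(tokenizer.tokenize("La libreria è vicino alla stazione centrale."))
```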

## Capabilities
- **Vocabulary Size**: 32,000 tokens, allowing for a broad range of inputs and outputs.
- **Hidden Size**: 4,096 dimensions, providing rich internal representations.
- **Intermediate Size**: 14,336 dimensions, which contributes to the model's ability to process and generate complex sentences.
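
These figures can be verified directly from the published model configuration using the standard `transformers` `AutoConfig` API:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DeepMount00/Mistral-Ita-7b")
print(config.vocab_size)         # 32000
print(config.hidden_size)        # 4096
print(config.intermediate_size)  # 14336
```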

## How to Use
The following example shows how to use the model for Italian text generation:

```python
import transformers
from transformers import TextStreamer
import torch

model_name = "DeepMount00/Mistral-Ita-7b"

# Load the tokenizer and model; bfloat16 plus device_map="auto" places the weights on the available GPU(s).
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto").eval()

def stream(user_prompt):
    device = "cuda:0"
    system_prompt = ''
    # Mistral-style instruction format: <s> [INST] ... [/INST]
    B_INST, E_INST = "<s> [INST]", "[/INST]"
    prompt = f"{system_prompt}{B_INST}{user_prompt.strip()}\n{E_INST}"
    inputs = tokenizer([prompt], return_tensors="pt").to(device)
    # Stream generated tokens to stdout, skipping the prompt and special tokens.
    streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    # Near-zero temperature makes sampling effectively greedy (deterministic) decoding.
    _ = model.generate(**inputs, streamer=streamer, max_new_tokens=300, temperature=0.0001,
                        repetition_penalty=1.2, eos_token_id=2, do_sample=True, num_return_sequences=1)

# Example prompt (in Italian): "Write a Python function that multiplies every value in the list by 2:"
domanda = """Scrivi una funzione python che moltiplica per 2 tutti i valori della lista:"""
contesto = """
[-5, 10, 15, 20, 25, 30, 35]
"""

prompt = domanda + "\n" + contesto

stream(prompt)
```
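
If you need the completion as a Python string rather than streamed to stdout (for example in a batch pipeline), the same prompt format works without a `TextStreamer`. A minimal sketch reusing the `tokenizer`, `model`, and `prompt` defined above (the helper name `generate_text` is just an illustration):

```python
def generate_text(user_prompt, max_new_tokens=300):
    # Same [INST] ... [/INST] prompt format as the streaming example above.
    full_prompt = f"<s> [INST]{user_prompt.strip()}\n[/INST]"
    inputs = tokenizer([full_prompt], return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                do_sample=False, repetition_penalty=1.2, eos_token_id=2)
    # Decode only the newly generated tokens, dropping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(generate_text(prompt))
```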
---
## Developer
Michele Montebovi