Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# Mistral-Ita-7b - GGUF
- Model creator: https://huggingface.co/DeepMount00/
- Original model: https://huggingface.co/DeepMount00/Mistral-Ita-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-Ita-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-Ita-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-Ita-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-Ita-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-Ita-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-Ita-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-Ita-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-Ita-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-Ita-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-Ita-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-Ita-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-Ita-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-Ita-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-Ita-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-Ita-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-Ita-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-Ita-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-Ita-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-Ita-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-Ita-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-Ita-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf/blob/main/Mistral-Ita-7b.Q6_K.gguf) | Q6_K | 5.53GB |
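These GGUF files are intended for llama.cpp-compatible runtimes. As a quick illustration (not an official recipe for this repo), the sketch below downloads one of the quants listed above and runs it with the llama-cpp-python bindings; it assumes `pip install llama-cpp-python huggingface_hub`, and the chosen quant and generation settings are just examples:

```python
# Minimal sketch: fetch one quant from this repo and run it locally.
# Assumes `pip install llama-cpp-python huggingface_hub`; the file choice
# (Q4_K_M) and settings below are illustrative, not prescribed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/DeepMount00_-_Mistral-Ita-7b-gguf",
    filename="Mistral-Ita-7b.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Come si apre un file json in python?"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```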
Original model description:
---
language:
- it
license: apache-2.0
tags:
- text-generation-inference
- text generation
datasets:
- DeepMount00/llm_ita_ultra
---
# Mistral-7B-v0.1 for Italian Language Text Generation
## Model Architecture
- **Base Model:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Specialization:** Italian Language
## Evaluation
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).
Here's a breakdown of the performance metrics:
| Benchmark | hellaswag_it (acc_norm) | arc_it (acc_norm) | m_mmlu_it (5-shot acc) | Average |
|:-------------------|:------------------------|:------------------|:-----------------------|:--------|
| **Mistral-Ita-7b** | 0.6731 | 0.5502 | 0.5364 | 0.5866 |
---
**Quantized 4-Bit Version Available**
A quantized 4-bit version of the model is available. By storing the model's weights at 4-bit precision, it reduces memory usage and can speed up inference, which makes it particularly useful for deploying the model on devices with limited compute or memory.
For more details and to access the model, visit the following link: [Mistral-Ita-7b-GGUF 4-bit version](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF).
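The GGUF files in this repo target llama.cpp-style runtimes. If you would rather stay entirely within `transformers`, a 4-bit load via bitsandbytes is an alternative route; the sketch below is an assumption on my part rather than the author's recipe, and it requires a CUDA GPU plus `pip install bitsandbytes accelerate`:

```python
# Hypothetical alternative: 4-bit quantized load in transformers via bitsandbytes.
# Not an official recipe from the model author; requires a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits on load
    bnb_4bit_compute_dtype=torch.bfloat16,  # keep compute in bf16 for quality
)
model = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Mistral-Ita-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Mistral-Ita-7b")
```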
---
## How to Use
To use this model for Italian text generation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

MODEL_NAME = "DeepMount00/Mistral-Ita-7b"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval()
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def generate_answer(prompt):
    # Wrap the prompt in the chat template the model was fine-tuned with.
    messages = [
        {"role": "user", "content": prompt},
    ]
    model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
    # temperature=0.001 with do_sample=True makes generation effectively greedy.
    generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True,
                                   temperature=0.001, eos_token_id=tokenizer.eos_token_id)
    # Note: the decoded string includes the prompt as well as the reply.
    decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    return decoded[0]

prompt = "Come si apre un file json in python?"
answer = generate_answer(prompt)
print(answer)
```
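The snippet above decodes the full generated sequence, so the returned string repeats the prompt before the answer. If you only want the model's reply, a small variant (the helper name here is illustrative, not from the original card) can slice the prompt tokens off before decoding:

```python
def generate_reply_only(prompt):
    # Same setup as generate_answer above; the helper name is illustrative.
    messages = [{"role": "user", "content": prompt}]
    model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
    generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True,
                                   temperature=0.001, eos_token_id=tokenizer.eos_token_id)
    # Drop the prompt tokens so only the newly generated text is decoded.
    reply_ids = generated_ids[:, model_inputs.shape[1]:]
    return tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
```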
---
## Developer
Michele Montebovi