|
---
license: apache-2.0
language:
- it
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- mistral
- trl
- sft
base_model: sapienzanlp/Minerva-3B-base-v1.0
datasets:
- mchl-labs/stambecco_data_it
widget:
- text: "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta. \n### Istruzione:\nSuggerisci un'attività serale romantica\n\n### Input:\n\n### Risposta:"
  example_title: Example 1
---
|
|
|
# Model Card for Minerva-3B-Instruct-v1.0 |
|
|
|
Minerva-3B-Instruct-v1.0 is an instruction-tuned version of Minerva-3B-base-v1.0, fine-tuned to understand and follow instructions in Italian.
|
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
|
|
|
- **Developed by:** Walid Iguider |
|
- **Model type:** Instruction Tuned |
|
- **License:** cc-by-nc-sa-4.0 |
|
- **Finetuned from model:** [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0), developed by [Sapienza NLP](https://nlp.uniroma1.it) in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/)
|
|
|
## Evaluation |
|
|
|
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard). |
|
|
|
Here's a breakdown of the performance metrics: |
|
| hellaswag_it (acc_norm) | arc_it (acc_norm) | m_mmlu_it (5-shot acc) | Average |
|:------------------------|:------------------|:-----------------------|:--------|
| 0.5187                  | 0.3045            | 0.2612                  | 0.3615  |
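
If you want to reproduce similar numbers locally, one option is EleutherAI's lm-evaluation-harness. The sketch below is an assumption-laden example: it presumes the `hellaswag_it`, `arc_it`, and `m_mmlu_it` tasks are available in your installed harness version, and that the few-shot settings (0-shot for the acc_norm tasks, 5-shot for m_mmlu_it) match the leaderboard's exact configuration.

```python
# Hedged sketch: evaluating the model with lm-evaluation-harness.
# Task names and few-shot settings are assumptions and may differ
# from the exact leaderboard configuration.
from lm_eval import simple_evaluate

MODEL_ARGS = "pretrained=walid-iguider/Minerva-3B-Instruct-v1.0,dtype=auto"

# hellaswag_it and arc_it are reported as acc_norm (assumed zero-shot)
zero_shot = simple_evaluate(
    model="hf",
    model_args=MODEL_ARGS,
    tasks=["hellaswag_it", "arc_it"],
    num_fewshot=0,
)

# m_mmlu_it is reported as 5-shot accuracy
five_shot = simple_evaluate(
    model="hf",
    model_args=MODEL_ARGS,
    tasks=["m_mmlu_it"],
    num_fewshot=5,
)

print(zero_shot["results"])
print(five_shot["results"])
```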
|
|
|
|
|
## Uses |
|
|
|
|
### Sample Code |
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Fix the seed for reproducible generations
torch.random.manual_seed(0)

# Prompt following the instruction template the model was fine-tuned on
prompt = """Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta.

### Istruzione:
Suggerisci un'attività serale romantica

### Input:

### Risposta:"""

# Load the tokenizer and model
model_id = "walid-iguider/Minerva-3B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)

# Greedy decoding: with do_sample=False, temperature is irrelevant
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "do_sample": False,
}

# Run a text-generation pipeline with the instruction-tuned model
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

output = pipe(prompt, **generation_args)
print(output[0]["generated_text"])
```
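
Because the model expects the Istruzione/Input/Risposta template shown above, it can be convenient to wrap prompt construction in a small helper. `build_prompt` below is a name introduced here for illustration, not part of the model's API:

```python
def build_prompt(instruction: str, model_input: str = "") -> str:
    """Format a request using the Istruzione/Input/Risposta template the
    model was fine-tuned on. Illustrative helper, not part of the model's API."""
    return (
        "Di seguito è riportata un'istruzione che descrive un'attività, "
        "abbinata ad un input che fornisce ulteriore informazione. "
        "Scrivi una risposta che soddisfi adeguatamente la richiesta.\n"
        f"### Istruzione:\n{instruction}\n\n"
        f"### Input:\n{model_input}\n\n"
        "### Risposta:"
    )

# Reuse the pipeline and generation_args from the sample above
output = pipe(build_prompt("Suggerisci un libro da leggere in vacanza"), **generation_args)
print(output[0]["generated_text"])
```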
|
|