---
license: apache-2.0
pipeline_tag: text-generation
language:
- it
- en
tags:
- sft
- dpo
base_model:
- sapienzanlp/Minerva-7B-base-v1.0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- Babelscape/ALERT
- efederici/evol-dpo-ita
inference:
  parameters:
    temperature: 0.4
    do_sample: true
widget:
- text: Chi sei?
  example_title: Example 1
library_name: transformers
---
<div style="text-align: center; display: flex; flex-direction: column; align-items: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/5f0b462819cb630495b814d7/DVA4MnFUs3UHBnTrX9jG6.png" style="max-width: 550px; height: auto;">
</div>
# Model Card for Minerva-7B-cpt-v1.0-dpo
Minerva is the first family of **LLMs pretrained from scratch on Italian** developed by [Sapienza NLP](https://nlp.uniroma1.it)
in collaboration with [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) and [CINECA](https://www.cineca.it/).
Notably, the Minerva models are truly open (data and model) Italian-English LLMs, with approximately half of the pretraining data
consisting of Italian text.
* [Minerva LLMs - website](https://nlp.uniroma1.it/minerva/)
## Description
This is the model card for **Minerva-7B-base-v1.0-dpo**, a 7 billion parameter model trained on almost 2.5 trillion tokens (1.14 trillion in Italian,
1.14 trillion in English, and 200 billion in code).
This model is part of the Minerva LLM family:
* [Minerva-350M-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-350M-base-v1.0)
* [Minerva-1B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-1B-base-v1.0)
* [Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0)
* [Minerva-7B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-7B-base-v1.0-1110)
* [Minerva-7B-base-v1.0-sft](https://huggingface.co/sapienzanlp/Minerva-7B-cpt-v1.0-mixed_recipe7-3epochs-safety-handcraft)
* [Minerva-7B-base-v1.0-dpo](https://huggingface.co/sapienzanlp/Minerva-7B-cpt-v1.0-mixed_recipe7-3epochs-safety-handcraft-DPO-alert-uf-evol-temp)
## 🚨⚠️🚨 Bias, Risks, and Limitations 🚨⚠️🚨
*This section identifies foreseeable harms and misunderstandings.*
This is a chat model that has been aligned to some degree. However, the model may still:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Racist and sexist content
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect or outdated information as if it were factual
- Generate irrelevant or repetitive outputs
We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data.
For more information about this issue, please refer to our survey:
* [Biases in Large Language Models: Origins, Inventory, and Discussion](https://dl.acm.org/doi/full/10.1145/3597307)
## How to use Minerva with Hugging Face transformers
```python
import torch
import transformers

model_id = "sapienzanlp/Minerva-7B-base-v1.0"

# Initialize the text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Input conversation for the model.
input_conv = [{"role": "user", "content": "Qual è la capitale dell'Italia?"}]

# Compute the output.
output = pipeline(
    input_conv,
    max_new_tokens=128,
)
output
```
Output:
```
[{'generated_text': "La capitale dell'Italia è la città di Roma, che si trova a [...]"}]
```
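For finer control over generation, the model can also be loaded directly with `AutoModelForCausalLM`. The snippet below is a minimal sketch, assuming the checkpoint ships a chat template (the instruct/DPO checkpoints do); the sampling temperature mirrors the `temperature: 0.4` setting declared in this card's metadata.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sapienzanlp/Minerva-7B-base-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "Qual è la capitale dell'Italia?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a completion (temperature mirrors the widget settings above).
with torch.no_grad():
    output_ids = model.generate(
        input_ids, max_new_tokens=128, do_sample=True, temperature=0.4
    )

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```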
## Model Architecture
Minerva-7B-base-v1.0 is a Transformer model based on the Mistral architecture.
Please look at the configuration file for a detailed breakdown of the hyperparameters we chose for this model.
The Minerva LLM family is composed of:
| Model Name | Tokens | Layers | Hidden Size | Attention Heads | KV Heads | Sliding Window | Max Context Length |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Minerva-350M-base-v1.0 | 70B (35B it + 35B en) | 16 | 1152 | 16 | 4 | 2048 | 16384 |
| Minerva-1B-base-v1.0 | 200B (100B it + 100B en) | 16 | 2048 | 16 | 4 | 2048 | 16384 |
| Minerva-3B-base-v1.0 | 660B (330B it + 330B en) | 32 | 2560 | 32 | 8 | 2048 | 16384 |
| Minerva-7B-base-v1.0 | 2.48T (1.14T it + 1.14T en + 200B code) | 32 | 4096 | 32 | 8 | None | 4096 |
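As a quick cross-check of the table above, the architecture hyperparameters can be read directly from the model configuration (attribute names follow the Mistral configuration in `transformers`):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sapienzanlp/Minerva-7B-base-v1.0")
print(config.num_hidden_layers)         # layers
print(config.hidden_size)               # hidden size
print(config.num_attention_heads)       # attention heads
print(config.num_key_value_heads)       # KV heads
print(config.max_position_embeddings)   # max context length
print(config.vocab_size)                # vocabulary size
```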
## Model Training
Minerva-7B-base-v1.0 was trained using [llm-foundry 0.8.0](https://github.com/riccorl/llm-foundry) from [MosaicML](https://mosaicml.com/). The hyperparameters used are the following:
| Model Name | Optimizer | lr | betas | eps | weight decay | Scheduler | Warmup Steps | Batch Size (Tokens) | Total Steps |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Minerva-350M-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 16,690 |
| Minerva-1B-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 47,684 |
| Minerva-3B-base-v1.0 | Decoupled AdamW | 2e-4 | (0.9, 0.95) | 1e-8 | 0.0 | Cosine | 2% | 4M | 157,357 |
| Minerva-7B-base-v1.0 | AdamW | 3e-4 | (0.9, 0.95) | 1e-5 | 0.1 | Cosine | 2000 | 4M | 591,558 |
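For reference, a minimal sketch of the 7B optimizer and schedule settings from the table above, expressed with standard PyTorch and `transformers` utilities; this is illustrative only, not the actual llm-foundry training code.
```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Placeholder parameters; in practice these are the model's parameters.
params = [torch.nn.Parameter(torch.randn(8, 8))]

# AdamW with lr 3e-4, betas (0.9, 0.95), eps 1e-5, weight decay 0.1 (7B row above).
optimizer = torch.optim.AdamW(
    params, lr=3e-4, betas=(0.9, 0.95), eps=1e-5, weight_decay=0.1
)

# Cosine schedule with 2,000 warmup steps over 591,558 total steps.
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=2_000, num_training_steps=591_558
)
```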
### SFT Training
The SFT model was trained using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory). The data mix, reported as the number of training samples drawn from each source per language (code, English, Italian), was the following:
| Dataset | Source | Code | English | Italian |
|--------------------------------------|------------------------------------------------------------------------|----------|---------|---------|
| Alpaca-cleaned | [Link](https://huggingface.co/datasets/yahma/alpaca-cleaned) | 0 | 50,000 | 0 |
| Databricks-dolly-15k | [Link](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | 0 | 15,011 | 0 |
| No-robots | [Link](https://huggingface.co/datasets/HuggingFaceH4/no_robots) | 0 | 9,499 | 0 |
| OASST2 | [Link](https://huggingface.co/datasets/OpenAssistant/oasst2) | 0 | 29,000 | 528 |
| Tower-blocks_it | [Link](https://huggingface.co/datasets/sapienzanlp/tower_blocks-v0.2_it) | 0 | 0 | 7,276 |
| Glaive-code-assistant | [Link](https://huggingface.co/datasets/glaiveai/glaive-code-assistant) | 100,000 | 0 | 0 |
| Alpaca-python | [Link](https://huggingface.co/datasets/Vezora/Tested-143k-Python-Alpaca) | 20,000 | 0 | 0 |
| WizardLM | [Link](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k) | 0 | 29,810 | 0 |
| LIMA | [Link](https://huggingface.co/datasets/GAIR/lima?row=0) | 0 | 1,000 | 0 |
| OPENORCA | [Link](https://huggingface.co/datasets/Open-Orca/OpenOrca) | 0 | 30,000 | 0 |
| Ultrachat | [Link](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 0 | 50,000 | 0 |
| MagpieMT | [Link](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) | 0 | 30,000 | 0 |
| Tulu-V2-Science | [Link](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | 0 | 7,000 | 0 |
| Bactrian-X | [Link](https://huggingface.co/datasets/MBZUAI/Bactrian-X) | 0 | 0 | 67,000 |
| Magpie (*Translated by us*) | - | 0 | 0 | 60,000 |
| Everyday-conversations (*Translated by us*) | - | 0 | 0 | 2,260 |
| Aya_datasets | [Link](https://huggingface.co/datasets/CohereForAI/aya_dataset) | 0 | 3,944 | 738 |
| alpaca-gpt4-it | [Link](https://huggingface.co/datasets/efederici/alpaca-gpt4-it) | 0 | 0 | 15,000 |
| capybara-claude-15k-ita | [Link](https://huggingface.co/datasets/efederici/capybara-claude-15k-ita) | 0 | 0 | 15,000 |
| Wildchat | [Link](https://huggingface.co/datasets/allenai/WildChat-1M) | 0 | 0 | 5,000 |
| GPT4_INST | [Link](https://huggingface.co/datasets/DeepMount00/GPT-4o-ITA-INSTRUCT) | 0 | 0 | 10,000 |
| Safety Italian | - | 0 | 0 | 21,000 |
| Handmade Italian | - | 0 | 0 | 2,000 |
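As an illustration only (the actual subsampling and prompt formatting were handled through Llama-Factory), individual sources from the table can be pulled and subsampled with the `datasets` library, e.g. mirroring the Dolly and UltraChat rows:
```python
from datasets import load_dataset

# Take all 15,011 Dolly samples and a random 50,000-sample subset of UltraChat,
# matching two rows of the table above (illustrative, not the SFT pipeline).
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ultrachat_subset = ultrachat.shuffle(seed=42).select(range(50_000))

print(len(dolly), len(ultrachat_subset))
```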
For more details please check [our tech report](https://nlp.uniroma1.it/minerva/blog#from-a-base-model-to-an-instruct-model).
### Online DPO Training
This model card is for our DPO model. Direct Preference Optimization (DPO) is a method that refines models based on user feedback, similar to Reinforcement Learning from Human Feedback (RLHF), but without the complexity of reinforcement learning. Online DPO further improves on this by allowing real-time adaptation during training, continuously refining the model with new feedback. To train this model, we used the [Hugging Face TRL](https://github.com/huggingface/trl) library with Online DPO, employing the [Skywork/Skywork-Reward-Llama-3.1-8B-v0.2](https://huggingface.co/Skywork/Skywork-Reward-Llama-3.1-8B-v0.2) model as the judge to evaluate and guide the optimization. For this stage we used only the prompts from HuggingFaceH4/ultrafeedback_binarized (English), efederici/evol-dpo-ita (Italian), and Babelscape/ALERT translated to Italian, together with additional manually curated data for safety.
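A heavily simplified sketch of this stage with TRL's `OnlineDPOTrainer` is shown below. It is an illustration under assumptions, not our training script: exact argument names vary across TRL versions, the policy and dataset identifiers are placeholders for the actual SFT checkpoint and the mixed prompt set described above, and all distributed-training details are omitted.
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
from trl import OnlineDPOConfig, OnlineDPOTrainer

policy_id = "sapienzanlp/Minerva-7B-base-v1.0"  # placeholder for the SFT checkpoint
reward_id = "Skywork/Skywork-Reward-Llama-3.1-8B-v0.2"

policy = AutoModelForCausalLM.from_pretrained(policy_id)
tokenizer = AutoTokenizer.from_pretrained(policy_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_id, num_labels=1)
reward_tokenizer = AutoTokenizer.from_pretrained(reward_id)

# Online DPO only needs prompts: completions are sampled on-policy during training
# and scored by the reward model. Here only the English prompt source is loaded.
prompts = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = OnlineDPOConfig(output_dir="minerva-7b-online-dpo", per_device_train_batch_size=2)
trainer = OnlineDPOTrainer(
    model=policy,
    reward_model=reward_model,
    args=args,
    train_dataset=prompts,
    processing_class=tokenizer,
    reward_processing_class=reward_tokenizer,
)
trainer.train()
```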
For more details please check [our tech report](https://nlp.uniroma1.it/minerva/blog#from-a-base-model-to-an-instruct-model).
## Model Evaluation
We assessed our model using the [LM-Evaluation-Harness](https://github.com/EleutherAI/lm-evaluation-harness) library, which serves as a comprehensive framework for testing generative language models across a wide range of evaluation tasks.
All the reported benchmark data was already present in the LM-Evaluation-Harness suite.
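For reference, a minimal example of launching an evaluation through the harness's Python API; the task names and few-shot settings here are illustrative, not the exact configuration we used.
```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=sapienzanlp/Minerva-7B-base-v1.0,dtype=bfloat16",
    tasks=["hellaswag", "arc_challenge"],  # illustrative task selection
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])
```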
_Scores will be made available at a later stage._
<!-- **Italian** Data: -->
<!-- | Task | Accuracy |
| --- | --- | -->
<!-- | [xcopa](https://huggingface.co/datasets/xcopa) (0-shot) | 0.694 |
| [Hellaswag](https://huggingface.co/datasets/alexandrainst/m_hellaswag) (5-shot) | 0.5293 |
| [Belebele](https://huggingface.co/datasets/facebook/belebele) (5-shot) | 0.2333 |
| [TruthfulQA MC 1](https://huggingface.co/datasets/alexandrainst/m_truthfulqa) (0-shot) | 0.2363 |
| [TruthfulQA MC 2](https://huggingface.co/datasets/alexandrainst/m_truthfulqa) (0-shot) | 0.3731 |
| [M MMLU](https://huggingface.co/datasets/alexandrainst/m_mmlu) (5-shot) | 0.2612 |
| [arc challenge](https://huggingface.co/datasets/alexandrainst/m_arc) (5-shot) | 0.3268 | -->
<!-- **English** Data: -->
<!-- | Task | Accuracy |
| --- | --- | -->
<!-- | [Hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) (5-shot) | 0.6168 |
| [piqa](https://huggingface.co/datasets/piqa) (5-shot) | 0.7535 |
| [sciq](https://huggingface.co/datasets/sciq) (5-shot) | 0.925 |
| [Belebele](https://huggingface.co/datasets/facebook/belebele) (5-shot) | 0.2278 |
| [TruthfulQA MC 1](https://huggingface.co/datasets/truthful_qa) (0-shot) | 0.2142 |
| [TruthfulQA MC 2](https://huggingface.co/datasets/truthful_qa) (0-shot) | 0.3643 |
| [M MMLU](https://huggingface.co/datasets/alexandrainst/m_mmlu) (5-shot) | 0.263 |
| [arc challenge](allenai/ai2_arc) (5-shot) | 0.3319 |
| [arc easy](allenai/ai2_arc) (5-shot) | 0.6540 | -->
<!-- ## Training Data
Minerva-7B-base-v1.0 is trained on 1.14T Italian tokens, 1.14T English tokens, and 200B code tokens.
The training data is a mixture of the following datasets:
| Dataset | Tokens | Language | Epochs |
| --- | --- | --- | --- |
| RedPajama-Data-V2 | 687,952,502,784 | Italian | 1.3 |
| CulturaX | 158,201,876,480 | Italian | 1.5 |
| Wikipedia | 1,265,135,616 | Italian | 1.0 |
| Gutenberg/Wikisource | 147,017,728 | Italian | 2.0 |
| EurLex | 1,647,013,888 | Italian | 1.0 |
| Gazzetta Ufficiale | 1,654,013,952| Italian | 1.0 |
| FineWeb | 1,076,406,624,256 | English | 1.0 |
| Wikipedia | 5,259,501,568 | English | 1.0 |
| ArXiv | 33,231,106,048 | English | 1.0 |
| Gutenberg | 6,947,893,248 | English | 1.0 |
| StackExchange | 22,069,268,480 | English | 1.0 |
| The Stack V2 | 200,754,900,992 | Code | 1.0 | -->
<!-- We have extracted some statistics on Italian (115B tokens) and English (210B tokens) documents from CulturaX on the selected sources:
*Proportion of number of tokens per domain (Italian)*
<img src="https://github.com/Andrew-Wyn/images/blob/master/minerva/top_25_url_tokens_proportion_culturax_it.png?raw=true" alt="italian-tok-counts" border="0" width="1800px">
*Proportion of number of tokens per domain (English)*
<img src="https://github.com/Andrew-Wyn/images/blob/master/minerva/top_25_url_tokens_proportion_culturax_en.png?raw=true" alt="english-tok-counts" border="0" width="1800px">
-->
## Tokenizer Fertility
The tokenizer fertility measures the average number of tokens produced per tokenized word.
A tokenizer with high fertility in a particular language segments words in that language into many pieces.
Fertility is closely tied to the model's inference speed in a given language,
as higher values mean longer token sequences to generate and thus slower inference.
**Fertility computed over a sample of CulturaX (CX) data and Wikipedia (Wp):**
| Model | Voc. Size | Fertility IT (CX) | Fertility EN (CX) | Fertility IT (Wp) | Fertility EN (Wp) |
| --- | --- | --- |--- | --- |--- |
| Mistral-7B-v0.1 | 32000 | 1.87 | 1.32 | 2.05 | 1.57 |
| gemma-7b | 256000 | 1.42 | 1.18 | 1.56 | 1.34 |
| Minerva-3B-base-v1.0 | 32768 | 1.39 | 1.32 | 1.66 | 1.59 |
| Minerva-7B-base-v1.0 | 51200 | 1.32 | 1.26 | 1.56 | 1.51 |
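As an illustration of the metric (not the exact evaluation setup used for the table above), fertility can be estimated by dividing the number of tokens the tokenizer produces by the number of whitespace-separated words in a text sample:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sapienzanlp/Minerva-7B-base-v1.0")

def fertility(texts: list[str]) -> float:
    """Average number of tokens per whitespace-separated word."""
    n_tokens = sum(len(tokenizer(t, add_special_tokens=False)["input_ids"]) for t in texts)
    n_words = sum(len(t.split()) for t in texts)
    return n_tokens / n_words

sample_it = [
    "La capitale dell'Italia è Roma.",
    "Minerva è una famiglia di modelli linguistici addestrati da zero sull'italiano.",
]
print(f"Fertility on the Italian sample: {fertility(sample_it):.2f}")
```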
<!-- ## Notice
Minerva-7B-base-v1.0 is a pretrained base model and, therefore, has no moderation mechanisms.
-->
## The Sapienza NLP Team
* **Riccardo Orlando:** data preprocessing, model training
* **Pere-Lluis Huguet Cabot:** data preprocessing, vocabulary, evaluation
* **Luca Moroni:** data curation, data analysis, downstream tasks, evaluation
* **Simone Conia:** data curation, evaluation, project supervision
* **Edoardo Barba:** data preprocessing, downstream tasks, project supervision
* **Roberto Navigli:** project coordinator
### Special thanks for their support
* Giuseppe Fiameni, Nvidia
* Sergio Orlandini, CINECA
## Acknowledgments
This work was funded by the PNRR MUR project [PE0000013-FAIR](https://fondazione-fair.it).
We acknowledge the [CINECA](https://www.cineca.it) award "IscB_medit" under the ISCRA initiative for the availability of high-performance computing resources and support.