---
license: apache-2.0
datasets:
- wikimedia/wikipedia
- oscar
language:
- it
pipeline_tag: text-generation
---
<img src="https://hoodie-creator.s3.eu-west-1.amazonaws.com/15be78c6-original.png" alt="zefiro" border="0" width="400px">

# Model Card for zefiro-base-7b-ITA
*Last Update: 20/02/2024*<br>

<!-- Provide a quick summary of what the model is/does. -->

Zefiro base is a continually pretrained model for the Italian language, based on [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) and trained on a subset of the Italian portions of the Oscar and Wikipedia datasets.

## Model Details

Zefiro base is a continually pre-trained language model, started from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and adapted to the Italian language.
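
The card does not publish the training script, so as an illustration only: a minimal continual-pretraining loop with the 🤗 `Trainer` might look like the sketch below. The dataset config, sequence length, and hyperparameters are assumptions, not the actual recipe.

```python
# Illustrative continual pretraining: standard causal-LM fine-tuning of the
# Mistral base checkpoint on Italian text. Hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Italian Wikipedia, as published on the Hub (the actual subset used is unspecified).
dataset = load_dataset("wikimedia/wikipedia", "20231101.it", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="zefiro-cpt",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1, bf16=True, logging_steps=100),
    train_dataset=tokenized,
    # mlm=False keeps the causal (next-token) objective used in pretraining
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```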

### Model description

- **Model type:** A 7B-parameter GPT-like model, continually pre-trained from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
- **Language(s) (NLP):** Primarily Italian
- **License:** Apache 2.0
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Developed by:** [giux78](https://alessandroercolani.webflow.io/)
- **Funded by:** [Business Operating System](https://www.businessos.xyz)

## Evaluations

| Model | Arc-c | HellaS | MMLU | AVG |
| --- | --- | --- | --- | --- |
| Mixtral 8x7B | 52.8 | 75.1 | 70.9 | 66.27 |
| Llama 2 70B | 49.4 | 70.9 | 65.1 | 61.80 |
| zefiro-dpo-7b | 52.69 | 67.09 | 50.8 | 56.86 |
| **zefiro-base-7b** | **51.07** | **63.47** | **52.97** | **55.84** |
| zefiro-sft-7b | 50.98 | 62.71 | 51.96 | 55.22 |
| Llama 1 34B | 42.9 | 65.4 | 49.0 | 52.43 |
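
The card does not state the exact evaluation setup. If these figures come from the Okapi Italian translations of the benchmarks, a run along these lines with EleutherAI's lm-evaluation-harness (v0.4+) could approximate them; the task names and settings below are assumptions.

```python
# Hypothetical reproduction with lm-evaluation-harness (pip install lm-eval).
# Task names assume the Okapi Italian benchmark translations.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mii-community/zefiro-7b-base-ITA,dtype=bfloat16",
    tasks=["arc_it", "hellaswag_it", "m_mmlu_it"],
)
print(results["results"])  # per-task accuracies; AVG is their plain mean
```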

## Intended uses & limitations

Here's how you can run the model using 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mii-community/zefiro-7b-base-ITA"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")

# Italian safety-oriented system prompt. In English, roughly: "You are a helpful,
# respectful and honest assistant. Always answer as helpfully as possible while
# being safe. Answers must not include harmful, unethical, racist, sexist, toxic,
# dangerous or illegal content. If a question makes no sense, explain why instead
# of answering incorrectly. If you don't know the answer, don't share false information."
sys_prompt = "Sei un assistente disponibile, rispettoso e onesto. " \
             "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \
             "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
             "Assicurati che le tue risposte siano socialmente imparziali e positive. " \
             "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
             "Se non conosci la risposta a una domanda, non condividere informazioni false."


def generate_text(sys_prompt, user_prompt):
    # If the tokenizer's chat template rejects a 'system' role, prepend the
    # system prompt to the user message instead.
    messages = [{'content': sys_prompt, 'role': 'system'},
                {'content': user_prompt, 'role': 'user'}]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]


# "What do you think about Italian politics?"
generate_text(sys_prompt, 'cosa ne pensi della politica italiana?')
```
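
For GPUs with less memory, the same checkpoint can be loaded quantized. This is a convenience sketch, not part of the original card; it assumes `bitsandbytes` is installed.

```python
# Optional 4-bit loading via bitsandbytes (pip install bitsandbytes accelerate).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True,
                                  bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "mii-community/zefiro-7b-base-ITA",
    quantization_config=quant_config,
    device_map="auto",  # dispatched across devices by accelerate
)
```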

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Zefiro-7b-base-ITA has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) are also unknown; it is likely to have included a mix of web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
We used a subset of the Italian portions of [Oscar](https://huggingface.co/datasets/oscar) and [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) as training data.
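
A sketch of loading these corpora from the Hub, assuming the public config names; the exact training subset is not specified, and depending on your `datasets` version the Oscar loading script may require `trust_remote_code=True`.

```python
from datasets import load_dataset

# Italian Wikipedia dump, as published under wikimedia/wikipedia.
wiki_it = load_dataset("wikimedia/wikipedia", "20231101.it", split="train")

# Italian Oscar; streaming avoids downloading the full corpus up front.
oscar_it = load_dataset("oscar", "unshuffled_deduplicated_it",
                        split="train", streaming=True)
```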

#### Summary
zefiro-base-7b-ITA is a continually pre-trained version of mistral-7b for the Italian language.

## Citation

```bibtex
@misc{tunstall2023zephyr,
      title={Zephyr: Direct Distillation of LM Alignment},
      author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
      year={2023},
      eprint={2310.16944},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Model Card Authors

[giux78](https://huggingface.co/giux78)

## Model Card Contact

ale.ercolani@gmail.com