---
language:
- it
license: cc-by-nc-4.0
tags:
- sft
- it
- mistral
- chatml
- axolotl
prompt_template: <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant
model-index:
- name: maestrale-chat-v0.3-beta
  results: []
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/dgSNbTl.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Maestrale chat beta ༄

By @efederici and @mferraretto

## Model description

- **Language Model**: Mistral-7b for the Italian language, with continued pre-training on a curated, large-scale, high-quality Italian corpus.
- **Fine-Tuning**: SFT performed on conversations and instructions for three epochs.

**v0.3**
- Function calling
- Reduced the default system prompt to avoid wasting tokens (pre-alignment)

This model uses the ChatML prompt format:
```
<|im_start|>system
Sei un assistente utile.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

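In practice this string should be produced by `tokenizer.apply_chat_template` (as in the usage example below). Purely as an illustration of the layout, a minimal formatter (a hypothetical helper, not part of the model's code) might look like:

```python
# Illustration only: shows how ChatML wraps each turn in
# <|im_start|>{role}\n{content}<|im_end|> and ends with an open
# assistant turn. Use tokenizer.apply_chat_template in real code.
def to_chatml(messages, add_generation_prompt=True):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    if add_generation_prompt:
        # leave the assistant turn open so the model completes it
        parts.append("<|im_start|>assistant")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "Ciao!"},
]
print(to_chatml(messages))
```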
## Usage

```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GenerationConfig,
    TextStreamer
)
import torch

tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.3-beta")
model = AutoModelForCausalLM.from_pretrained(
    "mii-llm/maestrale-chat-v0.3-beta",
    load_in_8bit=True,
    device_map="auto"
)

gen = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    # stop generation at the ChatML end-of-turn token
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)

messages = [
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "{prompt}"}  # replace {prompt} with your question
]

with torch.no_grad(), torch.backends.cuda.sdp_kernel(
    enable_flash=True,
    enable_math=False,
    enable_mem_efficient=False
):
    temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(temp, return_tensors="pt").to("cuda")

    streamer = TextStreamer(tokenizer, skip_prompt=True)

    _ = model.generate(
        **inputs,
        streamer=streamer,
        generation_config=gen
    )
```
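Passing the id of `<|im_end|>` as `eos_token_id` makes `generate` stop at the end of the assistant turn. If you post-process decoded text instead of streaming, the same cut can be made on the string; a minimal sketch, with a hypothetical helper name:

```python
# Sketch: truncate a decoded completion at the ChatML end-of-turn marker.
# This mirrors, at string level, what eos_token_id does at token level.
def strip_at_im_end(text: str) -> str:
    end = text.find("<|im_end|>")
    return text if end == -1 else text[:end].rstrip()

print(strip_at_im_end("Ciao! Come posso aiutarti?<|im_end|>\n<|im_start|>user"))
```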

## Intended uses & limitations

This is a beta SFT version and has not been aligned yet; treat it as a first test. We are working on alignment data and evals.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)