MaziyarPanahi committed on
Commit 89eeeeb
1 Parent(s): 0b5f225

Create README.md

README.md ADDED (+86 lines)
---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
language:
- fr
- en
pipeline_tag: text-generation
tags:
- chat
- qwen
- qwen2.5
- finetune
- french
- english
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: Qwen/Qwen2.5-3B
model_name: calme-3.1-instruct-3b
datasets:
- MaziyarPanahi/french_instruct_sharegpt
---

<img src="./calme_3.png" alt="Calme-3 Models" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>

# MaziyarPanahi/calme-3.1-instruct-3b

This model is a fine-tuned iteration of `Qwen/Qwen2.5-3B`, trained to strengthen its general-purpose instruction-following capabilities in French and English.

# ⚡ Quantized GGUF

All GGUF quantizations are available here: [MaziyarPanahi/calme-3.1-instruct-3b-GGUF](https://huggingface.co/MaziyarPanahi/calme-3.1-instruct-3b-GGUF)

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Leaderboard 2 results coming soon!

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
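For illustration, the template above can be assembled with a small pure-Python helper (a sketch; `build_chatml` is a hypothetical name, not part of this repository or of `transformers`):

```python
def build_chatml(system: str, user: str) -> str:
    """Assemble a ChatML prompt matching the template above,
    leaving the final assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system}\n<|im_end|>\n"
        f"<|im_start|>user\n{user}\n<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Example: a bilingual prompt in the format the model was trained on.
prompt = build_chatml("You are a helpful assistant.", "Bonjour, qui es-tu ?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` from `transformers` produces this formatting automatically for models whose tokenizer ships a ChatML chat template.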

# How to use

```python
# Option 1: use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MaziyarPanahi/calme-3.1-instruct-3b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
print(pipe(messages))

# Option 2: load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-3.1-instruct-3b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-3.1-instruct-3b")
```

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.