---
language:
- en
license: other
library_name: transformers
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
base_model: Qwen/Qwen2.5-72B
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
model_name: calme-2.1-qwen2.5-72b
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
---

<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# MaziyarPanahi/calme-2.1-qwen2.5-72b

This model is a fine-tuned version of the powerful `Qwen/Qwen2.5-72B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.

## Use Cases

This model is suitable for a wide range of applications, including but not limited to:

- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support

# ⚡ Quantized GGUF

Coming soon.

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Coming soon.

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

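If you need to build a prompt by hand (for example in a framework without a built-in ChatML formatter), the template above can be assembled directly. The sketch below is illustrative only: the helper name `format_chatml` is my own, not part of the model or `transformers`. When using `transformers`, the tokenizer's `apply_chat_template` method handles this formatting for you.

```python
def format_chatml(system: str, user: str) -> str:
    # Assemble the ChatML prompt exactly as shown in the template above,
    # ending with the assistant header so the model continues from there.
    return (
        f"<|im_start|>system\n{system}\n<|im_end|>\n"
        f"<|im_start|>user\n{user}\n<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml("You are a helpful assistant.", "Who are you?")
print(prompt)
```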
# How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.1-qwen2.5-72b")
pipe(messages)

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.1-qwen2.5-72b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.1-qwen2.5-72b")
```

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.