AIEIR committed on
Commit fae3bb3 · verified · 1 Parent(s): f5ffc5c
Files changed (1)
  1. README.md +51 -0
README.md CHANGED
@@ -76,10 +76,61 @@ Github : {}
 - **GPT-4o**: 6.38
 - **Eir AI-8B**: **7.11**
 
+
+ # Prompt Template
+
+ This model uses the Llama 3.1 Instruct prompt template:
+
+ ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
+ ```
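+
+ The string above does not need to be assembled by hand; `tokenizer.apply_chat_template` renders it from a message list. A minimal sketch, assuming the tokenizer carries the standard Llama 3.1 chat template (the base-model repo id is used here purely for illustration):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ # Illustrative tokenizer source; substitute this model's own repository id.
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "Who are you?"},
+ ]
+
+ # Render the prompt string in the format shown above, without tokenizing yet.
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ print(prompt)
+ ```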
+
+ # Example
+
+
+ # How to use
+
+ ```python
+ # Use a pipeline as a high-level helper
+ from transformers import pipeline
+
+ messages = [
+     {"role": "user", "content": "Who are you?"},
+ ]
+ pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.3-legalkit-8b")
+ pipe(messages)
+
+ # Load model directly
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.3-legalkit-8b")
+ model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.3-legalkit-8b")
+ ```
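+
+ Once the tokenizer and model are loaded directly, a reply can be generated by pairing them with the chat template shown earlier. A minimal sketch continuing from the block above (the sampling values are illustrative, not settings taken from this card):
+
+ ```python
+ import torch
+
+ # Build the prompt with the chat template and generate a reply.
+ messages = [{"role": "user", "content": "Who are you?"}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ with torch.no_grad():
+     output_ids = model.generate(
+         input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9
+     )
+
+ # Decode only the newly generated tokens, skipping the prompt.
+ print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```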
+
+
 Llama-3.1-EIRAI-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
 * [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
 * [Llama-3.1-8B-instruction5_r256_e4-slerp-0.5](https://huggingface.co/Llama-3.1-8B-instruction5_r256_e4-slerp-0.5)
 
+
 ## 🧩 Configuration
 
 \```yaml
 slices: