Commit 9020c07 (parent: a584e3f) by MaziyarPanahi: Update README.md

README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 base_model: meta-llama/Meta-Llama-3-70B-Instruct
 datasets:
 - mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
-model_name:
+model_name: calme-2.1-llama3-70b
 pipeline_tag: text-generation
 license_name: llama3
 license_link: LICENSE
@@ -24,7 +24,7 @@ inference: false
 model_creator: MaziyarPanahi
 quantized_by: MaziyarPanahi
 model-index:
-- name:
+- name: calme-2.1-llama3-70b
   results:
   - task:
       type: text-generation
@@ -41,7 +41,7 @@ model-index:
       value: 71.67
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -57,7 +57,7 @@ model-index:
       value: 85.83
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -74,7 +74,7 @@ model-index:
       value: 80.12
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -90,7 +90,7 @@ model-index:
     - type: mc2
       value: 62.11
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -107,7 +107,7 @@ model-index:
       value: 82.87
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-llama3-70b
       name: Open LLM Leaderboard
   - task:
       type: text-generation
@@ -124,23 +124,23 @@ model-index:
       value: 86.05
       name: accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/calme-2.1-llama3-70b
       name: Open LLM Leaderboard
 ---
 
 <img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 
-# MaziyarPanahi/
+# MaziyarPanahi/calme-2.1-llama3-70b
 
 This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-70B-Instruct` model.
 
 # ⚡ Quantized GGUF
 
-All GGUF models are available here: [MaziyarPanahi/
+All GGUF models are available here: [MaziyarPanahi/calme-2.1-llama3-70b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.1-llama3-70b-GGUF)
 
 # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.1-llama3-70b)
 
 | Metric |Value|
 |---------------------------------|----:|
@@ -173,7 +173,7 @@ This model uses `ChatML` prompt template:
 
 # How to use
 
-You can use this model by using `MaziyarPanahi/
+You can use this model by using `MaziyarPanahi/calme-2.1-llama3-70b` as the model name in Hugging Face's
 transformers library.
 
 ```python
@@ -181,7 +181,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
 from transformers import pipeline
 import torch
 
-model_id = "MaziyarPanahi/
+model_id = "MaziyarPanahi/calme-2.1-llama3-70b"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_id,
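The README's "How to use" snippet is cut off by the diff context above. A minimal sketch of how that snippet typically continues, assuming the standard `transformers` chat-template and text-generation pipeline APIs; the dtype, `device_map`, and sampling parameters below are illustrative assumptions, not taken from the commit:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model_id = "MaziyarPanahi/calme-2.1-llama3-70b"

# A 70B model needs substantial GPU memory; device_map="auto" shards it across
# available devices. The dtype/device choices here are assumptions.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The card states the model uses the ChatML prompt template; the tokenizer's
# built-in chat template is expected to render it.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO fine-tuning in one paragraph."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = pipe(
    prompt,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    return_full_text=False,
)
print(outputs[0]["generated_text"])
```

For lighter-weight local inference, the GGUF repository linked in the card is the intended route (llama.cpp-compatible runtimes); the sketch above targets the full-precision weights.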