pansophic committed on
Commit 66ef431
1 Parent(s): 7e47ac5

Update README.md

Files changed (1)
  1. README.md +15 -14
README.md CHANGED
@@ -63,6 +63,20 @@ In AlpacaEval, Rocket 🦝 achieves a near 80% win rate, coupled with an average
  | **Rocket** 🦝 | **79.75** | **1.42** | **1242** |


+ ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pansophic__rocket-3B)
+
+ | Metric |Value|
+ |---------------------------------|----:|
+ |Avg. |55.77|
+ |AI2 Reasoning Challenge (25-Shot)|50.60|
+ |HellaSwag (10-Shot) |76.69|
+ |MMLU (5-Shot) |47.10|
+ |TruthfulQA (0-shot) |55.82|
+ |Winogrande (5-shot) |67.96|
+ |GSM8k (5-shot) |36.47|
+
+
  ## Intended uses & limitations
  Initially, we fine-tuned the model using a dataset created by merging and curating multiple datasets, available on the HuggingFace Hub. This dataset will be released to the public soon. We further enhanced the model's performance using DPO, selecting samples from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) and [BAAI/JudgeLM-100K](https://huggingface.co/datasets/BAAI/JudgeLM-100K) datasets. The outcome is a highly effective chat model with a 3 billion parameter scale.
 
@@ -128,17 +142,4 @@ The pretraining dataset is comprised of a filtered mixture of open-source large-

  **The model name is inspired by the small but formidable character from 'Guardians of the Galaxy'. Similar to its namesake, this model, with its 3 billion parameters, showcases remarkable efficiency and effectiveness, challenging larger models despite its smaller size."*

- *Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md) and [Tulu-2-7B](https://huggingface.co/allenai/tulu-2-7b/blob/main/README.md)*
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pansophic__rocket-3B)
-
- | Metric |Value|
- |---------------------------------|----:|
- |Avg. |55.77|
- |AI2 Reasoning Challenge (25-Shot)|50.60|
- |HellaSwag (10-Shot) |76.69|
- |MMLU (5-Shot) |47.10|
- |TruthfulQA (0-shot) |55.82|
- |Winogrande (5-shot) |67.96|
- |GSM8k (5-shot) |36.47|
-
+ *Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md) and [Tulu-2-7B](https://huggingface.co/allenai/tulu-2-7b/blob/main/README.md)*
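
As a companion to the card's intended-uses paragraph (a 3B DPO-tuned chat model), here is a minimal chat-inference sketch, assuming the `pansophic/rocket-3B` repo id implied by the leaderboard details link and a tokenizer that bundles a chat template:

```python
# Minimal sketch: load the 3B chat model the card describes and run one chat turn.
# Assumptions: repo id "pansophic/rocket-3B" (implied by the leaderboard details link)
# and a tokenizer that ships a chat template; adjust dtype/device for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pansophic/rocket-3B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # may be unnecessary depending on the base architecture
)

messages = [{"role": "user", "content": "Give me a two-sentence summary of DPO."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```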