alpayariyak committed on
Commit
afcb675
•
1 Parent(s): db48fe4

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -81,13 +81,13 @@ pinned: false
 - Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with `ChatGPT`, even with a `7B` model which can be run on a **consumer GPU (e.g. RTX 3090)**.
 - Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
 
-# ✨ News
+# 📰 News
 
 - [2023/11/01] We released the [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5) model, surpassing ChatGPT on various benchmarks 🔥.
 
 - [2023/09/21] We released our paper [OpenChat: Advancing Open-source Language Models with Mixed-Quality Data](https://arxiv.org/pdf/2309.11235.pdf).
 
-# 🏷️ Benchmarks
+# 📊 Benchmarks
 
 | Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K |
 |--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------|
@@ -102,7 +102,7 @@ pinned: false
 | | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B |
 
 
-## 🎇 Comparison with [X.AI Grok](https://x.ai/)
+## 𝕏 Comparison with [X.AI Grok](https://x.ai/)
 
 | | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
 |--------------|-------------|---------|----------|------|-----------|----------|----------|