---
title: README
emoji: πŸš€
colorFrom: blue
colorTo: blue
sdk: static
pinned: false
---

Advancing Open-source Language Models with Mixed-Quality Data

Online Demo | GitHub | Paper | Discord


OPENCHAT 3.5
First 7B Model that Achieves ChatGPT-Level Performance
#1 Open-Source Model on MT-Bench, scoring 7.81 and outperforming 70B models

# About OpenChat

- OpenChat is an innovative library of **open-source language models**, fine-tuned with [**C-RLFT**](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning (a conceptual sketch appears at the end of this README).
- Our models learn from mixed-quality data without preference labels, delivering performance on par with `ChatGPT` even from a `7B` model that runs on a **consumer GPU (e.g., an RTX 3090)**; a usage sketch also appears at the end of this README.
- Despite this simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.

# πŸ“° News

- [2023/12/10] We released the [OpenChat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) model, with a 15-point improvement in coding.
- [2023/11/01] We released the [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5) model, surpassing ChatGPT on various benchmarks πŸ”₯.
- [2023/09/21] We released our paper [OpenChat: Advancing Open-source Language Models with Mixed-Quality Data](https://arxiv.org/pdf/2309.11235.pdf).

# πŸ“Š Benchmarks

| Model              | # Params | Average  | MT-Bench     | AGIEval  | BBH MC   | TruthfulQA    | MMLU         | HumanEval       | BBH CoT     | GSM8K        |
|--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------|
| OpenChat-3.5       | **7B**   | **61.6** | 7.81         | **47.4** | **47.6** | **59.1**      | 64.3         | **55.5**        | 63.5        | **77.3**     |
| ChatGPT (March)*   | ?        | 61.5     | **7.94**     | 47.1     | **47.6** | 57.7          | **67.3**     | 48.1            | **70.1**    | 74.9         |
|                    |          |          |              |          |          |               |              |                 |             |              |
| OpenHermes 2.5     | 7B       | 59.3     | 7.54         | 46.5     | 49.4     | 57.5          | 63.8         | 48.2            | 59.9        | 73.5         |
| OpenOrca Mistral   | 7B       | 52.7     | 6.86         | 42.9     | 49.4     | 45.9          | 59.3         | 38.4            | 58.1        | 59.1         |
| Zephyr-Ξ²^          | 7B       | 34.6     | 7.34         | 39.0     | 40.6     | 40.8          | 39.8         | 22.0            | 16.0        | 5.1          |
| Mistral**          | 7B       | -        | 6.84         | 38.0     | 39.0     | -             | 60.1         | 30.5            | -           | 52.2         |
| Open-source SOTA** | 13B-70B  | 61.4     | 7.71         | 41.7     | 49.7     | 62.3          | 63.7         | 73.2            | 41.4        | 82.3         |
|                    |          |          | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B |

## 𝕏 Comparison with [X.AI Grok](https://x.ai/)

|              | License     | # Params | Average  | MMLU | HumanEval | MATH     | GSM8K    |
|--------------|-------------|----------|----------|------|-----------|----------|----------|
| OpenChat 3.5 | Apache-2.0  | 7B       | **56.4** | 64.3 | 55.5      | **28.6** | **77.3** |
| Grok-0       | Proprietary | 33B      | 44.5     | 65.7 | 39.7      | 15.7     | 56.8     |
| Grok-1       | Proprietary | ?        | 55.8     | 73.0 | 63.2      | 23.9     | 62.9     |

# πŸ’Œ Contact

We are a student team from Tsinghua University working on OpenChat, a project that requires additional computing power or LLM API keys for further development. If you are interested in our project and would like to offer support, please feel free to reach out to us:

* Wang Guan [imonenext at gmail dot com]
* Cheng Sijie [csj23 at mails dot tsinghua dot edu dot cn]

We look forward to hearing from you and collaborating on this exciting project!
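# πŸ”§ C-RLFT at a Glance

The snippet below is a minimal, illustrative sketch of the C-RLFT idea described in the paper: different data sources act as coarse-grained rewards, the policy is conditioned on the source class, and the objective becomes a reward-weighted cross-entropy. The reward values, condition prefixes, and the `c_rlft_loss` helper are our assumptions for illustration, not the released training code.

```python
# Illustrative sketch of C-RLFT: data sources serve as coarse-grained
# rewards, and the policy is conditioned on the source class.
import torch.nn.functional as F

# Assumed coarse-grained rewards: expert data (e.g. GPT-4 conversations) is
# weighted higher than sub-optimal data (e.g. GPT-3.5 conversations).
SOURCE_REWARD = {"expert": 1.0, "sub_optimal": 0.1}

def c_rlft_loss(model, tokenizer, prompt, response, source):
    """Reward-weighted cross-entropy on a class-conditioned prompt."""
    # Condition the policy on the data class via a distinct prompt prefix
    # (hypothetical prefixes; the released models use their own template).
    prefix = "Expert User:" if source == "expert" else "User:"
    text = f"{prefix} {prompt} Assistant: {response}"
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    # Standard next-token negative log-likelihood over the sequence...
    logits = model(ids).logits[:, :-1, :]
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1)
    )
    # ...scaled by the source-level reward (reward-weighted regression).
    return SOURCE_REWARD[source] * nll
```

At inference time, conditioning on the high-quality class prompts the model to imitate its expert data, which is why this works without per-example preference labels.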
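# πŸ“ Usage Sketch

A minimal sketch for running OpenChat-3.5 locally with Hugging Face Transformers, assuming the `openchat/openchat_3.5` checkpoint and the "GPT4 Correct" conversation template documented on its model card; the generation settings are illustrative.

```python
# Minimal inference sketch; in bf16 the 7B model fits on a single 24 GB
# consumer GPU such as an RTX 3090.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openchat/openchat_3.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# OpenChat's "GPT4 Correct" conversation template (see the model card).
prompt = "GPT4 Correct User: How do transformers work?<|end_of_turn|>GPT4 Correct Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated assistant turn.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```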