alpayariyak committed
Commit
22edf13
1 Parent(s): f450854

Update README.md

Files changed (1):
  1. README.md (+6, -3)
README.md CHANGED
```diff
@@ -68,12 +68,15 @@ pinned: false
     <br>#1 Open-source model on MT-bench scoring 7.81, outperforming 70B models
   </span>
 </a>
-<div align="center" style="display: flex; justify-content: center; align-items: center; "'>
-  <img src="https://github.com/imoneoi/openchat/raw/master/assets/openchat.png" style="width:45%; margin-right: 2%;">
-  <img src="https://github.com/imoneoi/openchat/raw/master/assets/openchat_grok.png" style="width: 47%;">
+<div align="center" style="justify-content: center; align-items: center;">
+  <img src="https://github.com/alpayariyak/openchat/blob/master/assets/Untitled%20design-17.png?raw=true" style="width: 100%;">
 </div>
 </p>
 
+<h1 style="vertical-align: middle;">
+  <img src="https://github.com/alpayariyak/openchat/blob/master/logo_new-removebg-preview.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 0px; margin-top: 0px; margin-bottom: 0px;"/>About OpenChat
+</h1>
+
 - OpenChat is an innovative library of **open-source language models**, fine-tuned with [**C-RLFT**](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning.
 - Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with `ChatGPT`, even with a `7B` model which can be run on a **consumer GPU (e.g. RTX 3090)**.
 - Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
```
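The README bullets above note that the 7B model can be served from a single consumer GPU. As a minimal sketch of what that looks like with Hugging Face `transformers`: the model id `openchat/openchat_3.5`, the bfloat16 setting, and the presence of a built-in chat template are assumptions for illustration, not something this commit specifies.

```python
# Minimal sketch: load and query an OpenChat 7B model with transformers.
# The model id and chat-template availability are assumptions, not part of
# this commit; adjust to the checkpoint you actually use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openchat/openchat_3.5"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~14 GB of weights for a 7B model
    device_map="auto",           # place layers on the available GPU(s)
)

# Assumes the model repo ships a chat template in tokenizer_config.json.
messages = [{"role": "user", "content": "Explain C-RLFT in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In bfloat16, 7B parameters take roughly 14 GB of weight memory, which is why a 24 GB card such as the RTX 3090 mentioned in the README can hold the model with room left for activations.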