Text Generation
Transformers
PyTorch
mistral
openchat
C-RLFT
conversational
Inference Endpoints
text-generation-inference
imone committed
Commit 2d39b2e
1 Parent(s): 37ca7ce

Update model card metadata

Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -1,5 +1,19 @@
 ---
 license: apache-2.0
+tags:
+- openchat
+- mistral
+- C-RLFT
+datasets:
+- openchat/openchat_sharegpt4_dataset
+- Open-Orca/OpenOrca
+- LDJnr/LessWrong-Amplify-Instruct
+- LDJnr/Pure-Dove
+- LDJnr/Verified-Camel
+- tiedong/goat
+- glaiveai/glaive-code-assistant
+library_name: transformers
+pipeline_tag: text-generation
 ---
 
 # OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
@@ -174,4 +188,4 @@ We extend our heartfelt gratitude to AutoMeta and caesus from Alignment Lab AI,
 
 Special thanks go to Changling Liu from GPT Desk Pte. Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions.
 
-Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
+Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.
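
The new `library_name: transformers` and `pipeline_tag: text-generation` fields tell the Hub which library and task to associate with this model. As a minimal sketch of what that metadata implies for downstream use, the snippet below loads the model through the standard Transformers text-generation pipeline; the repo id shown is a placeholder assumption, since the commit page does not name it.

```python
# Minimal sketch of what the new card metadata implies for downstream use:
# `library_name: transformers`    -> load with the Transformers library
# `pipeline_tag: text-generation` -> use the standard text-generation pipeline
# NOTE: the repo id below is a placeholder assumption, not taken from this commit.
from transformers import pipeline

generator = pipeline(
    task="text-generation",         # mirrors `pipeline_tag` in the front matter
    model="openchat/openchat_3.5",  # placeholder repo id for illustration
)

result = generator(
    "Hello! How can language models be trained on mixed-quality data?",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```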