---
language:
- en
tags:
- llama
license: other
---

# OpenChat: Advancing Open-source Language Models with Imperfect Data

The OpenChat v2 family is inspired by offline reinforcement learning and comprises two variants: OpenChat-v2, trained with conditional behavior cloning, and OpenChat-v2-w, trained with weighted behavior cloning.

 - **[OpenChat-v2-w](https://huggingface.co/openchat/openchat_v2_w)**: trained on ~80k cleaned ShareGPT conversations with conditioning and weighted loss, based on LLaMA-13B with a context length of 2048.
   - Achieves **50.9%** win-rate over ChatGPT on MT-bench.
   - Achieves **79.4%** win-rate over ChatGPT on Vicuna-bench.
   - Achieves **87.1%** win-rate over text-davinci-003 on AlpacaEval.
 - **[OpenChat-v2](https://huggingface.co/openchat/openchat_v2)**: trained on ~80k cleaned ShareGPT conversations with conditioning only, based on LLaMA-13B with a context length of 2048.
   - Achieves **48.1%** win-rate over ChatGPT on MT-bench.
   - Achieves **80.6%** win-rate over ChatGPT on Vicuna-bench.
   - Achieves **85.0%** win-rate over text-davinci-003 on AlpacaEval.

## Code and Inference Server

We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository.
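
For reference, here is a minimal sketch of querying the server through that API with Python's `requests` library. The address, port, and model name below are illustrative assumptions; check the repository README for the exact launch command and values.

```python
# Minimal sketch: query the OpenChat inference server via its
# ChatCompletions-compatible endpoint. The URL, port, and model name
# are assumptions for illustration; see the GitHub repository for the
# actual launch command and values.
import requests

response = requests.post(
    "http://localhost:18888/v1/chat/completions",  # assumed address/port
    json={
        "model": "openchat_v2_w",  # assumed model name
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```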

## Web UI

OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions.

## Conversation Template

The conversation template **involves concatenating tokens** and cannot be expressed as plain text.

In addition to the base model's vocabulary, an end-of-turn token `<|end_of_turn|>` is added.

Here is an example of the single-round conversation template:

```python
def tokenize_single_input(tokenizer, prompt):
    # OpenChat V2 template: <s> User: {prompt} <|end_of_turn|> Assistant GPT4:
    human_prefix = "User:"
    prefix = "Assistant GPT4:"
    eot_token = "<|end_of_turn|>"
    bos_token = "<s>"

    def _tokenize(text):
        # Tokenize plain text without adding special tokens
        return tokenizer.convert_tokens_to_ids(tokenizer._tokenize(text))

    def _tokenize_special(special_name):
        # Look up the ID of a single special token
        return tokenizer.convert_tokens_to_ids(special_name)

    return [_tokenize_special(bos_token)] + _tokenize(human_prefix) + _tokenize(prompt) + \
           [_tokenize_special(eot_token)] + _tokenize(prefix)
```

To explore the model's conditional behavior, you can also set `prefix = "Assistant GPT3:"` to mimic ChatGPT (note that this may degrade performance).

*Hint: In BPE, `tokenize(A) + tokenize(B)` does not always equal `tokenize(A + B)`.*
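
As a usage sketch, the template function above can be combined with the `transformers` library to run generation. The checkpoint name and generation parameters here are illustrative, and loading the full 13B model requires substantial memory.

```python
# Minimal usage sketch, assuming the `transformers` and `torch` libraries.
# Generation parameters are illustrative, not recommended settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openchat/openchat_v2_w"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build input IDs with the template function defined above
input_ids = tokenize_single_input(tokenizer, "What is the capital of France?")

outputs = model.generate(
    torch.tensor([input_ids]),
    max_new_tokens=128,
    # Stop at the end-of-turn token added to the vocabulary
    eos_token_id=tokenizer.convert_tokens_to_ids("<|end_of_turn|>"),
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][len(input_ids):], skip_special_tokens=True))
```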

## Limitations

**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

 - Complex reasoning
 - Mathematical and arithmetic tasks
 - Programming and coding challenges

**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, a phenomenon known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.