# OpenChat V2 x OpenOrca Preview 2
This is a preview version of OpenChat V2, trained for 2 epochs (5 epochs in total) on the full (4.5M) OpenOrca dataset.
## Important Notice: Beta Release for Limited Testing Purposes Only
This release is intended solely for a small group of beta testers and is not an official release or preview. We caution against publicizing or sharing this version as it may contain bugs, errors, or incomplete features that could negatively impact performance. We are actively working on improving the model and preparing it for an official release.
Note: We have become aware that this beta release has been publicly shared on YouTube. Please be advised that the version tested on YouTube may have used parameters that could significantly harm performance. We will provide instructions on how to apply proper prompting format and sampling parameters prior to any official release to ensure optimal performance.
## AGIEval Preliminary Results
**OpenChat V2 OpenOrca Preview**

| name                             | accuracy     | unmatched |
|----------------------------------|--------------|-----------|
| aqua-rat.zero-shot               | 0.232283     | 0.0       |
| logiqa-en.zero-shot              | 0.370200     | 0.0       |
| lsat-ar.zero-shot                | 0.230435     | 0.0       |
| lsat-lr.zero-shot                | 0.441176     | 0.0       |
| lsat-rc.zero-shot                | 0.568773     | 0.0       |
| sat-en-without-passage.zero-shot | 0.393204     | 0.0       |
| sat-en.zero-shot                 | 0.747573     | 0.0       |
| sat-math.zero-shot               | 0.295455     | 0.0       |
| **Average**                      | **0.409887** | **0.0**   |
For comparison, the AGIEval average reported in the Orca paper is 0.417.
## Serving
This model is compatible with the OpenChat V2 vLLM OpenAI API server and can be used as a drop-in replacement for the OpenChat V2 weights.
```bash
python -m ochat.serving.openai_api_server --model_type openchat_v2 --model openchat/openchat_v2_openorca_preview --engine-use-ray --worker-use-ray
```
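Once the server is running, it can be queried over its OpenAI-compatible chat completions endpoint. Below is a minimal client sketch using Python's `requests`; the host, port, and `model` value are assumptions here, so check the server's startup log and your deployment for the actual values:

```python
import requests

# Assumed address; substitute the host/port your server reports on startup.
API_URL = "http://localhost:18888/v1/chat/completions"

payload = {
    "model": "openchat_v2",  # assumption: match whatever model name your server expects
    "messages": [{"role": "user", "content": "How are you today?"}],
}

response = requests.post(API_URL, json=payload)
print(response.json()["choices"][0]["message"]["content"])
```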
## Conversation Template
The conversation template involves concatenating tokens, so it cannot be expressed in plain text.

Besides the base model vocabulary, an end-of-turn token `<|end_of_turn|>` is added.

Here is an example of the single-round conversation template:
```python
def tokenize_single_input(tokenizer, prompt):
    # OpenChat V2 role prefixes and special tokens
    human_prefix = "User:"
    prefix = "Assistant GPT4:"
    eot_token = "<|end_of_turn|>"
    bos_token = "<s>"

    def _tokenize(text):
        return tokenizer.convert_tokens_to_ids(tokenizer._tokenize(text))

    def _tokenize_special(special_name):
        return tokenizer.convert_tokens_to_ids(special_name)

    # Layout: <s> User: {prompt} <|end_of_turn|> Assistant GPT4:
    return [_tokenize_special(bos_token)] + _tokenize(human_prefix) + _tokenize(prompt) + [_tokenize_special(eot_token)] + \
           _tokenize(prefix)
```
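For reference, a minimal usage sketch, assuming the tokenizer shipped with this repository (which includes the added `<|end_of_turn|>` token):

```python
from transformers import AutoTokenizer

# Load the tokenizer for this model; any Llama tokenizer with <|end_of_turn|>
# added as a special token should behave the same way.
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_v2_openorca_preview")

input_ids = tokenize_single_input(tokenizer, "How are you today?")
print(input_ids)                     # token ids for: <s> User: {prompt} <|end_of_turn|> Assistant GPT4:
print(tokenizer.decode(input_ids))  # roughly: <s> User: How are you today?<|end_of_turn|> Assistant GPT4:
```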
To explore conditional language modeling, you can also set `prefix = "Assistant GPT3:"` to mimic ChatGPT behavior (this may cause performance degradation).
Hint: in BPE, `tokenize(A) + tokenize(B)` does not always equal `tokenize(A + B)`.
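This is why the template above concatenates token ids rather than strings. A quick illustration using the public `tokenize` method (the checkpoint below is just an example choice; any BPE/SentencePiece tokenizer shows the effect):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("openchat/openchat_v2_openorca_preview")

a, b = "User:", "Hello"
separate = tok.tokenize(a) + tok.tokenize(b)
joined = tok.tokenize(a + b)
print(separate == joined)  # often False: token boundaries shift when the strings are joined
```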