|
--- |
|
base_model: teknium/OpenHermes-2-Mistral-7B |
|
tags: |
|
- mistral |
|
- instruct |
|
- finetune |
|
- chatml |
|
- gpt4 |
|
- synthetic data |
|
- distillation |
|
- license:apache-2.0 |
|
- autotrain_compatible |
|
- endpoints_compatible |
|
- text-generation-inference |
|
- quantized |
|
- 4-bit |
|
- AWQ |
|
- transformers |
|
- pytorch |
|
model-index: |
|
- name: OpenHermes-2-Mistral-7B |
|
results: [] |
|
license: apache-2.0 |
|
language: |
|
- en |
|
datasets: |
|
- teknium/OpenHermes-2.5 |
|
library_name: transformers |
|
model_creator: teknium |
|
model_name: OpenHermes-2-Mistral-7B |
|
model_type: mistral |
|
pipeline_tag: text-generation |
|
inference: false |
|
prompt_template: '<|im_start|>system |
|
|
|
{system_message}<|im_end|> |
|
|
|
<|im_start|>user |
|
|
|
{prompt}<|im_end|> |
|
|
|
<|im_start|>assistant |
|
|
|
' |
|
quantized_by: Suparious |
|
--- |
|
# OpenHermes 2.5 - Mistral 7B AWQ |
|
|
|
- Model creator: [teknium](https://huggingface.co/teknium) |
|
- Original model: [OpenHermes-2-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2-Mistral-7B) |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ox7zGoygsJQFFV3rLT4v9.png) |
|
|
|
## Model Author's Description |
|
|
|
OpenHermes 2.5 Mistral 7B is a state-of-the-art Mistral fine-tune and a continuation of the OpenHermes 2 model, trained on additional code datasets.
|
|
|
Potentially the most interesting finding from training on a good ratio of code instruction data (estimated at around 7-14% of the total dataset) was that it boosted several non-code benchmarks, including TruthfulQA, AGIEval, and the GPT4All suite. It did, however, reduce the BigBench benchmark score, but the overall net gain is significant.
|
|
|
The code it trained on also improved its HumanEval score (benchmarking done by the Glaive team) from **43% @ Pass 1** with OpenHermes 2 to **50.7% @ Pass 1** with OpenHermes 2.5.
|
|
|
OpenHermes was trained on 1,000,000 entries of primarily GPT-4 generated data, as well as other high-quality data from open datasets across the AI landscape. [More details soon]
|
|
|
These public datasets were extensively filtered, and all formats were converted to ShareGPT, which was then further transformed by axolotl to use ChatML.
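
For illustration, here is a minimal sketch of that format conversion on a hypothetical ShareGPT-style record (this is not the actual axolotl pipeline):

```python
# Hypothetical ShareGPT-style record: a list of {"from", "value"} turns.
sharegpt_record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "What is AWQ?"},
        {"from": "gpt", "value": "AWQ is a low-bit weight quantization method."},
    ]
}

# Map ShareGPT roles to ChatML roles and render each turn as
# <|im_start|>{role}\n{content}<|im_end|>.
ROLE_MAP = {"system": "system", "human": "user", "gpt": "assistant"}

def sharegpt_to_chatml(record):
    turns = [
        f"<|im_start|>{ROLE_MAP[t['from']]}\n{t['value']}<|im_end|>"
        for t in record["conversations"]
    ]
    return "\n".join(turns)

print(sharegpt_to_chatml(sharegpt_record))
```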
|
|
|
Huge thank you to [GlaiveAI](https://twitter.com/glaiveai) and [a16z](https://twitter.com/a16z) for compute access and for sponsoring my work, and to all the dataset creators and other people whose work has contributed to this project!
|
|
|
Follow all my updates in ML and AI on Twitter: https://twitter.com/Teknium1 |
|
|
|
Support me on Github Sponsors: https://github.com/sponsors/teknium1 |
|
|
|
**NEW**: Chat with Hermes on LMSys' Chat Website! https://chat.lmsys.org/?single&model=openhermes-2.5-mistral-7b |
|
|
|
## How to use |
|
|
|
### Install the necessary packages |
|
|
|
```bash |
|
pip install --upgrade autoawq autoawq-kernels |
|
``` |
|
|
|
### Example Python code |
|
|
|
```python |
|
from awq import AutoAWQForCausalLM |
|
from transformers import AutoTokenizer, TextStreamer |
|
|
|
model_path = "solidrust/OpenHermes-2-Mistral-7B-AWQ" |
|
system_message = "You are Senzu, incarnated as a powerful AI." |
|
|
|
# Load model |
|
model = AutoAWQForCausalLM.from_quantized(model_path, |
|
fuse_layers=True) |
|
tokenizer = AutoTokenizer.from_pretrained(model_path, |
|
trust_remote_code=True) |
|
streamer = TextStreamer(tokenizer, |
|
skip_prompt=True, |
|
skip_special_tokens=True) |
|
|
|
# Convert prompt to tokens |
|
prompt_template = """\ |
|
<|im_start|>system |
|
{system_message}<|im_end|> |
|
<|im_start|>user |
|
{prompt}<|im_end|> |
|
<|im_start|>assistant""" |
|
|
|
prompt = "You're standing on the surface of the Earth. "\ |
|
"You walk one mile south, one mile west and one mile north. "\ |
|
"You end up exactly where you started. Where are you?" |
|
|
|
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
|
return_tensors='pt').input_ids.cuda() |
|
|
|
# Generate output |
|
generation_output = model.generate(tokens, |
|
streamer=streamer, |
|
max_new_tokens=512) |
|
|
|
``` |
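
Alternatively, if the repo's tokenizer ships the ChatML chat template (an assumption worth verifying), Transformers can build the same prompt for you:

```python
# Assumes tokenizer.chat_template is set to ChatML in this repo; verify before relying on it.
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors="pt").cuda()

generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```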
|
|
|
### About AWQ |
|
|
|
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
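
For reference, producing an AWQ checkpoint yourself with AutoAWQ looks roughly like the sketch below; the quantization settings shown are the library's common 4-bit defaults, not necessarily the exact settings used for this repo:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "teknium/OpenHermes-2-Mistral-7B"
quant_path = "OpenHermes-2-Mistral-7B-AWQ"

# Common 4-bit AWQ settings (group size 128, zero-point, GEMM kernels).
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Run calibration and quantize the weights, then save the quantized model.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```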
|
|
|
AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only. macOS users should use GGUF models instead.
|
|
|
It is supported by: |
|
|
|
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ |
|
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, for support of all model types (a minimal usage sketch follows this list)
|
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) |
|
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers |
|
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code |
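
As an example of the vLLM path mentioned above, here is a minimal offline-inference sketch; `quantization="awq"` is vLLM's documented way to load AWQ checkpoints, and the sampling settings are illustrative only:

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint; quantization="awq" tells vLLM to use the 4-bit weights.
llm = LLM(model="solidrust/OpenHermes-2-Mistral-7B-AWQ", quantization="awq")

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# ChatML-formatted prompt, matching the template documented below.
prompt = ("<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
          "<|im_start|>user\nWhat is AWQ?<|im_end|>\n"
          "<|im_start|>assistant\n")

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```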
|
|
|
## Prompt template: ChatML |
|
|
|
```plaintext |
|
<|im_start|>system |
|
{system_message}<|im_end|> |
|
<|im_start|>user |
|
{prompt}<|im_end|> |
|
<|im_start|>assistant |
|
``` |
|
|