---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-100K
---
<img src="https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-100K/resolve/main/hermes-instruct.png">
# Hermes-Instruct-7B-100K
[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) fine-tuned on 100K rows of [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes), formatted as Alpaca.
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and the Alpaca format work, so use either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer defaults to Mistral-style.
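As a quick check, rendering a message list with `apply_chat_template` should produce the Mistral-Instruct tags shown above. This is a minimal sketch; the example message is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lodrick-the-lafted/Hermes-Instruct-7B-100K")
messages = [{"role": "user", "content": "Hello!"}]

# Should render roughly to "<s>[INST] Hello! [/INST]" (the Mistral-style format above).
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```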
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Hermes-Instruct-7B-100K"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in bfloat16 via the text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

# Build the prompt with the tokenizer's chat template (Mistral-style by default).
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
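Since Alpaca-style prompts also work, you can skip the chat template and build the prompt by hand. A sketch reusing the pipeline from above; the system prompt text is only an example:

```python
# Alpaca-style prompt built manually (system prompt text is illustrative).
sys_prompt = "You are a helpful assistant."
instruction = "Give me a cooking recipe for an apple pie."
alpaca_prompt = f"{sys_prompt}\n### Instruction:\n{instruction}\n### Response:\n"

outputs = pipeline(alpaca_prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```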