---
license: apache-2.0
datasets:
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
---
<img src="https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B/resolve/main/platyboros.png">
# Platyboros-Instruct-7B
[Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) fine-tuned on the [jondurbin/airoboros-3.2](https://huggingface.co/datasets/jondurbin/airoboros-3.2) and [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) datasets, in Alpaca format.
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and the Alpaca format work, so use either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer's default chat template is Alpaca this time around.
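
Filled in by hand, an Alpaca prompt looks like this (a minimal sketch; the system prompt and instruction strings are placeholders):

```python
# Build an Alpaca-format prompt by hand; both strings are illustrative placeholders.
sys_prompt = "You are a helpful assistant."
instruction = "Give me a cooking recipe for an apple pie."
prompt = f"{sys_prompt}\n### Instruction:\n{instruction}\n### Response:\n"
```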
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Platyboros-Instruct-7B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

# The tokenizer's chat template renders the messages into the Alpaca prompt format.
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
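
Since the Mistral-Instruct tags also work, you can skip the chat template and pass a tagged prompt string directly (a minimal sketch continuing from the snippet above; the prompt text is a placeholder):

```python
# `pipeline` is the text-generation pipeline created above.
# The <s> BOS token is added by the tokenizer automatically, so it is omitted here.
prompt = "[INST] You are a helpful assistant. Give me a cooking recipe for an apple pie. [/INST]"
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```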