
Quantizations of https://huggingface.co/tenyx/TenyxChat-7B-v1

From the original readme:

Usage

Our model uses a simple chat template based on OpenChat 3.5. Usage of the chat template with a Hugging Face generation example is shown below.

Chat Template (Jinja)

```jinja
{{ bos_token }}
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ 'User:' + message['content'] + eos_token }}
    {% elif message['role'] == 'system' %}
        {{ 'System:' + message['content'] + eos_token }}
    {% elif message['role'] == 'assistant' %}
        {{ 'Assistant:' + message['content'] + eos_token }}
    {% endif %}
    {% if loop.last and add_generation_prompt %}\n{{ 'Assistant:' }}{% endif %}\n
{% endfor %}
```
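As a quick sanity check (a sketch, not part of the original card), the template can also be rendered standalone with `jinja2` to inspect the prompt layout before involving the model. The `bos_token`/`eos_token` strings below are assumptions based on the tokens visible in the example output (`<s>`, `<|end_of_turn|>`); the authoritative values come from the model's tokenizer config.

```python
from jinja2 import Template

# The chat template from above, written as a single Python string.
# "\n" escapes become real newlines between turns.
chat_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}{{ 'User:' + message['content'] + eos_token }}"
    "{% elif message['role'] == 'system' %}{{ 'System:' + message['content'] + eos_token }}"
    "{% elif message['role'] == 'assistant' %}{{ 'Assistant:' + message['content'] + eos_token }}"
    "{% endif %}"
    "{% if loop.last and add_generation_prompt %}\n{{ 'Assistant:' }}{% endif %}\n"
    "{% endfor %}"
)

messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hello."},
]

prompt = Template(chat_template).render(
    bos_token="<s>",              # assumed special tokens; the real values
    eos_token="<|end_of_turn|>",  # come from the model's tokenizer config
    messages=messages,
    add_generation_prompt=True,
)
print(prompt)
```

With the assumed tokens, this prints a prompt of the same shape as the example output further down: each turn prefixed with its role, terminated by the end-of-turn token, and ending with a bare `Assistant:` cue for generation.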

Hugging Face example

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="tenyx/TenyxChat-7B-v1", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
    {"role": "user", "content": "Hi. I would like to make a hotel booking."},
]

# Render the chat template into a prompt string, then generate.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
print(outputs[0]["generated_text"])
```

Output

```
<s> System:You are a friendly chatbot who always responds in the style of a pirate.<|end_of_turn|>
User:Hi. I would like to make a hotel booking.<|end_of_turn|>
Assistant: Ahoy there me hearty! Arr, ye be lookin' fer a place to rest yer weary bones, eh?
Well then, let's set sail on this grand adventure and find ye a swell place to stay!

To begin, tell me the location ye be seekin' and the dates ye be lookin' to set sail.
And don't ye worry, me matey, I'll be sure to find ye a place that'll make ye feel like a king or queen on land!
```