
Chatting and prompt format

#1
by kt999 - opened

Can we use this model for chatting?

If yes, how can we do it?
Do we need to add a stop string such as <|endoftext|>, as in the following?
Where do we put it?

f"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
{response}<|endoftext|>
### Input:
{input}
### Response:
"""
Lightblue KK.

It hasn't been trained for chatting, but this might work! But I'd suggest that you use one of our newer chat models (Karasu or Qarasu) instead, as they have been trained explicitly for chatting and just perform a lot better than this model.

Here is our 14B parameter model:
https://huggingface.co/lightblue/qarasu-14B-chat-plus-unleashed

And our 7B parameter model:
https://huggingface.co/lightblue/karasu-7B-chat-plus-unleashed
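
A minimal sketch of chatting with one of these models, assuming its tokenizer ships a chat template so that apply_chat_template inserts the correct special tokens and stop strings for you:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightblue/karasu-7B-chat-plus-unleashed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "イタリアの首都はどこですか？"},
]

# Build the prompt from the chat history using the model's own template.
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(prompt_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][prompt_ids.shape[1]:], skip_special_tokens=True))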

We'll try that. Thank you!
