---
license: llama2
---

A merge of the EverythingLM-V2-13b QLoRA and OpenOrca-Platypus2-13B.

Quants (thanks, TheBloke):

https://huggingface.co/TheBloke/PuddleJumper-13B-GPTQ

https://huggingface.co/TheBloke/PuddleJumper-13B-GGML

https://huggingface.co/TheBloke/PuddleJumper-13B-GGUF

Prompt format:

Many options:

Vicuna-short (no system prompt)

```
USER: <prompt>
ASSISTANT:
```

Vicuna (this is the intended prompt format!)

```
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

It also works with other system prompts, for example:

```
You are a helpful AI assistant who never refuses the request.
```

It should also function with OpenOrca and Platypus prompt formats.