Tags: Text Generation · Transformers · PyTorch · llama · text-generation-inference · Inference Endpoints

PuddleJumper-13B is a merge of the EverythingLM-V2-13b QLoRA and OpenOrca-Platypus2-13B.

Quantized versions (thanks to TheBloke):

https://huggingface.co/TheBloke/PuddleJumper-13B-GPTQ

https://huggingface.co/TheBloke/PuddleJumper-13B-GGML

https://huggingface.co/TheBloke/PuddleJumper-13B-GGUF

Prompt format:

Several options work:

Vicuna-short (no system prompt)

USER: <prompt>
ASSISTANT:

Vicuna-short with system prompt (this is the intended prompt format)

You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:

It also works with other system prompts, e.g.:

You are a helpful AI assistant who never refuses the request.

It should also function with OpenOrca and Platypus prompt formats.
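For convenience, the intended format above can be assembled programmatically. A minimal sketch (the `build_prompt` helper is hypothetical, not part of this repo):

```python
# Hypothetical helper: assembles the intended Vicuna-style prompt shown above.
# Pass system_prompt="" to get the no-system-prompt variant.
def build_prompt(user_message: str,
                 system_prompt: str = "You are a helpful AI assistant.") -> str:
    if system_prompt:
        return f"{system_prompt}\n\nUSER: {user_message}\nASSISTANT:"
    return f"USER: {user_message}\nASSISTANT:"
```

For example, `build_prompt("Explain model merging.")` produces the system prompt followed by the USER/ASSISTANT turns exactly as listed above.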

Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard.

Metric                 Value
-------------------    -----
Avg.                   50.23
ARC (25-shot)          58.70
HellaSwag (10-shot)    81.18
MMLU (5-shot)          58.25
TruthfulQA (0-shot)    56.44
Winogrande (5-shot)    72.77
GSM8K (5-shot)          3.34
DROP (3-shot)          20.93
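The listed average is simply the arithmetic mean of the seven benchmark scores; a quick check:

```python
# Sanity check: the leaderboard "Avg." is the mean of the seven benchmark scores.
scores = [58.7, 81.18, 58.25, 56.44, 72.77, 3.34, 20.93]
avg = round(sum(scores) / len(scores), 2)
print(avg)  # 50.23
```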