
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Pandora-13B-v1 - GGUF

Original model description:

license: apache-2.0
language:
- en

WARNING

This is a model file for evaluation only. Please use the model here:


Jan - Discord

Model Description

This model merges two of the best-performing 7B models on the OpenLLM Leaderboard using the passthrough merge method:

  1. viethq188/LeoScorpius-7B-Chat-DPO
  2. GreenNode/GreenNodeLM-7B-v1olet

The YAML config file for this merge:

slices:
  - sources:
    - model: "viethq188/LeoScorpius-7B-Chat-DPO"
      layer_range: [0, 24]
  - sources:
    - model: "GreenNode/GreenNodeLM-7B-v1olet"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
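Passthrough merging simply stacks the listed slices in order: layers 0–23 of LeoScorpius followed by layers 8–31 of GreenNodeLM, giving 48 transformer layers in the merged model versus 32 in each 7B parent, which is where the ~13B size comes from. A quick sanity check of the slice arithmetic, assuming half-open layer ranges as mergekit uses:

```python
# Layer ranges from the merge config above (half-open, as in mergekit)
slices = [(0, 24), (8, 32)]

# Passthrough stacks the slices, so merged depth is the sum of slice lengths
merged_layers = sum(end - start for start, end in slices)
print(merged_layers)  # 48 layers, vs. 32 in each 7B parent
```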

Prompt template

  • ChatML
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
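The template above can be assembled programmatically; `chatml_prompt` below is a hypothetical helper for illustration, not part of any Jan API:

```python
def chatml_prompt(system_message: str, prompt: str) -> str:
    """Format a system message and user prompt in ChatML."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```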

Run this model

You can run this model using Jan on Mac, Windows, or Linux.

Jan is an open-source ChatGPT alternative that is:

💻 100% offline on your machine: Your conversations remain confidential and visible only to you.

🗂️ An Open File Format: Conversations and model settings stay on your computer and can be exported or deleted at any time.

🌐 OpenAI Compatible: Local server on port 1337 with OpenAI-compatible endpoints.

🌍 Open Source & Free: We build in public; check out our GitHub.


About Jan

Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.

Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.

Jan Model Merger

This is a test project for merging models.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

Metric Value
Avg. ?
ARC (25-shot) ?
HellaSwag (10-shot) ?
MMLU (5-shot) ?
TruthfulQA (0-shot) ?
Winogrande (5-shot) ?
GSM8K (5-shot) ?

Acknowledgement

Format: GGUF
Model size: 12.5B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
