
Quantization made by Richard Erkhov.

Github

Discord

Request more models

RedPajama-INCITE-Chat-Instruct-3B-V1 - GGUF

Original model description:

language:
- en
license: apache-2.0
library_name: transformers
datasets:
- togethercomputer/RedPajama-Data-1T
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- Muennighoff/natural-instructions
- Muennighoff/P3
pipeline_tag: text-generation
model-index:
- name: RedPajama-INCITE-Chat-Instruct-3B-V1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 42.58
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 67.48
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 33.62
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard


This is an experimental merge of models RedPajama-INCITE-Chat-3B-V1 and RedPajama-INCITE-Instruct-3B-V1.
This model adapts to different prompt templates, but the following template is recommended:

HUMAN: {prompt}
ASSISTANT:

Feel free to change HUMAN or ASSISTANT; the output will not change much.
GGML versions are available here (note that they are only compatible with koboldcpp).
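
To make the template concrete, here is a minimal usage sketch with llama-cpp-python and one of the GGUF quants from this repo; the file name, context size, and sampling settings are illustrative placeholders rather than values taken from this card.

```python
from llama_cpp import Llama

# Load a downloaded GGUF quant (the file name below is a placeholder;
# use whichever bit width you downloaded from this repo).
llm = Llama(model_path="RedPajama-INCITE-Chat-Instruct-3B-V1.Q4_K_M.gguf", n_ctx=2048)

# Format the request with the recommended template from this card.
prompt = "HUMAN: Explain what a quantized model is.\nASSISTANT:"

# Stop when the model starts a new HUMAN turn so it answers only once.
out = llm(prompt, max_tokens=128, stop=["HUMAN:"])
print(out["choices"][0]["text"].strip())
```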

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 39.23 |
| ARC (25-shot) | 42.58 |
| HellaSwag (10-shot) | 67.48 |
| MMLU (5-shot) | 25.99 |
| TruthfulQA (0-shot) | 33.62 |
| Winogrande (5-shot) | 64.80 |
| GSM8K (5-shot) | 0.91 |
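
For reference, the Avg. row matches the unweighted mean of the six benchmark scores; the short check below (not part of the original card) reproduces it.

```python
# Leaderboard scores copied from the table above.
scores = [42.58, 67.48, 25.99, 33.62, 64.80, 0.91]

# Avg. appears to be the simple unweighted mean of the six benchmarks.
avg = sum(scores) / len(scores)
print(f"{avg:.2f}")  # 39.23
```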

Format: GGUF
Model size: 2.78B params
Architecture: gptneox
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
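
To pull a specific quant programmatically, a sketch with huggingface_hub is shown below; the repo_id and filename are hypothetical placeholders, so check the repository's file list for the exact names.

```python
from huggingface_hub import hf_hub_download

# repo_id and filename are placeholders, not names confirmed by this card.
path = hf_hub_download(
    repo_id="RichardErkhov/RedPajama-INCITE-Chat-Instruct-3B-V1-gguf",
    filename="RedPajama-INCITE-Chat-Instruct-3B-V1.Q4_K_M.gguf",
)
print(path)  # local path of the cached GGUF file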
