
Quantization by Richard Erkhov.

Github | Discord | Request more models

LogoS-7Bx2-MoE-13B-v0.2 - bnb 8bits
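For reference, a minimal sketch of loading the model in 8-bit with bitsandbytes through transformers. The repo id shown is the original model's, taken from the leaderboard links below; loading this pre-quantized repo directly should also work, and VRAM requirements are an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1"  # original repo; see links below

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bnb 8-bit
    device_map="auto",  # spread layers across available devices
)

prompt = "Explain mixture-of-experts models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```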

Original model description:

language:
- en
- es
license: apache-2.0
tags:
- moe
- merge
base_model:
- yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
- TomGrc/FusionNet_7Bx2_MoE_14B
model-index:
- name: LogoS-7Bx2-MoE-13B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 74.49
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.07
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 74.57
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 88.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 71.65
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
      name: Open LLM Leaderboard

LogoS-7Bx2-MoE-13B-v0.1

Model built by @RubielLabarta using the SLERP merge method. The model is released for research purposes only; commercial use is not allowed.
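For intuition, a minimal sketch of SLERP (spherical linear interpolation) between two weight tensors, in the spirit of common merge-tool implementations; the interpolation factor t and applying it tensor-by-tensor are illustrative assumptions, not the exact merge recipe.

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Example: merge one layer's weights from two donor checkpoints at t=0.5.
w_merged = slerp(torch.randn(4096, 4096), torch.randn(4096, 4096), t=0.5)
```

Compared with plain averaging, SLERP follows the great-circle path between the two weight vectors, which better preserves their geometry when the donors point in different directions.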

LogoS is an experiment with the Mixture-of-Experts (MoE) method, which can significantly improve on the performance of the original models. The model has 12.9B parameters.
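For intuition about the "7Bx2" layout (two 7B experts totaling ~12.9B parameters), a minimal sketch of sparse MoE routing over two experts; the hidden sizes and top-k choice are illustrative assumptions, not the model's actual configuration.

```python
import torch
import torch.nn as nn

class TwoExpertMoE(nn.Module):
    """Toy MoE feed-forward block: a router picks and weights two experts."""

    def __init__(self, d_model: int = 4096, d_ff: int = 14336, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, 2, bias=False)  # router over 2 experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(2)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Route each token to its top-k experts, weighted by softmaxed gate scores.
        scores = self.gate(x)                          # (batch, seq, 2)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)        # tokens routed to expert e
                out = out + mask * weights[..., k : k + 1] * expert(x)
        return out
```

Only the feed-forward blocks are duplicated per expert; attention and embeddings are shared, which is why two 7B experts combine to ~12.9B rather than 14B parameters.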

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 77.14 |
| AI2 Reasoning Challenge (25-Shot) | 74.49 |
| HellaSwag (10-Shot)               | 89.07 |
| MMLU (5-Shot)                     | 64.74 |
| TruthfulQA (0-shot)               | 74.57 |
| Winogrande (5-shot)               | 88.32 |
| GSM8k (5-shot)                    | 71.65 |
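These scores come from the Open LLM Leaderboard's runs of EleutherAI's lm-evaluation-harness. A hedged sketch of re-running a single task with the harness's v0.4-style Python API follows; the exact harness version, prompt format, and batch size used by the leaderboard are assumptions.

```python
# A sketch, not the leaderboard's exact pipeline: reproducing the 25-shot
# ARC-Challenge number with lm-evaluation-harness (pip install lm-eval).
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",  # the transformers backend
    model_args="pretrained=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1,dtype=float16",
    tasks=["arc_challenge"],  # AI2 Reasoning Challenge
    num_fewshot=25,           # matches the 25-shot setting in the table
    batch_size=8,             # assumption; tune to available VRAM
)
print(results["results"]["arc_challenge"])
```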
Safetensors
Model size: 12.9B params
Tensor types: F32, FP16, I8