---
language:
  - en
license: apache-2.0
datasets:
  - Intel/orca_dpo_pairs
model-index:
  - name: mistral-7b-dpo-merge-v1.1
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 72.53
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mncai/mistral-7b-dpo-merge-v1.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 88.15
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mncai/mistral-7b-dpo-merge-v1.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.83
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mncai/mistral-7b-dpo-merge-v1.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 68.48
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mncai/mistral-7b-dpo-merge-v1.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.32
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mncai/mistral-7b-dpo-merge-v1.1
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 70.89
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mncai/mistral-7b-dpo-merge-v1.1
          name: Open LLM Leaderboard
---

Model Card for mncai/mistral-7b-dpo-merge-v1.1

Introduction of MindsAndCompany

https://mnc.ai/

We create various AI models and develop solutions that can be applied to businesses. In generative AI, we are developing products such as a Code Assistant, a TOD Chatbot, and LLMOps, and we are working toward Enterprise AGI (Artificial General Intelligence).

Model Summary

Based on Mistral, instruction-tuned and aligned with DPO (Direct Preference Optimization).
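
The card does not include the training code; below is a minimal sketch of the standard DPO objective for reference. The function name, the default beta, and the log-probability inputs are illustrative assumptions, not the authors' code:

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # How much more the policy prefers the chosen response over the rejected one,
    # relative to the frozen reference model.
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    # DPO maximizes the log-sigmoid of the scaled margin gap.
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()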

A TIES merge of mncai/mistral-7b-dpo-v6, rwitz2/go-bruins-v2.1.1, ignos/LeoScorpius-GreenNode-Alpaca-7B-v1, and janai-hq/trinity-v1.

Details

TIES merge configuration:

models:
  - model: rwitz2/go-bruins-v2.1.1
    # no parameters necessary for base model
  - model: janai-hq/trinity-v1
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: ignos/LeoScorpius-GreenNode-Alpaca-7B-v1
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: mncai/mistral-7b-dpo-v6
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: rwitz2/go-bruins-v2.1.1
parameters:
  normalize: true
  int8_mask: true
dtype: float16
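
For intuition, here is a simplified sketch of the TIES procedure applied by a config like the one above: trim each weighted task vector to its highest-magnitude entries, elect a per-parameter sign, then average only the values that agree with it. This is an illustrative sketch, not the actual merge code; it assumes one flat density and weight scalar per model, whereas the lists in the config above (e.g. density: [1, 0.7, 0.1]) are gradients that vary across layers:

import torch

# Hypothetical helper: `base` and each entry of `finetuned` are state dicts
# with identical keys and shapes.
def ties_merge(base, finetuned, densities, weights):
    merged = {}
    for name, base_param in base.items():
        deltas = []
        for ft, density, weight in zip(finetuned, densities, weights):
            delta = (ft[name] - base_param) * weight          # weighted task vector
            k = max(1, int(density * delta.numel()))          # trim: keep top-`density` fraction
            thresh = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
            deltas.append(torch.where(delta.abs() >= thresh, delta, torch.zeros_like(delta)))
        stacked = torch.stack(deltas)
        sign = torch.sign(stacked.sum(dim=0))                 # elect a sign per parameter
        agree = (torch.sign(stacked) == sign).float()         # keep entries that agree with it
        summed = (stacked * agree).sum(dim=0)
        count = agree.sum(dim=0).clamp(min=1.0)
        merged[name] = base_param + summed / count            # mean of agreeing deltas, added to base
    return merged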

How to Use

Here is an example of how to use our model.

from transformers import AutoTokenizer
import transformers
import torch

hf_model = 'mncai/mistral-7b-dpo-merge-v1.1'
message = "<|user|>\nThere are two spheres with diameters 1 and 2. How many times do their volumes differ? Please explain as well.\n<|assistant|>\n"

# Build a text-generation pipeline (device_map='auto' requires the accelerate package).
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Warnings

Currently, models on the leaderboard are overfitted to it. This is inevitable: unlike Kaggle, where private scoring is followed by the end of the competition, here the scores are continuously public. Our own models illustrate this. On internal data evaluations they rank mncai/agiin-13.6B-v0.1 > mncai/agiin-11.1B-v0.1 > mncai/mistral-7b-dpo-v6, yet on the leaderboard mncai/mistral-7b-dpo-v6 has the highest score. When choosing a model from the Open LLM Leaderboard, it is best to evaluate it with your own private dataset that is not publicly available.

Contact

If you have any questions, please raise an issue or contact us at dwmyoung@mnc.ai.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|------:|
| Avg. | 74.53 |
| AI2 Reasoning Challenge (25-Shot) | 72.53 |
| HellaSwag (10-Shot) | 88.15 |
| MMLU (5-Shot) | 64.83 |
| TruthfulQA (0-shot) | 68.48 |
| Winogrande (5-shot) | 82.32 |
| GSM8k (5-shot) | 70.89 |
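
The Avg. row is the arithmetic mean of the six benchmark scores:

scores = [72.53, 88.15, 64.83, 68.48, 82.32, 70.89]
print(round(sum(scores) / len(scores), 2))  # 74.53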